PostgreSQL Database Version 13.4 to MySQL Database Version 8.0.20 Migration Using the SQLines Tool
PostgreSQL to MySQL
------------------------
This document walks through migrating a PostgreSQL database to a MySQL database using the SQLines tool.
Environment Details :-
---------------------------
Server 1 == Source Database == PostgreSQL Database Version == 13.4
Server 2 == Target Database == MySQL Database Version == 8.0.20
Server 3 == Jump host == Where you will install SQLines and connect to the Source DB and Target DB.
I have considered 3 EC2 machines from AWS [ RHEL 8.x ]:
postgres-source-server == 172.31.30.141 [ Private IP address ] = EC2 Machine = RHEL 8.4
mysql-target-server == 172.31.17.136 [ Private IP address ] = EC2 Machine = RHEL 8.4
jump-host == 172.31.27.241 [ Private IP address ] = EC2 Machine = RHEL 8.4
Step - 1 Install PostgreSQL Database in the Source Server
Step - 2 Install MySQL Database in the Target Server
Step - 3 Install / Set up the SQLines software on the Jump host server
Step - 4 Load sample data into the Source Database, i.e., the PostgreSQL Database
Step - 5 Create a Super User / Remote user for the PostgreSQL and MySQL Database Servers
Step - 6 Moving / Generating DDLs of PostgreSQL through the AWS Schema Conversion Tool / MySQL Workbench Tool
Step - 7 Moving / Generating DDLs of PostgreSQL through the Full Converter Tool / MySQL Workbench Tool, and Workbench issues and solutions
Step - 8 Final DDL and schema definitions gathered from the Workbench Tool
Step - 9 Take a backup of DDLs for all databases in PostgreSQL [ to be on the safer side ]
Step - 10 Most important queries while doing the migration from PostgreSQL to MySQL
Step - 11 Configure client packages for PostgreSQL and MySQL to connect from the jump host to the Source and Target DB servers
Step - 12 Prepare configuration files for the PostgreSQL and MySQL Databases from the SQLines configuration files on the jump host
Step - 13 Prepare the row count script for the Source Database and Target Database on the jump host [ PostgreSQL --> MySQL ]
Step - 14 Prepare the truncate-table statements for the Target Database [ MySQL ]
Step - 15 Run the Create Schema / Create DDL objects [ Tables, Views, SPs, Triggers, Functions, etc. ] in the Target Database [ MySQL ]
Step - 16 Start the data migration from PostgreSQL to MySQL Database by using the SQLines Tool
Step - 17 Stored procedure for table row counts in the MySQL Database
Step - 18 Final validation of row counts of all tables and objects between the PostgreSQL and MySQL Databases
----------------------------------------------------------------
Step - 1 Install PostgreSQL Database in the Source Server
----------------------------------------------------------------
https://www.postgresql.org/download/linux/redhat/
(i) Install PostgreSQL Database version 13.4 on the source PostgreSQL server by following the link above.
[root@postgres-source-server ~]# hostname
postgres-source-server
[root@postgres-source-server ~]# hostname -I
172.31.30.141
[root@postgres-source-server ~]# cat /etc/redhat-release
Red Hat Enterprise Linux release 8.4 (Ootpa)
[root@postgres-source-server ~]# sudo dnf install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm
[root@postgres-source-server ~]# sudo dnf -qy module disable postgresql
[root@postgres-source-server ~]# sudo dnf install -y postgresql13-server
[root@postgres-source-server ~]# sudo /usr/pgsql-13/bin/postgresql-13-setup initdb
Initializing database ... OK
[root@postgres-source-server ~]# sudo systemctl enable postgresql-13
Created symlink /etc/systemd/system/multi-user.target.wants/postgresql-13.service → /usr/lib/systemd/system/postgresql-13.service.
[root@postgres-source-server ~]# sudo systemctl start postgresql-13
[root@postgres-source-server ~]# sudo systemctl status postgresql-13
● postgresql-13.service - PostgreSQL 13 database server
Loaded: loaded (/usr/lib/systemd/system/postgresql-13.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2021-08-27 00:54:56 UTC; 6s ago
Docs: https://www.postgresql.org/docs/13/static/
Process: 56297 ExecStartPre=/usr/pgsql-13/bin/postgresql-13-check-db-dir ${PGDATA} (code=exited, status=0/SUCCESS)
Main PID: 56302 (postmaster)
Tasks: 8 (limit: 11272)
Memory: 16.8M
CGroup: /system.slice/postgresql-13.service
├─56302 /usr/pgsql-13/bin/postmaster -D /var/lib/pgsql/13/data/
├─56304 postgres: logger
├─56306 postgres: checkpointer
├─56307 postgres: background writer
├─56308 postgres: walwriter
├─56309 postgres: autovacuum launcher
├─56310 postgres: stats collector
└─56311 postgres: logical replication launcher
Aug 27 00:54:56 postgres-source-server systemd[1]: Starting PostgreSQL 13 database server...
Aug 27 00:54:56 postgres-source-server postmaster[56302]: 2021-08-27 00:54:56.652 UTC [56302] LOG: redirecting log output to logging collector process
Aug 27 00:54:56 postgres-source-server postmaster[56302]: 2021-08-27 00:54:56.652 UTC [56302] HINT: Future log output will appear in directory "log".
Aug 27 00:54:56 postgres-source-server systemd[1]: Started PostgreSQL 13 database server.
[root@postgres-source-server ~]# ps -ef | grep -i postgres
postgres 56302 1 0 00:54 ? 00:00:00 /usr/pgsql-13/bin/postmaster -D /var/lib/pgsql/13/data/
postgres 56304 56302 0 00:54 ? 00:00:00 postgres: logger
postgres 56306 56302 0 00:54 ? 00:00:00 postgres: checkpointer
postgres 56307 56302 0 00:54 ? 00:00:00 postgres: background writer
postgres 56308 56302 0 00:54 ? 00:00:00 postgres: walwriter
postgres 56309 56302 0 00:54 ? 00:00:00 postgres: autovacuum launcher
postgres 56310 56302 0 00:54 ? 00:00:00 postgres: stats collector
postgres 56311 56302 0 00:54 ? 00:00:00 postgres: logical replication launcher
root 56370 1324 0 00:58 pts/0 00:00:00 grep --color=auto -i postgres
[root@postgres-source-server ~]# cd /var/lib/pgsql/13/data/
[root@postgres-source-server data]# pwd
/var/lib/pgsql/13/data
[root@postgres-source-server data]# ls -ltr
drwx------. 2 postgres postgres 6 Aug 27 00:54 pg_notify
drwx------. 2 postgres postgres 6 Aug 27 00:54 pg_dynshmem
drwx------. 2 postgres postgres 6 Aug 27 00:54 pg_commit_ts
-rw-------. 1 postgres postgres 3 Aug 27 00:54 PG_VERSION
drwx------. 2 postgres postgres 6 Aug 27 00:54 pg_twophase
drwx------. 2 postgres postgres 6 Aug 27 00:54 pg_tblspc
drwx------. 2 postgres postgres 6 Aug 27 00:54 pg_stat
drwx------. 2 postgres postgres 6 Aug 27 00:54 pg_snapshots
drwx------. 2 postgres postgres 6 Aug 27 00:54 pg_serial
drwx------. 2 postgres postgres 6 Aug 27 00:54 pg_replslot
drwx------. 4 postgres postgres 36 Aug 27 00:54 pg_multixact
-rw-------. 1 postgres postgres 28080 Aug 27 00:54 postgresql.conf
-rw-------. 1 postgres postgres 88 Aug 27 00:54 postgresql.auto.conf
-rw-------. 1 postgres postgres 1636 Aug 27 00:54 pg_ident.conf
-rw-------. 1 postgres postgres 4548 Aug 27 00:54 pg_hba.conf
drwx------. 2 postgres postgres 18 Aug 27 00:54 pg_xact
drwx------. 3 postgres postgres 60 Aug 27 00:54 pg_wal
drwx------. 2 postgres postgres 18 Aug 27 00:54 pg_subtrans
drwx------. 2 postgres postgres 4096 Aug 27 00:54 global
drwx------. 5 postgres postgres 41 Aug 27 00:54 base
drwx------. 4 postgres postgres 68 Aug 27 00:54 pg_logical
drwx------. 2 postgres postgres 32 Aug 27 00:54 log
-rw-------. 1 postgres postgres 30 Aug 27 00:54 current_logfiles
-rw-------. 1 postgres postgres 58 Aug 27 00:54 postmaster.opts
-rw-------. 1 postgres postgres 104 Aug 27 00:54 postmaster.pid
drwx------. 2 postgres postgres 25 Aug 27 00:57 pg_stat_tmp
[root@postgres-source-server data]#
[root@postgres-source-server ~]# rpm -qa | grep -i postgres
postgresql13-13.4-1PGDG.rhel8.x86_64
postgresql13-libs-13.4-1PGDG.rhel8.x86_64
postgresql13-server-13.4-1PGDG.rhel8.x86_64
[root@postgres-source-server ~]# su - postgres
[postgres@postgres-source-server ~]$ psql
psql (13.4)
Type "help" for help.
postgres=# select version();
version
-------------------------------------------------------------------------------------
PostgreSQL 13.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.4.1 20200928 (Red Hat 8.4.1-1), 64-bit
(1 row)
postgres=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
(3 rows)
postgres=# select usename from pg_user;
usename
----------
postgres
(1 row)
postgres=# \q
Installation is done.
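One thing worth doing right after the install: the jump host will later connect to this instance over the network (Steps 11-16), and a default initdb only listens on localhost. Below is a minimal sketch of opening remote access, assuming the 172.31.0.0/16 VPC range used by these EC2 machines and md5 password authentication; adjust the CIDR and auth method to your own environment.
# run as root on postgres-source-server; the last value read in postgresql.conf wins, so appending is enough
echo "listen_addresses = '*'" >> /var/lib/pgsql/13/data/postgresql.conf
echo "host all all 172.31.0.0/16 md5" >> /var/lib/pgsql/13/data/pg_hba.conf
systemctl restart postgresql-13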
-----------------------------------------------------------
Step - 2 Install MySQL Database in the Target Server
-----------------------------------------------------------
(i) Using a YUM installation, we are going to install MySQL Database 8.0 from the mysql80-community repository. Note that the repository pulls the latest 8.0.x release; in this run it installed 8.0.26, as the banners below show.
https://repo.mysql.com//mysql80-community-release-el8-1.noarch.rpm
[root@mysql-target-server ~]# hostname -I
172.31.17.136
[root@mysql-target-server ~]# hostname
mysql-target-server
[root@mysql-target-server ~]# cat /etc/redhat-release
Red Hat Enterprise Linux release 8.4 (Ootpa)
[root@mysql-target-server ~]# sudo yum install https://repo.mysql.com//mysql80-community-release-el8-1.noarch.rpm
[root@mysql-target-server ~]# yum repolist enabled | grep "mysql.*-community.*"
[root@mysql-target-server ~]# yum repolist all | grep mysql
[root@mysql-target-server ~]# sudo yum-config-manager --disable mysql57-community
[root@mysql-target-server ~]# sudo yum-config-manager --enable mysql80-community
[root@mysql-target-server ~]# cd /etc/yum.repos.d/
[root@mysql-target-server yum.repos.d]# pwd
/etc/yum.repos.d
[root@mysql-target-server yum.repos.d]# ls -ltr
-rw-r--r--. 1 root root 1050 Oct 4 2019 mysql-community-source.repo
-rw-r--r--. 1 root root 4782 Aug 18 11:07 redhat-rhui-beta.repo.disabled
-rw-r--r--. 1 root root 5768 Aug 27 01:09 redhat-rhui.repo
-rw-r--r--. 1 root root 482 Aug 27 01:09 redhat-rhui-client-config.repo
-rw-r--r--. 1 root root 995 Aug 27 01:14 mysql-community.repo
[root@mysql-target-server yum.repos.d]# sudo yum module disable mysql
[root@mysql-target-server yum.repos.d]# sudo yum install mysql-community-server
[root@mysql-target-server yum.repos.d]# cd
[root@mysql-target-server ~]# systemctl start mysqld
[root@mysql-target-server ~]# systemctl status mysqld
● mysqld.service - MySQL Server
Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2021-08-27 01:17:16 UTC; 3s ago
Docs: man:mysqld(8)
http://dev.mysql.com/doc/refman/en/using-systemd.html
Process: 58339 ExecStartPre=/usr/bin/mysqld_pre_systemd (code=exited, status=0/SUCCESS)
Main PID: 58410 (mysqld)
Status: "Server is operational"
Tasks: 38 (limit: 11272)
Memory: 444.8M
CGroup: /system.slice/mysqld.service
└─58410 /usr/sbin/mysqld
Aug 27 01:17:06 mysql-target-server systemd[1]: Starting MySQL Server...
Aug 27 01:17:16 mysql-target-server systemd[1]: Started MySQL Server.
[root@mysql-target-server ~]# sudo grep 'temporary password' /var/log/mysqld.log
2021-08-27T01:17:11.247580Z 6 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: srg1;1FM1k=)
[root@mysql-target-server ~]# mysql -uroot -p
Enter password: srg1;1FM1k=)
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.26
Copyright (c) 2000, 2021, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'root';
ERROR 1819 (HY000): Your password does not satisfy the current policy requirements
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'Root@123';
Query OK, 0 rows affected (0.00 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
mysql> exit
Bye
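The ERROR 1819 above comes from the validate_password component that the MySQL 8 community RPMs enable by default; its MEDIUM policy requires length 8 plus mixed case, a digit and a special character, which is why 'root' was rejected and 'Root@123' passed. If you want to inspect the active policy, a quick hedged check:
mysql -uroot -p -e "SHOW VARIABLES LIKE 'validate_password%';"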
[root@mysql-target-server ~]# mysql -uroot -p
Enter password: Root@123
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 8.0.26 MySQL Community Server - GPL
Copyright (c) 2000, 2021, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> status;
--------------
mysql Ver 8.0.26 for Linux on x86_64 (MySQL Community Server - GPL)
Connection id: 9
Current database:
Current user: root@localhost
SSL: Not in use
Current pager: stdout
Using outfile: ''
Using delimiter: ;
Server version: 8.0.26 MySQL Community Server - GPL
Protocol version: 10
Connection: Localhost via UNIX socket
Server characterset: utf8mb4
Db characterset: utf8mb4
Client characterset: utf8mb4
Conn. characterset: utf8mb4
UNIX socket: /var/lib/mysql/mysql.sock
Binary data as: Hexadecimal
Uptime: 1 min 21 sec
Threads: 2 Questions: 10 Slow queries: 0 Opens: 131 Flush tables: 3 Open tables: 47 Queries per second avg: 0.123
--------------
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
+--------------------+
4 rows in set (0.00 sec)
mysql> select user,host,authentication_string from mysql.user;
+------------------+-----------+------------------------------------------------------------------------+
| user             | host      | authentication_string                                                  |
+------------------+-----------+------------------------------------------------------------------------+
| mysql.infoschema | localhost | $A$005$THISISACOMBINATIONOFINVALIDSALTANDPASSWORDTHATMUSTNEVERBRBEUSED |
| mysql.session    | localhost | $A$005$THISISACOMBINATIONOFINVALIDSALTANDPASSWORDTHATMUSTNEVERBRBEUSED |
| mysql.sys        | localhost | $A$005$THISISACOMBINATIONOFINVALIDSALTANDPASSWORDTHATMUSTNEVERBRBEUSED |
| root             | localhost | ...DG*pwbcdktmeDwPnE/qCRaKsqiq8mjwoEWU2Gs4OQhD6zkV.                    |
+------------------+-----------+------------------------------------------------------------------------+
4 rows in set (0.00 sec)
mysql> select user,host from mysql.user;
+------------------+-----------+
| user | host |
+------------------+-----------+
| mysql.infoschema | localhost |
| mysql.session | localhost |
| mysql.sys | localhost |
| root | localhost |
+------------------+-----------+
4 rows in set (0.00 sec)
mysql> exit
Installation is done.
----------------------------------------------------------------
Step - 3 Install / Set up the SQLines software on the Jump host server
----------------------------------------------------------------
Download the software from the link below. You can choose any version, but here I am using the 3.1.773 build.
https://www.sqlines.com/download
[root@jump-host ~]# hostname -I
172.31.27.241
[root@jump-host ~]# hostname
jump-host
[root@jump-host ~]# cat /etc/redhat-release
Red Hat Enterprise Linux release 8.4 (Ootpa)
[root@jump-host ec2-user]# pwd
/home/ec2-user
[root@jump-host ec2-user]# ls -ltr
-rw-rw-r--. 1 ec2-user ec2-user 6634087 Mar 14 2019 sqlinesdata31773_x86_64_linux.tar.gz
-rw-r--r--. 1 root root 16524 Aug 27 01:46 sqlinesdata-3.1.773_x86_64_linux.tar
[root@jump-host ec2-user]# chown root:root sqlinesdata31773_x86_64_linux.tar.gz
[root@jump-host ec2-user]# ls -ltr
-rw-rw-r--. 1 root root 6634087 Mar 14 2019 sqlinesdata31773_x86_64_linux.tar.gz
-rw-r--r--. 1 root root 16524 Aug 27 01:46 sqlinesdata-3.1.773_x86_64_linux.tar
[root@jump-host ec2-user]# gunzip sqlinesdata31773_x86_64_linux.tar.gz
[root@jump-host ec2-user]# ls -ltr
-rw-rw-r--. 1 root root 15872000 Mar 14 2019 sqlinesdata31773_x86_64_linux.tar
-rw-r--r--. 1 root root 16524 Aug 27 01:46 sqlinesdata-3.1.773_x86_64_linux.tar
[root@jump-host ec2-user]# tar -xvf sqlinesdata31773_x86_64_linux.tar
sqlinesdata31773_x86_64_linux/
sqlinesdata31773_x86_64_linux/sqldata
sqlinesdata31773_x86_64_linux/sqlines_dtmap.txt
sqlinesdata31773_x86_64_linux/sqlines_qf.txt
sqlinesdata31773_x86_64_linux/libQtCore.so
sqlinesdata31773_x86_64_linux/sqlines_tmap.txt
sqlinesdata31773_x86_64_linux/readme.txt
sqlinesdata31773_x86_64_linux/libQtGui.so
sqlinesdata31773_x86_64_linux/sqlines_cmap.txt
sqlinesdata31773_x86_64_linux/sqlines_cnsmap.txt
sqlinesdata31773_x86_64_linux/sqlines_twhere.txt
sqlinesdata31773_x86_64_linux/sqlines_tsel.txt
sqlinesdata31773_x86_64_linux/sqldata.cfg
sqlinesdata31773_x86_64_linux/license.txt
sqlinesdata31773_x86_64_linux/sqlines_tsel_all.txt
sqlinesdata31773_x86_64_linux/sqlines_tf.txt
[root@jump-host ec2-user]# ls -ltr
drwxr-xr-x. 2 root root 4096 Feb 8 2018 sqlinesdata31773_x86_64_linux
-rw-rw-r--. 1 root root 15872000 Mar 14 2019 sqlinesdata31773_x86_64_linux.tar
-rw-r--r--. 1 root root 16524 Aug 27 01:46 sqlinesdata-3.1.773_x86_64_linux.tar
[root@jump-host sqlinesdata31773_x86_64_linux]# pwd
/home/ec2-user/sqlinesdata31773_x86_64_linux
[root@jump-host sqlinesdata31773_x86_64_linux]# ./sqldata --version
SQLines Data 3.1.773 x86_64 Linux - Database Migration Tool.
[root@jump-host sqlinesdata31773_x86_64_linux]#
SQLines installation is done.
Important Note :-
--------------
In my case the sqlinesdata31773 version had some issues, so I used the sqlinesdata-3.3.171_x86_64_linux.tar.gz version instead to load the data.
Here are the installation steps:
[root@jump-host ec2-user]# pwd
/home/ec2-user
[root@jump-host ec2-user]# wget https://www.sqlines.com/downloads/sqlinesdata-3.3.171_x86_64_linux.tar.gz
--2021-09-06 00:45:10-- https://www.sqlines.com/downloads/sqlinesdata-3.3.171_x86_64_linux.tar.gz
Resolving www.sqlines.com (www.sqlines.com)... 173.201.209.135
Connecting to www.sqlines.com (www.sqlines.com)|173.201.209.135|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 6802107 (6.5M) [application/x-tar]
Saving to: ‘sqlinesdata-3.3.171_x86_64_linux.tar.gz’
sqlinesdata-3.3.171_x86_64_linux.tar. 100%[==================================================================>] 6.49M 8.56MB/s in 0.8s
2021-09-06 00:45:11 (8.56 MB/s) - ‘sqlinesdata-3.3.171_x86_64_linux.tar.gz’ saved [6802107/6802107]
[root@jump-host ec2-user]# ls -ltr
-rw-rw-r--. 1 root root 15872000 Mar 14 2019 sqlinesdata31773_x86_64_linux.tar
-rw-r--r--. 1 root root 6790719 May 13 11:11 sqlinesdata-3.3.117_x86_64_linux.tar.gz
-rwxr-xr-x. 1 root root 2475 Aug 31 12:11 row_count_comparison.sh_sample_not_considered
-rwxr-xr-x. 1 root root 2963 Aug 31 16:45 row_count_comparison.sh_bkp_for_my_reference_not_considered
-rwxr-xr-x. 1 root root 1011 Sep 1 10:33 test.sh
-rwxr-xr-x. 1 root root 5206 Sep 1 10:34 row_count_comparison.sh_not_considered
-rw-r--r--. 1 root root 85 Sep 1 14:55 exp.sh
drwxr-xr-x. 2 root root 73 Sep 1 15:24 tmp_dir
drwxr-xr-x. 2 root root 125 Sep 1 15:30 output_dir
drwxrwxr-x. 5 ec2-user ec2-user 99 Sep 2 02:08 Final_Scripts
drwxr-xr-x. 8 root root 4096 Sep 3 01:21 sqlinesdata31773
drwxr-xr-x. 2 root root 4096 Sep 3 01:33 sqlinesdata-3.3.117_x86_64_linux
-rw-r--r--. 1 root root 6802107 Sep 5 20:30 sqlinesdata-3.3.171_x86_64_linux.tar.gz
[root@jump-host ec2-user]# tar -zxvf sqlinesdata-3.3.171_x86_64_linux.tar.gz
sqlinesdata-3.3.171_x86_64_linux/
sqlinesdata-3.3.171_x86_64_linux/sqldata
sqlinesdata-3.3.171_x86_64_linux/sqlines_dtmap.txt
sqlinesdata-3.3.171_x86_64_linux/sqlines_qf.txt
sqlinesdata-3.3.171_x86_64_linux/libQtCore.so
sqlinesdata-3.3.171_x86_64_linux/sqlines_tmap.txt
sqlinesdata-3.3.171_x86_64_linux/readme.txt
sqlinesdata-3.3.171_x86_64_linux/libQtGui.so
sqlinesdata-3.3.171_x86_64_linux/sqlines_cmap.txt
sqlinesdata-3.3.171_x86_64_linux/sqlines_cnsmap.txt
sqlinesdata-3.3.171_x86_64_linux/sqlines_twhere.txt
sqlinesdata-3.3.171_x86_64_linux/sqlines_tsel.txt
sqlinesdata-3.3.171_x86_64_linux/sqldata.cfg
sqlinesdata-3.3.171_x86_64_linux/license.txt
sqlinesdata-3.3.171_x86_64_linux/sqlines_tsel_all.txt
sqlinesdata-3.3.171_x86_64_linux/sqlines_tf.txt
[root@jump-host ec2-user]# pwd
/home/ec2-user
[root@jump-host ec2-user]# ls -ltr
-rw-rw-r--. 1 root root 15872000 Mar 14 2019 sqlinesdata31773_x86_64_linux.tar
-rw-r--r--. 1 root root 6790719 May 13 11:11 sqlinesdata-3.3.117_x86_64_linux.tar.gz
-rwxr-xr-x. 1 root root 2475 Aug 31 12:11 row_count_comparison.sh_sample_not_considered
-rwxr-xr-x. 1 root root 2963 Aug 31 16:45 row_count_comparison.sh_bkp_for_my_reference_not_considered
-rwxr-xr-x. 1 root root 1011 Sep 1 10:33 test.sh
-rwxr-xr-x. 1 root root 5206 Sep 1 10:34 row_count_comparison.sh_not_considered
-rw-r--r--. 1 root root 85 Sep 1 14:55 exp.sh
drwxr-xr-x. 2 root root 73 Sep 1 15:24 tmp_dir
drwxr-xr-x. 2 root root 125 Sep 1 15:30 output_dir
drwxrwxr-x. 5 ec2-user ec2-user 99 Sep 2 02:08 Final_Scripts
drwxr-xr-x. 8 root root 4096 Sep 3 01:21 sqlinesdata31773
drwxr-xr-x. 2 root root 4096 Sep 3 01:33 sqlinesdata-3.3.117_x86_64_linux
drwxr-xr-x. 2 root root 4096 Sep 5 20:27 sqlinesdata-3.3.171_x86_64_linux
-rw-r--r--. 1 root root 6802107 Sep 5 20:30 sqlinesdata-3.3.171_x86_64_linux.tar.gz
[root@jump-host ec2-user]# cd sqlinesdata-3.3.171_x86_64_linux
[root@jump-host sqlinesdata-3.3.171_x86_64_linux]# pwd
/home/ec2-user/sqlinesdata-3.3.171_x86_64_linux
[root@jump-host sqlinesdata-3.3.171_x86_64_linux]# ls -ltr
-rw-r--r--. 1 root root 7212 Aug 31 12:47 sqldata.cfg
-rw-r--r--. 1 root root 361 Sep 5 20:27 sqlines_twhere.txt
-rw-r--r--. 1 root root 579 Sep 5 20:27 sqlines_tsel.txt
-rw-r--r--. 1 root root 498 Sep 5 20:27 sqlines_tsel_all.txt
-rw-r--r--. 1 root root 192 Sep 5 20:27 sqlines_tmap.txt
-rw-r--r--. 1 root root 273 Sep 5 20:27 sqlines_tf.txt
-rw-r--r--. 1 root root 316 Sep 5 20:27 sqlines_qf.txt
-rw-r--r--. 1 root root 190 Sep 5 20:27 sqlines_dtmap.txt
-rw-r--r--. 1 root root 202 Sep 5 20:27 sqlines_cnsmap.txt
-rw-r--r--. 1 root root 401 Sep 5 20:27 sqlines_cmap.txt
-rw-r--r--. 1 root root 188 Sep 5 20:27 readme.txt
-rw-r--r--. 1 root root 118 Sep 5 20:27 license.txt
-rwxr-xr-x. 1 root root 11670344 Sep 5 20:27 libQtGui.so
-rwxr-xr-x. 1 root root 2701504 Sep 5 20:27 libQtCore.so
-rwxr-xr-x. 1 root root 2051152 Sep 5 20:29 sqldata
[root@jump-host sqlinesdata-3.3.171_x86_64_linux]# ./sqldata --version
SQLines Data 3.3.171 x86_64 Linux - Database Migration Tool.
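Steps 12 and 16 will use this binary for the actual transfer. As a preview, SQLines Data takes the source and target connections either on the command line or in sqldata.cfg; the sketch below follows the connection-string layout documented on sqlines.com, with placeholders rather than this lab's real credentials, so double-check the option names against the readme.txt shipped with your build.
# hedged sketch only: <...> values are placeholders, not real credentials
./sqldata -sd=pg,<pg_user>/<pg_password>@172.31.30.141:5432,dvdrental \
          -td=mysql,<mysql_user>/<mysql_password>@172.31.17.136:3306,dvdrental \
          -t=public.*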
---------------------------------------------------------------------
Step - 4 Load sample data into the Source Database, i.e., the PostgreSQL Database
---------------------------------------------------------------------
Download the sample data from the links below and copy it to the Postgres server (a download/unpack sketch follows the object list below).
https://www.postgresqltutorial.com/postgresql-sample-database/
https://www.postgresqltutorial.com/load-postgresql-sample-database/
The DVD rental database represents the business processes of a DVD rental store. The DVD rental database has many objects including:
15 tables
1 trigger
7 views
8 functions
1 domain
13 sequences
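The sample is published as a zip archive on the pages above. A small sketch of staging it on the source server, assuming the archive was downloaded as dvdrental.zip (file name per the tutorial site; verify after download) and copied to /home/ec2-user:
cd /home/ec2-user
unzip dvdrental.zip      # extracts dvdrental.tar, which is what pg_restore consumes below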
[root@postgres-source-server ~]# su - postgres
Last login: Fri Aug 27 00:59:08 UTC 2021 on pts/0
[postgres@postgres-source-server ~]$ psql -U postgres
psql (13.4)
Type "help" for help.
postgres=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
(3 rows)
postgres=# CREATE DATABASE dvdrental;
CREATE DATABASE
postgres=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
dvdrental | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
(4 rows)
postgres=# \q
[root@postgres-source-server ~]# cd /home/ec2-user/
[root@postgres-source-server ec2-user]# pwd
/home/ec2-user
[root@postgres-source-server ec2-user]# ls -ltr
-rw-rw-r--. 1 ec2-user ec2-user 2835456 May 12 2019 dvdrental.tar
[root@postgres-source-server ec2-user]# chown postgres:postgres dvdrental.tar
[root@postgres-source-server ec2-user]# pwd
/home/ec2-user
[root@postgres-source-server ec2-user]# ls -ltr
-rw-rw-r--. 1 postgres postgres 2835456 May 12 2019 dvdrental.tar
[postgres@postgres-source-server ~]$ cd /home/
[postgres@postgres-source-server home]$ ls -ltr
drwx------. 3 ec2-user ec2-user 116 Aug 27 02:03 ec2-user
[root@postgres-source-server ec2-user]# pwd
/home/ec2-user
[root@postgres-source-server ec2-user]# ls -ltr
-rw-rw-r--. 1 postgres postgres 2835456 May 12 2019 dvdrental.tar
[root@postgres-source-server ec2-user]# mv dvdrental.tar /home
[root@postgres-source-server ec2-user]# ls -ltr
[root@postgres-source-server ec2-user]# su - postgres
Last login: Fri Aug 27 02:07:19 UTC 2021 on pts/0
[postgres@postgres-source-server ~]$ pwd
/var/lib/pgsql
[postgres@postgres-source-server ~]$ cd /home/
[postgres@postgres-source-server home]$ pwd
/home
[postgres@postgres-source-server home]$ ls -ltr
-rw-rw-r--. 1 postgres postgres 2835456 May 12 2019 dvdrental.tar
drwx------. 3 ec2-user ec2-user 95 Aug 27 02:07 ec2-user
[postgres@postgres-source-server home]$ pg_restore -U postgres -d dvdrental dvdrental.tar
[postgres@postgres-source-server home]$ ls -ltr
-rw-rw-r--. 1 postgres postgres 2835456 May 12 2019 dvdrental.tar
drwx------. 3 ec2-user ec2-user 95 Aug 27 02:07 ec2-user
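If you want to see what the archive contains (before or after restoring it), pg_restore can print its table of contents without touching the database; for example:
pg_restore -l /home/dvdrental.tar | head -20   # list the archive TOC: schemas, tables, data entries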
[postgres@postgres-source-server ~]$ psql -U postgres
psql (13.4)
Type "help" for help.
postgres=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
dvdrental | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
(4 rows)
postgres=# \l+
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges | Size | Tablespace | Description
-----------+----------+----------+-------------+-------------+-----------------------
dvdrental | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | 15 MB | pg_default |
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | 7901 kB | pg_default | default administrative connection database
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +| 7753 kB | pg_default | unmodifiable empty database
| | | | | postgres=CTc/postgres | | |
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +| 7753 kB | pg_default | default template for new databases
| | | | | postgres=CTc/postgres | | |
(4 rows)
postgres=# \c dvdrental
You are now connected to database "dvdrental" as user "postgres".
dvdrental=# \dt
List of relations
Schema | Name | Type | Owner
--------+---------------+-------+----------
public | actor | table | postgres
public | address | table | postgres
public | category | table | postgres
public | city | table | postgres
public | country | table | postgres
public | customer | table | postgres
public | film | table | postgres
public | film_actor | table | postgres
public | film_category | table | postgres
public | inventory | table | postgres
public | language | table | postgres
public | payment | table | postgres
public | rental | table | postgres
public | staff | table | postgres
public | store | table | postgres
(15 rows)
dvdrental=# select n.nspname as table_schema, c.relname as table_name, c.reltuples as rows from pg_class c join pg_namespace n on n.oid = c.relnamespace where c.relkind = 'r' and n.nspname not in ('information_schema','pg_catalog') order by c.reltuples desc;
table_schema | table_name | rows
--------------+---------------+-------
public | rental | 16044
public | payment | 14596
public | film_actor | 5462
public | inventory | 4581
public | film | 1000
public | film_category | 1000
public | address | 603
public | city | 600
public | customer | 599
public | actor | 200
public | country | 109
public | category | 16
public | language | 6
public | staff | 2
public | store | 2
(15 rows)
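Note that the query above reads pg_class.reltuples, which is only an estimate maintained by VACUUM/ANALYZE; for the row-count validation in the later steps, exact counts are safer. A small hedged sketch against a few of the dvdrental tables:
# run as the postgres OS user on the source server; extend the table list as needed
for t in rental payment film_actor inventory film; do
  psql -U postgres -d dvdrental -t -A -c "SELECT '$t' AS table_name, count(*) FROM public.$t;"
done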
dvdrental=# select nsp.nspname as SchemaName,cls.relname as ObjectName,rol.rolname as ObjectOwner,case cls.relkind when 'r' then 'TABLE'
when 'm' then 'MATERIALIZED_VIEW'
when 'i' then 'INDEX'
when 'S' then 'SEQUENCE'
when 'v' then 'VIEW'
when 'c' then 'TYPE'
else cls.relkind::text
end as ObjectType from pg_class cls join pg_roles rol on rol.oid = cls.relowner join pg_namespace nsp on nsp.oid = cls.relnamespace where nsp.nspname not in ('information_schema', 'pg_catalog') and nsp.nspname not like 'pg_toast%' and rol.rolname = 'postgres'
order by ObjectType, 2;
schemaname | objectname | objectowner | objecttype
------------+-----------------------------------------------------+--
public | actor_pkey | postgres | INDEX
public | address_pkey | postgres | INDEX
public | category_pkey | postgres | INDEX
public | city_pkey | postgres | INDEX
public | country_pkey | postgres | INDEX
public | customer_pkey | postgres | INDEX
public | film_actor_pkey | postgres | INDEX
public | film_category_pkey | postgres | INDEX
public | film_fulltext_idx | postgres | INDEX
public | film_pkey | postgres | INDEX
public | idx_actor_last_name | postgres | INDEX
public | idx_fk_address_id | postgres | INDEX
public | idx_fk_city_id | postgres | INDEX
public | idx_fk_country_id | postgres | INDEX
public | idx_fk_customer_id | postgres | INDEX
public | idx_fk_film_id | postgres | INDEX
public | idx_fk_inventory_id | postgres | INDEX
public | idx_fk_language_id | postgres | INDEX
public | idx_fk_rental_id | postgres | INDEX
public | idx_fk_staff_id | postgres | INDEX
public | idx_fk_store_id | postgres | INDEX
public | idx_last_name | postgres | INDEX
public | idx_store_id_film_id | postgres | INDEX
public | idx_title | postgres | INDEX
public | idx_unq_manager_staff_id | postgres | INDEX
public | idx_unq_rental_rental_date_inventory_id_customer_id | postgres | INDEX
public | inventory_pkey | postgres | INDEX
public | language_pkey | postgres | INDEX
public | payment_pkey | postgres | INDEX
public | rental_pkey | postgres | INDEX
public | staff_pkey | postgres | INDEX
public | store_pkey | postgres | INDEX
public | actor_actor_id_seq | postgres | SEQUENCE
public | address_address_id_seq | postgres | SEQUENCE
public | category_category_id_seq | postgres | SEQUENCE
public | city_city_id_seq | postgres | SEQUENCE
public | country_country_id_seq | postgres | SEQUENCE
public | customer_customer_id_seq | postgres | SEQUENCE
public | film_film_id_seq | postgres | SEQUENCE
public | inventory_inventory_id_seq | postgres | SEQUENCE
public | language_language_id_seq | postgres | SEQUENCE
public | payment_payment_id_seq | postgres | SEQUENCE
public | rental_rental_id_seq | postgres | SEQUENCE
public | staff_staff_id_seq | postgres | SEQUENCE
public | store_store_id_seq | postgres | SEQUENCE
public | actor | postgres | TABLE
public | address | postgres | TABLE
public | category | postgres | TABLE
public | city | postgres | TABLE
public | country | postgres | TABLE
public | customer | postgres | TABLE
public | film | postgres | TABLE
public | film_actor | postgres | TABLE
public | film_category | postgres | TABLE
public | inventory | postgres | TABLE
public | language | postgres | TABLE
public | payment | postgres | TABLE
public | rental | postgres | TABLE
public | staff | postgres | TABLE
public | store | postgres | TABLE
public | actor_info | postgres | VIEW
public | customer_list | postgres | VIEW
public | film_list | postgres | VIEW
public | nicer_but_slower_film_list | postgres | VIEW
public | sales_by_film_category | postgres | VIEW
public | sales_by_store | postgres | VIEW
public | staff_list | postgres | VIEW
(67 rows)
dvdrental=# \dt
List of relations
Schema | Name | Type | Owner
--------+---------------+-------+----------
public | actor | table | postgres
public | address | table | postgres
public | category | table | postgres
public | city | table | postgres
public | country | table | postgres
public | customer | table | postgres
public | film | table | postgres
public | film_actor | table | postgres
public | film_category | table | postgres
public | inventory | table | postgres
public | language | table | postgres
public | payment | table | postgres
public | rental | table | postgres
public | staff | table | postgres
public | store | table | postgres
(15 rows)
dvdrental=# \dv
List of relations
Schema | Name | Type | Owner
--------+----------------------------+------+----------
public | actor_info | view | postgres
public | customer_list | view | postgres
public | film_list | view | postgres
public | nicer_but_slower_film_list | view | postgres
public | sales_by_film_category | view | postgres
public | sales_by_store | view | postgres
public | staff_list | view | postgres
(7 rows)
dvdrental=# \df
List of functions
Schema | Name | Result data type | Argument data types | Type
--------+----------------------------+------------------+------------
public | _group_concat | text | text, text | func
public | film_in_stock | SETOF integer | p_film_id integer, p_store_id integer, OUT p_film_count integer | func
public | film_not_in_stock | SETOF integer | p_film_id integer, p_store_id integer, OUT p_film_count integer | func
public | get_customer_balance | numeric | p_customer_id integer, p_effective_date timestamp without time zone | func
public | group_concat | text | text | agg
public | inventory_held_by_customer | integer | p_inventory_id integer | func
public | inventory_in_stock | boolean | p_inventory_id integer | func
public | last_day | date | timestamp without time zone | func
public | last_updated | trigger | | func
public | rewards_report | SETOF customer | min_monthly_purchases integer, min_dollar_amount_purchased numeric | func
(10 rows)
dvdrental=# \dD
List of domains
Schema | Name | Type | Collation | Nullable | Default | Check
--------+------+---------+-----------+----------+---------+----------
public | year | integer | | | | CHECK (VALUE >= 1901 AND VALUE <= 2155)
(1 row)
dvdrental=# \ds
List of relations
Schema | Name | Type | Owner
--------+----------------------------+----------+----------
public | actor_actor_id_seq | sequence | postgres
public | address_address_id_seq | sequence | postgres
public | category_category_id_seq | sequence | postgres
public | city_city_id_seq | sequence | postgres
public | country_country_id_seq | sequence | postgres
public | customer_customer_id_seq | sequence | postgres
public | film_film_id_seq | sequence | postgres
public | inventory_inventory_id_seq | sequence | postgres
public | language_language_id_seq | sequence | postgres
public | payment_payment_id_seq | sequence | postgres
public | rental_rental_id_seq | sequence | postgres
public | staff_staff_id_seq | sequence | postgres
public | store_store_id_seq | sequence | postgres
(13 rows)
dvdrental=# \da
List of aggregate functions
Schema | Name | Result data type | Argument data types | Description
--------+--------------+------------------+---------------------+----
public | group_concat | text | text |
(1 row)
dvdrental=# select event_object_schema as table_schema,
event_object_table as table_name,
trigger_schema,
trigger_name,
string_agg(event_manipulation, ',') as event,
action_timing as activation,
action_condition as condition,
action_statement as definition
from information_schema.triggers
group by 1,2,3,4,6,7,8
order by table_schema,table_name;
table_schema | table_name | trigger_schema | trigger_name | event | activation | condition |
definition
--------------+---------------+----------------+---------------------
public | actor | public | last_updated | UPDATE | BEFORE | | EXECUTE FUNCTION last_updated()
public | address | public | last_updated | UPDATE | BEFORE | | EXECUTE FUNCTION last_updated()
public | category | public | last_updated | UPDATE | BEFORE | | EXECUTE FUNCTION last_updated()
public | city | public | last_updated | UPDATE | BEFORE | | EXECUTE FUNCTION last_updated()
public | country | public | last_updated | UPDATE | BEFORE | | EXECUTE FUNCTION last_updated()
public | customer | public | last_updated | UPDATE | BEFORE | | EXECUTE FUNCTION last_updated()
public | film | public | film_fulltext_trigger | UPDATE,INSERT | BEFORE | | EXECUTE FUNCTION
tsvector_update_trigger('fulltext', 'pg_catalog.english', 'title', 'description')
public | film | public | last_updated | UPDATE | BEFORE | | EXECUTE FUNCTION last_updated()
public | film_actor | public | last_updated | UPDATE | BEFORE | | EXECUTE FUNCTION last_updated()
public | film_category | public | last_updated | UPDATE | BEFORE | | EXECUTE FUNCTION last_updated()
public | inventory | public | last_updated | UPDATE | BEFORE | | EXECUTE FUNCTION last_updated()
public | language | public | last_updated | UPDATE | BEFORE | | EXECUTE FUNCTION last_updated()
public | rental | public | last_updated | UPDATE | BEFORE | | EXECUTE FUNCTION last_updated()
public | staff | public | last_updated | UPDATE | BEFORE | | EXECUTE FUNCTION last_updated()
public | store | public | last_updated | UPDATE | BEFORE | | EXECUTE FUNCTION last_updated()
(15 rows)
dvdrental=# \du
List of roles
Role name | Attributes | Member of
-----------+---------------------------------------------------------
postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
dvdrental=# select DISTINCT trigger_name,event_object_schema as table_schema ,event_object_table as table_name,trigger_schema from information_schema.triggers;
trigger_name | table_schema | table_name | trigger_schema
-----------------------+--------------+---------------+--------------
film_fulltext_trigger | public | film | public
last_updated | public | actor | public
last_updated | public | address | public
last_updated | public | category | public
last_updated | public | city | public
last_updated | public | country | public
last_updated | public | customer | public
last_updated | public | film | public
last_updated | public | film_actor | public
last_updated | public | film_category | public
last_updated | public | inventory | public
last_updated | public | language | public
last_updated | public | rental | public
last_updated | public | staff | public
last_updated | public | store | public
(15 rows)
dvdrental=# select DISTINCT trigger_name from information_schema.triggers;
trigger_name
-----------------------
film_fulltext_trigger
last_updated
(2 rows)
dvdrental=# \c postgres
postgres=# \l+
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges | Size | Tablespace | Description
-----------+----------+----------+-------------+-------------+-------
dvdrental | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | 15 MB | pg_default |
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | 7901 kB | pg_default | default administrative connection database
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +| 7753 kB | pg_default | unmodifiable empty database
| | | | | postgres=CTc/postgres | | |
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +| 7753 kB | pg_default | default template for new databases
| | | | | postgres=CTc/postgres | | |
(4 rows)
postgres=# CREATE DATABASE migration_db;
CREATE DATABASE
postgres=# \l+
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges | Size | Tablespace | Description
--------------+----------+----------+-------------+-------------+----
dvdrental | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | 15 MB | pg_default |
migration_db | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | 7753 kB | pg_default |
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | 7901 kB | pg_default | default administrative connection database
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +| 7753 kB | pg_default | unmodifiable empty database
| | | | | postgres=CTc/postgres | | |
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +| 7753 kB | pg_default | default template for new databases
| | | | | postgres=CTc/postgres | | |
(5 rows)
postgres=# \c migration_db
You are now connected to database "migration_db" as user "postgres".
migration_db=# CREATE TABLE users(id BIGINT GENERATED ALWAYS AS IDENTITY,PRIMARY KEY(id),hash_firstname TEXT NOT NULL,hash_lastname TEXT NOT NULL,gender VARCHAR(6) NOT NULL CHECK (gender IN ('male', 'female')));
CREATE TABLE
migration_db=# \dt+
List of relations
Schema | Name | Type | Owner | Persistence | Size | Description
--------+-------+-------+----------+-------------+------------+------
public | users | table | postgres | permanent | 8192 bytes |
(1 row)
migration_db=# INSERT INTO users(hash_firstname, hash_lastname, gender) SELECT md5(RANDOM()::TEXT), md5(RANDOM()::TEXT), CASE WHEN RANDOM() < 0.5 THEN 'male ' ELSE 'female' END FROM generate_series(1, 10000);
INSERT 0 10000
migration_db=# INSERT INTO users(hash_firstname, hash_lastname, gender) SELECT md5(RANDOM()::TEXT), md5(RANDOM()::TEXT), CASE WHEN RANDOM() < 0.5 THEN 'male ' ELSE 'female' END FROM generate_series(1, 100000);
INSERT 0 100000
migration_db=# SELECT COUNT(*) FROM users;
count
--------
110000
(1 row)
migration_db=# SELECT COUNT(*) FROM users WHERE gender = 'male';
count
-------
54806
(1 row)
migration_db=# SELECT COUNT(*) FROM users WHERE gender = 'female';
count
-------
55194
(1 row)
migration_db=# SELECT * FROM users LIMIT 15;
id | hash_firstname | hash_lastname | gender
----+----------------------------------+-----------------------------
1 | 86e6a7508e8bbc09bbdaf4d05278d13d | ac907881016951bf5095bad5b6c6230c | male
2 | ce13312f916b291c74d8284b8e7c507e | d884663083d1566e7737c5e3a9557e58 | female
3 | f66fed9dc305c065a7550ad1bae216d5 | cdf904143c1f817b21726f04560a95d0 | female
4 | b25ab3fc57febec70aedc009d14f7961 | 5cb036ccefd5a47bae6e53139426b02e | female
5 | f7d4d0c7dd8e360ba73d5202159e1d4e | 9ea34f2313db581e749a72d72e40d286 | female
6 | 30bb6c7ac42ebe968badf9a3fa39d0fb | eb3261c1a8942320b7b222a88eec718d | male
7 | 903f082efaec509b56c43e9b4bee9053 | cb6e9d3be5a2a696dd4057dc5067e32a | female
8 | 9b6038e0380a63c66d5bf44dd63f09be | 888e4f5c925e45e9268b1616f8335938 | male
9 | ec0c541c02ef34786f73318ca05b9339 | 694e27dd2d04d8dc0041916adbb57b2f | male
10 | 24d36bd244260a96e67b826a4e5c9610 | ee86a576c05dfa06ee1a7a1f726272f6 | female
11 | 71397f30d1d2b0f590d0509db6d2fbfd | 4c3a9ed356dc2457db8dddfc9cbb7635 | female
12 | 176d6bd244eee16dd4e53707026fb26f | 189f7de8641acd36e76cb9e9499a516d | male
13 | b51f4cd03d5ec0191bcf562510d1ad4b | 7eac5237163daef2b19e6af58cbca109 | male
14 | 20f7d7a8d826790ca66365509ab8d6de | a6ae9ffdc540728d699c2e9f0395334d | female
15 | e3171ebb42a3f2af97ed342de24d6564 | 7d345b5782bc67bdcfd62b18e9bf05d5 | female
(15 rows)
migration_db=# \dt+
List of relations
Schema | Name | Type | Owner | Persistence | Size | Description
--------+-------+-------+----------+-------------+-------+-----------
public | users | table | postgres | permanent | 12 MB |
(1 row)
migration_db=# CREATE TABLE employee (id int8 NOT NULL,name varchar(120) NOT NULL,salary int8 NOT NULL,CONSTRAINT emp_pk PRIMARY KEY (id));
CREATE TABLE
migration_db=# \dt
List of relations
Schema | Name | Type | Owner
--------+----------+-------+----------
public | employee | table | postgres
public | users | table | postgres
(2 rows)
migration_db=# WITH salary_list AS ( SELECT '{1000, 2000, 5000}'::INT[] salary) INSERT INTO public.employee (id, name, salary) SELECT n, 'Employee ' || n as name, salary[1 + mod(n, array_length(salary, 1))] FROM salary_list, generate_series(1, 1000000) as n ;
INSERT 0 1000000
migration_db=# select count(*) from employee;
count
---------
1000000
(1 row)
migration_db=# select * from employee LIMIT 15;
id | name | salary
----+-------------+--------
1 | Employee 1 | 2000
2 | Employee 2 | 5000
3 | Employee 3 | 1000
4 | Employee 4 | 2000
5 | Employee 5 | 5000
6 | Employee 6 | 1000
7 | Employee 7 | 2000
8 | Employee 8 | 5000
9 | Employee 9 | 1000
10 | Employee 10 | 2000
11 | Employee 11 | 5000
12 | Employee 12 | 1000
13 | Employee 13 | 2000
14 | Employee 14 | 5000
15 | Employee 15 | 1000
(15 rows)
migration_db=# \c postgres
You are now connected to database "postgres" as user "postgres".
postgres=# CREATE DATABASE test_db;
CREATE DATABASE
postgres=# \l+
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges | Size | Tablespace | Description
--------------+----------+----------+-------------+-------------+----
dvdrental | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | 15 MB | pg_default |
migration_db | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | 101 MB | pg_default |
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | 7901 kB | pg_default | default administrative connection database
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +| 7753 kB | pg_default | unmodifiable empty database
| | | | | postgres=CTc/postgres | | |
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +| 7753 kB | pg_default | default template for new databases
| | | | | postgres=CTc/postgres | | |
test_db | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | 7753 kB | pg_default |
(6 rows)
postgres=# \c test_db
You are now connected to database "test_db" as user "postgres".
test_db=# CREATE TABLE users(id SERIAL PRIMARY KEY,email VARCHAR(40) NOT NULL UNIQUE);
CREATE TABLE
test_db=# CREATE TABLE posts(id SERIAL PRIMARY KEY,user_id INTEGER NOT NULL REFERENCES users(id),title VARCHAR(100) NOT NULL UNIQUE);
CREATE TABLE
test_db=# CREATE TABLE comments(id SERIAL PRIMARY KEY,user_id INTEGER NOT NULL REFERENCES users(id),post_id INTEGER NOT NULL REFERENCES posts(id),body VARCHAR(500) NOT NULL);
CREATE TABLE
test_db=# \dt
List of relations
Schema | Name | Type | Owner
--------+----------+-------+----------
public | comments | table | postgres
public | posts | table | postgres
public | users | table | postgres
(3 rows)
test_db=# INSERT INTO users(email)
SELECT
'user_' || seq || '@' || (
CASE (RANDOM() * 2)::INT
WHEN 0 THEN 'gmail'
WHEN 1 THEN 'hotmail'
WHEN 2 THEN 'yahoo'
END
) || '.com' AS email
FROM GENERATE_SERIES(1, 100000) seq;
INSERT 0 100000
test_db=# select count(*) from users;
count
--------
100000
(1 row)
test_db=# INSERT INTO posts(user_id, title)
WITH expanded AS (
SELECT RANDOM(), seq, u.id AS user_id
FROM GENERATE_SERIES(1, 50) seq, users u
), shuffled AS (
SELECT e.*
FROM expanded e
INNER JOIN (
SELECT ei.seq, MIN(ei.random) FROM expanded ei GROUP BY ei.seq
) em ON (e.seq = em.seq AND e.random = em.min)
ORDER BY e.seq
)
SELECT
s.user_id,
'It is ' || s.seq || ' ' || (
CASE (RANDOM() * 2)::INT
WHEN 0 THEN 'sql'
WHEN 1 THEN 'elixir'
WHEN 2 THEN 'ruby'
END
) as title
FROM shuffled s;
INSERT 0 50
test_db=# SELECT * FROM posts LIMIT 10;
id | user_id | title
----+---------+----------------
1 | 90224 | It is 1 elixir
2 | 90272 | It is 2 sql
3 | 48796 | It is 3 elixir
4 | 25106 | It is 4 sql
5 | 9129 | It is 5 sql
6 | 47149 | It is 6 elixir
7 | 12251 | It is 7 sql
8 | 8887 | It is 8 elixir
9 | 60728 | It is 9 ruby
10 | 14985 | It is 10 sql
(10 rows)
test_db=# INSERT INTO comments(user_id, post_id, body)
WITH expanded AS (
SELECT RANDOM(), seq, u.id AS user_id, p.id AS post_id
FROM GENERATE_SERIES(1, 10) seq, users u, posts p
), shuffled AS (
SELECT e.*
FROM expanded e
INNER JOIN (
SELECT ei.seq, MIN(ei.random) FROM expanded ei GROUP BY ei.seq
) em ON (e.seq = em.seq AND e.random = em.min)
ORDER BY e.seq
)
SELECT
    s.user_id,
    s.post_id,
    'Here some comment ' || s.seq AS body
FROM shuffled s;
INSERT 0 10
test_db=# SELECT * FROM comments LIMIT 10;
id | user_id | post_id | body
----+---------+---------+----------------------
1 | 51609 | 41 | Here some comment 1
2 | 77322 | 34 | Here some comment 2
3 | 52112 | 17 | Here some comment 3
4 | 54972 | 32 | Here some comment 4
5 | 65922 | 17 | Here some comment 5
6 | 48016 | 37 | Here some comment 6
7 | 63634 | 15 | Here some comment 7
8 | 3617 | 8 | Here some comment 8
9 | 25895 | 6 | Here some comment 9
10 | 78573 | 41 | Here some comment 10
(10 rows)
test_db=# \dt
List of relations
Schema | Name | Type | Owner
--------+----------+-------+----------
public | comments | table | postgres
public | genres | table | postgres
public | posts | table | postgres
public | users | table | postgres
(4 rows)
test_db=# CREATE TABLE kids_name (
id INT PRIMARY KEY,
name VARCHAR
);
CREATE TABLE
test_db=# INSERT INTO kids_name (id, name) VALUES (1, 'raizel');
INSERT 0 1
test_db=# INSERT INTO kids_name (id, name) VALUES (11, 'yeshua'), (2, 'abishai'),(3, 'abiel'),(4, 'dj'),(5, 'ann');
INSERT 0 5
test_db=# CREATE TABLE IF NOT EXISTS kids_audit (
id SERIAL PRIMARY KEY,
kids_id INT,
modified_on TIMESTAMP(6) NOT NULL,
kids_name CHARACTER VARYING(100)
);
CREATE TABLE
test_db=# CREATE OR REPLACE FUNCTION name_changes_log()
RETURNS TRIGGER AS
$BODY$
BEGIN
IF NEW.name <> OLD.name THEN
INSERT INTO kids_audit(kids_id,kids_name,modified_on)
VALUES(OLD.id,OLD.name,now());
END IF;
RETURN NEW;
END;
$BODY$ LANGUAGE plpgsql;
CREATE FUNCTION
test_db=# CREATE TRIGGER name_last_modified
BEFORE UPDATE
ON kids_name
FOR EACH ROW
EXECUTE PROCEDURE name_changes_log();
CREATE TRIGGER
test_db=# UPDATE kids_name SET name = 'Josh' WHERE ID = 5;
UPDATE 1
test_db=# select * from kids_audit;
id | kids_id | modified_on | kids_name
----+---------+----------------------------+-----------
1 | 5 | 2021-08-27 06:15:51.543493 | ann
(1 row)
test_db=# UPDATE kids_name SET name = 'vasu' WHERE ID = 5;
UPDATE 1
test_db=# select * from kids_audit;
id | kids_id | modified_on | kids_name
----+---------+----------------------------+-----------
1 | 5 | 2021-08-27 06:15:51.543493 | ann
2 | 5 | 2021-08-27 06:18:51.488951 | Josh
(2 rows)
test_db=# select * from kids_name;
id | name
----+---------
1 | raizel
11 | yeshua
2 | abishai
3 | abiel
4 | dj
5 | vasu
(6 rows)
postgres=# CREATE DATABASE kidsdb;
CREATE DATABASE
postgres=# \c kidsdb
You are now connected to database "kidsdb" as user "postgres".
kidsdb=# CREATE TABLE Employees
(
Id serial,
Name VARCHAR(100),
DateOfBirth Date,
City VARCHAR(100),
Designation VARCHAR(100),
JoiningDate Date
) ;
CREATE TABLE
kidsdb=# CREATE OR REPLACE PROCEDURE AddEmployee
(
EmpId INOUT INT,
EmpName VARCHAR(100),
EmpDob DATE,
EmpCity VARCHAR(100),
EmpDesignation VARCHAR(100),
EmpJoiningDate DATE
)
LANGUAGE plpgsql AS
$$
BEGIN
INSERT INTO Employees (Name,DateOfBirth,City,Designation,JoiningDate) VALUES
(EmpName,
EmpDob,
EmpCity,
EmpDesignation,
EmpJoiningDate
) RETURNING Id INTO EmpId;
END
$$;
CREATE PROCEDURE
kidsdb=# CALL AddEmployee(null,'Peter Parker','1997-10-01','New York' ,'Web Developer','2020-11-01');
empid
-------
1
(1 row)
kidsdb=# SELECT * FROM Employees;
id | name | dateofbirth | city | designation | joiningdate
----+--------------+-------------+----------+---------------+--------
1 | Peter Parker | 1997-10-01 | New York | Web Developer | 2020-11-01
(1 row)
kidsdb=# CREATE OR REPLACE PROCEDURE UpdateEmployee
(
EmpId INT,
EmpName VARCHAR(100),
EmpDob DATE,
EmpCity VARCHAR(100),
EmpDesignation VARCHAR(100),
EmpJoiningDate DATE
)
LANGUAGE plpgsql AS
$$
BEGIN
UPDATE Employees SET
Name = EmpName,
DateOfBirth = EmpDob,
City = EmpCity,
Designation = EmpDesignation,
JoiningDate = EmpJoiningDate
Where Id = EmpId;
END
$$;
CREATE PROCEDURE
kidsdb=# CALL UpdateEmployee(1,'Peter S Parker','1999-10-01','New York' ,'Web Developer','2020-11-01') ;
CALL
kidsdb=# CREATE OR REPLACE PROCEDURE DeleteEmployee
(
EmpId INT
)
LANGUAGE plpgsql AS
$$
BEGIN
DELETE FROM Employees WHERE Id = EmpId;
END
$$;
CREATE PROCEDURE
kidsdb=# CALL DeleteEmployee(2) ;
CALL
kidsdb=# CREATE OR REPLACE FUNCTION GetAllEmployees()
RETURNS Employees
LANGUAGE SQL
AS
$$
SELECT * FROM Employees;
$$;
CREATE FUNCTION
kidsdb=# SELECT * FROM GetAllEmployees() ;
id | name | dateofbirth | city | designation | joiningdate
----+----------------+-------------+----------+---------------+------
1 | Peter S Parker | 1999-10-01 | New York | Web Developer | 2020-11-01
(1 row)
kidsdb=# CREATE OR REPLACE FUNCTION GetEmployeeById(EmpId INT)
RETURNS Employees
LANGUAGE SQL
AS
$$
SELECT * FROM Employees WHERE Id = EmpId;
$$;
CREATE FUNCTION
kidsdb=# SELECT * FROM GetAllEmployees();
id | name | dateofbirth | city | designation | joiningdate
----+----------------+-------------+----------+---------------+------
1 | Peter S Parker | 1999-10-01 | New York | Web Developer | 2020-11-01
(1 row)
kidsdb=# SELECT * FROM GetEmployeeById(1) ;
id | name | dateofbirth | city | designation | joiningdate
----+----------------+-------------+----------+---------------+------
1 | Peter S Parker | 1999-10-01 | New York | Web Developer | 2020-11-01
(1 row)
kidsdb=# CREATE OR REPLACE FUNCTION GetEmployeeAge(EmpId INT, Age OUT VARCHAR(100))
LANGUAGE plpgsql
AS
$$
BEGIN
SELECT AGE(NOW()::Date,DateOfBirth) into Age FROM Employees WHERE Id = EmpId;
END;
$$ ;
CREATE FUNCTION
kidsdb=# SELECT * FROM GetEmployeeAge(1);
age
--------------------------
21 years 10 mons 26 days
(1 row)
kidsdb=# \dt
List of relations
Schema | Name | Type | Owner
--------+-----------+-------+----------
public | employees | table | postgres
(1 row)
kidsdb=# \c postgres
postgres=# CREATE DATABASE mrsdb;
CREATE DATABASE
postgres=# \l+
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges | Size | Tablespace | Description
--------------+----------+----------+-------------+-------------+----
dvdrental | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | 15 MB | pg_default |
kidsdb | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | 7965 kB | pg_default |
migration_db | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | 101 MB | pg_default |
mrsdb | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | 7753 kB | pg_default |
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | 7941 kB | pg_default | default administrative connection database
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +| 7753 kB | pg_default | unmodifiable empty database
| | | | | postgres=CTc/postgres | | |
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +| 7753 kB | pg_default | default template for new databases
| | | | | postgres=CTc/postgres | | |
test_db | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | 23 MB | pg_default |
(8 rows)
postgres=# CREATE USER mrs1_pg with password 'mrs1_pg';
CREATE ROLE
postgres=# GRANT ALL PRIVILEGES ON DATABASE mrsdb to mrs1_pg;
GRANT
postgres=# \du
List of roles
Role name | Attributes | Member of
--------------+------------------------------------------------------
migrate_user | Superuser | {}
mrs1_pg | | {}
postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
postgres=#
[root@postgres-source-server ~]# su - postgres
Last login: Fri Aug 27 10:20:59 UTC 2021 on pts/0
[postgres@postgres-source-server ~]$ cd /home/
[postgres@postgres-source-server home]$ ls -ltr
-rw-------. 1 postgres postgres 263 May 12 2019 3073.dat
-rwxrwxrwx. 1 postgres postgres 2835456 May 12 2019 dvdrental.tar
-rw-rw-r--. 1 postgres postgres 118410 Aug 20 15:52 Capture.PNG
-rw-rw-r--. 1 postgres postgres 470550 Aug 22 01:43 explain_analyze.PNG
-rw-rw-r--. 1 postgres postgres 319167 Aug 22 01:44 explain.PNG
-rw-rw-r--. 1 postgres postgres 31093580 Aug 27 10:18 mrsdb.sql
drwx------. 3 ec2-user ec2-user 95 Aug 27 10:21 ec2-user
[postgres@postgres-source-server home]$ psql -U postgres -d mrsdb -f mrsdb.sql
SET
SET
SET
SET
SET
...
...
ALTER TABLE
ALTER TABLE
[postgres@postgres-source-server home]$
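The transcript above does not show where mrsdb.sql came from; judging by the SET / ALTER TABLE output it is a plain-SQL dump of some existing database. For reference, a file like that can be produced with pg_dump (hypothetical source database name):
# run as the postgres OS user; writes a plain-SQL dump that psql -f can replay
pg_dump -U postgres -d some_source_db --no-owner -f /home/mrsdb.sql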
[postgres@postgres-source-server home]$ psql -U postgres
psql (13.4)
Type "help" for help.
postgres=# CREATE DATABASE blob;
CREATE DATABASE
postgres=# \c blob
You are now connected to database "blob" as user "postgres".
blob=# create table blob_table(id int, image bytea);
CREATE TABLE
blob=# \dt
List of relations
Schema | Name | Type | Owner
--------+------------+-------+----------
public | blob_table | table | postgres
(1 row)
blob=# \d+ blob_table
Table "public.blob_table"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
--------+---------+-----------+----------+---------+----------+------
id | integer | | | | plain | |
image | bytea | | | | extended | |
Access method: heap
blob=# create or replace function bytea_import(p_path text, p_result out bytea)
language plpgsql as $$
declare
l_oid oid;
r record;
begin
p_result := '';
select lo_import(p_path) into l_oid;
for r in ( select data
from pg_largeobject
where loid = l_oid
order by pageno ) loop
p_result = p_result || r.data;
end loop;
perform lo_unlink(l_oid);
end;$$;
CREATE FUNCTION
blob=# \d blob_table
Table "public.blob_table"
Column | Type | Collation | Nullable | Default
--------+---------+-----------+----------+---------
id | integer | | |
image | bytea | | |
blob=# insert into blob_table(image) select bytea_import('/home/Capture.PNG');
INSERT 0 1
blob=# insert into blob_table(image) select bytea_import('/home/explain.PNG');
INSERT 0 1
blob=# insert into blob_table(image) select bytea_import('/home/explain_analyze.PNG');
INSERT 0 1
blob=# select count(1) from blob_table;
count
-------
3
(1 row)
blob=# \d+
List of relations
Schema | Name | Type | Owner | Persistence | Size | Description
--------+------------+-------+----------+-------------+--------+-----
public | blob_table | table | postgres | permanent | 976 kB |
(1 row)
blob=# \d+ blob_table
Table "public.blob_table"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
--------+---------+-----------+----------+---------+----------+------
id | integer | | | | plain | |
image | bytea | | | | extended | |
Access method: heap
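As a quick sanity check (an extra step, not part of the original flow), the stored byte length of each image can be compared with the file sizes listed below:
blob=# -- octet_length() returns the size of each bytea value in bytes; it should match the PNG sizes on disk
blob=# select octet_length(image) as image_bytes from blob_table;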
I have images like this
[postgres@postgres-source-server home]$ pwd
/home
[postgres@postgres-source-server home]$ ls -lr
-rw-rw-r--. 1 postgres postgres 319167 Aug 22 01:44 explain.PNG
-rw-rw-r--. 1 postgres postgres 470550 Aug 22 01:43 explain_analyze.PNG
-rw-rw-r--. 1 postgres postgres 118410 Aug 20 15:52 Capture.PNG
---------------------------------------------------------------------
Step - 5 Create Super User / Remote user for PostgreSQL and MySQL Database Server
---------------------------------------------------------------------
Additional information about user creation in PostgreSQL and MySQL :-
---------------------------------------------------------------------
PostgreSQL :-
-------------
Create the AWS DMS user or any user to connect to your target database, and grant Superuser or the necessary individual privileges (or use the master username for RDS).
CREATE USER postgresql_dms_user WITH PASSWORD 'password';
ALTER USER postgresql_dms_user WITH SUPERUSER;
Create a user for AWS SCT.
Or
The privileges required for PostgreSQL as a source database are listed following:
CONNECT ON DATABASE <database_name>
USAGE ON SCHEMA <database_name>
SELECT ON ALL TABLES IN SCHEMA <database_name>
SELECT ON ALL SEQUENCES IN SCHEMA <database_name>
MySQL :-
---------
CREATE USER 'migration_user'@'%' IDENTIFIED BY 'Root@123';
GRANT ALTER, CREATE, DROP, INDEX, INSERT, UPDATE, DELETE,SELECT ON *.* TO 'migration_user'@'%';
GRANT ALL PRIVILEGES ON *.* TO 'migration_user'@'%';
flush privileges;
In PostgreSQL :-
----------------------
[postgres@postgres-source-server ~]$ psql -U postgres
psql (13.4)
Type "help" for help.
postgres=# \du
List of roles
Role name | Attributes | Member of
-----------+---------------------------------------------------------
postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
postgres=# select usename from pg_user;
usename
----------
postgres
(1 row)
postgres=# CREATE ROLE migrate_user WITH LOGIN SUPERUSER PASSWORD 'migrate_user';
CREATE ROLE
postgres=# select usename from pg_user;
usename
--------------
postgres
migrate_user
(2 rows)
postgres=# \du
List of roles
Role name | Attributes | Member of
--------------+------------------------------------------------------
migrate_user | Superuser | {}
postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
In MySQL :-
--------------
mysql> select user,host,authentication_string from mysql.user;
+------------------+-----------+-------------------------------------
| user | host | authentication_string |
+------------------+-----------+-------------------------------------
| mysql.infoschema | localhost | $A$005$THISISACOMBINATIONOFINVALIDSALTANDPASSWORDTHATMUSTNEVERBRBEUSED |
| mysql.session | localhost | $A$005$THISISACOMBINATIONOFINVALIDSALTANDPASSWORDTHATMUSTNEVERBRBEUSED |
| mysql.sys | localhost | $A$005$THISISACOMBINATIONOFINVALIDSALTANDPASSWORDTHATMUSTNEVERBRBEUSED |
| root | localhost | DG*pwbcdktmeDwPnE/qCRaKsqiq8mjwoEWU2Gs4OQhD6zkV. |
+------------------+-----------+-------------------------------------
4 rows in set (0.00 sec)
mysql> CREATE user 'root'@'%' IDENTIFIED BY 'Root@123';
Query OK, 0 rows affected (0.01 sec)
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
Query OK, 0 rows affected (0.00 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
mysql> select user,host,authentication_string from mysql.user;
+------------------+-----------+-------------------------------------
| user | host | authentication_string |
+------------------+-----------+-------------------------------------
| root | % | $A$005$;AZ*{"Ktco(YxeTfLv06bgc3IjI3FswaPnSgI6v.wEOxxUFgL9QeMcN8W4S. |
| mysql.infoschema | localhost | $A$005$THISISACOMBINATIONOFINVALIDSALTANDPASSWORDTHATMUSTNEVERBRBEUSED |
| mysql.session | localhost | $A$005$THISISACOMBINATIONOFINVALIDSALTANDPASSWORDTHATMUSTNEVERBRBEUSED |
| mysql.sys | localhost | $A$005$THISISACOMBINATIONOFINVALIDSALTANDPASSWORDTHATMUSTNEVERBRBEUSED |
| root | localhost | DG*pwbcdktmeDwPnE/qCRaKsqiq8mjwoEWU2Gs4OQhD6zkV. |
+------------------+-----------+-------------------------------------
5 rows in set (0.00 sec)
mysql> CREATE user 'migrate_user'@'%' IDENTIFIED BY 'Migrate@123';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON *.* TO 'migrate_user'@'%' WITH GRANT OPTION;
Query OK, 0 rows affected (0.00 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
mysql> select user,host,authentication_string from mysql.user;
+------------------+-----------+-------------------------------------
| user | host | authentication_string |
+------------------+-----------+-------------------------------------
| migrate_user | % | $A$005$fZa?LsYT3M|cd`uMvrIdCKILiHx19gAQkJHGtIqo1EhxaeacUHGW6qIe0 |
| root | % | $A$005$;AZ*{"Ktco(YxeTfLv06bgc3IjI3FswaPnSgI6v.wEOxxUFgL9QeMcN8W4S. |
| mysql.infoschema | localhost | $A$005$THISISACOMBINATIONOFINVALIDSALTANDPASSWORDTHATMUSTNEVERBRBEUSED |
| mysql.session | localhost | $A$005$THISISACOMBINATIONOFINVALIDSALTANDPASSWORDTHATMUSTNEVERBRBEUSED |
| mysql.sys | localhost | $A$005$THISISACOMBINATIONOFINVALIDSALTANDPASSWORDTHATMUSTNEVERBRBEUSED |
| root | localhost | DG*pwbcdktmeDwPnE/qCRaKsqiq8mjwoEWU2Gs4OQhD6zkV. |
+------------------+-----------+-------------------------------------
6 rows in set (0.00 sec)
Or
Final, successful user creation :-
-------------------------------------------
Final user creation in PostgreSQL :-
---------------------------------------
postgres=# select usename from pg_user;
usename
----------
mrs1_pg
postgres
(2 rows)
postgres=# \du+
List of roles
Role name | Attributes | Member of | Description
-----------+---------------------------------------------------------
mrs1_pg | | {} |
postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {} |
postgres=# \du
List of roles
Role name | Attributes | Member of
-----------+---------------------------------------------------------
mrs1_pg | | {}
postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
postgres=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
--------------+----------+----------+-------------+-------------+----
blob | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
dvdrental | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
kidsdb | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
migration_db | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
mrsdb | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres +
| | | | | postgres=CTc/postgres+
| | | | | mrs1_pg=CTc/postgres
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
test_db | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
(9 rows)
postgres=# CREATE USER migration_user WITH PASSWORD 'migration_user';
CREATE ROLE
postgres=# GRANT CONNECT ON DATABASE mrsdb TO migration_user;
GRANT
postgres=# GRANT USAGE ON SCHEMA public TO migration_user;
GRANT
postgres=# GRANT SELECT ON ALL TABLES IN SCHEMA public TO migration_user;
GRANT
postgres=# GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO migration_user;
GRANT
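Note that GRANT SELECT ON ALL TABLES / SEQUENCES only covers objects that already exist. If any tables might still be created in the source schema before the migration runs, default privileges can be granted as well; an optional sketch, not part of the original steps:
postgres=# \c mrsdb
mrsdb=# -- optional: also make future tables/sequences in schema public readable by migration_user
mrsdb=# ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO migration_user;
mrsdb=# ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON SEQUENCES TO migration_user;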
In MySQL :-
----------------
mysql> select user,host from mysql.user;
+------------------+-----------+
| user | host |
+------------------+-----------+
| root | % |
| mysql.infoschema | localhost |
| mysql.session | localhost |
| mysql.sys | localhost |
| root | localhost |
+------------------+-----------+
5 rows in set (0.00 sec)
mysql> CREATE USER 'migration_user'@'%' IDENTIFIED BY 'Root@123';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT ALTER, CREATE, DROP, INDEX, INSERT, UPDATE, DELETE,SELECT ON *.* TO 'migration_user'@'%';
Query OK, 0 rows affected (0.01 sec)
mysql> GRANT ALL PRIVILEGES ON *.* TO 'migration_user'@'%';
Query OK, 0 rows affected (0.00 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
mysql> select user,host from mysql.user;
+------------------+-----------+
| user | host |
+------------------+-----------+
| migration_user | % |
| root | % |
| mysql.infoschema | localhost |
| mysql.session | localhost |
| mysql.sys | localhost |
| root | localhost |
+------------------+-----------+
6 rows in set (0.00 sec)
mysql> show grants for 'migration_user'@'%';
+--------------------------------------------------------------------
| Grants for migration_user@% |
+--------------------------------------------------------------------
| GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, SHUTDOWN, PROCESS, FILE, REFERENCES, INDEX, ALTER, SHOW DATABASES, SUPER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER, CREATE TABLESPACE, CREATE ROLE, DROP ROLE ON *.* TO `migration_user`@`%` |
| GRANT APPLICATION_PASSWORD_ADMIN,AUDIT_ADMIN,BACKUP_ADMIN,BINLOG_ADMIN,BINLOG_ENCRYPTION_ADMIN,CLONE_ADMIN,CONNECTION_ADMIN,ENCRYPTION_KEY_ADMIN,FLUSH_OPTIMIZER_COSTS,FLUSH_STATUS,FLUSH_TABLES,FLUSH_USER_RESOURCES,GROUP_REPLICATION_ADMIN,INNODB_REDO_LOG_ARCHIVE,INNODB_REDO_LOG_ENABLE,PERSIST_RO_VARIABLES_ADMIN,REPLICATION_APPLIER,REPLICATION_SLAVE_ADMIN,RESOURCE_GROUP_ADMIN,RESOURCE_GROUP_USER,ROLE_ADMIN,SERVICE_CONNECTION_ADMIN,SESSION_VARIABLES_ADMIN,SET_USER_ID,SHOW_ROUTINE,SYSTEM_USER,SYSTEM_VARIABLES_ADMIN,TABLE_ENCRYPTION_ADMIN,XA_RECOVER_ADMIN ON *.* TO `migration_user`@`%` |
+--------------------------------------------------------------------
2 rows in set (0.01 sec)
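GRANT ALL ON *.* keeps things simple for a one-off migration, but a tighter alternative (a sketch, limited to the databases used in this exercise) is to scope the grants per schema:
mysql> GRANT ALTER, CREATE, DROP, INDEX, INSERT, UPDATE, DELETE, SELECT ON mrsdb.* TO 'migration_user'@'%';
mysql> GRANT ALTER, CREATE, DROP, INDEX, INSERT, UPDATE, DELETE, SELECT ON dvdrental.* TO 'migration_user'@'%';
mysql> FLUSH PRIVILEGES;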
---------------------------------------------------------------------
Step - 6 Moving / Generating DDL's of PostgreSQL Through AWS Schema Conversion Tool / MySQL Workbench Tool
---------------------------------------------------------------------
(i) First install AWS Schema conversion tool locally by using below link
https://aws.amazon.com/dms/schema-conversion-tool/
(ii) Since we are migrating PostgreSQL to MySQL, we need to download and install the PostgreSQL JDBC driver and the MySQL JDBC driver from the links below so the tool can connect to the source and target DB's.
PostgreSQL Drivers :- https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Installing.html#CHAP_Installing.JDBCDrivers
https://jdbc.postgresql.org/download.html#current
MySQL JDBC Drivers :- https://dev.mysql.com/downloads/
(iii) If we are using the AWS Schema Conversion Tool, once it has moved all the objects (or all objects with data), take a backup of only the DDL's for all schemas and tables by using the commands below (a loop form of these commands is sketched after the list).
mysqldump -u root -p --no-data blob > blob.sql
mysqldump -u root -p --no-data mrsdb > mrsdb.sql
mysqldump -u root -p --no-data migration_db > migration_db.sql
mysqldump -u root -p --no-data dvdrental > dvdrental.sql
mysqldump -u root -p --no-data test_db > test_db.sql
mysqldump -u root -p --no-data kidsdb > kidsdb.sql
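The same schema-only backups can be taken in a single loop (a sketch; mysqldump prompts for the password once per database because of -p):
# same six databases as the individual commands above
for db in blob mrsdb migration_db dvdrental test_db kidsdb; do
    mysqldump -u root -p --no-data "$db" > "${db}.sql"
done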
-------------------------------------------------------------------
Step - 7 Moving / Generating DDL's of PostgreSQL Through Full converter Tool / MySQL Workbench Tool and Workbench Issues and Solutions
-------------------------------------------------------------------
(i) Install the Full Converter s/w [ 14-day free trial ] and move the data from PostgreSQL to MySQL, then use the commands below to take a backup of the schemas only (DDL's of all schemas).
Export the DDL's for mrsdb by using mysqldump after the Full Converter tool has completed the migration with data ..
[root@mysql-target-server ~]# mysqldump -u root -p --no-data mrsdb > mrsdb.sql ====> Including indexes backup
Enter password:
[root@mysql-target-server ~]#
[root@mysql-target-server ec2-user]# mysqldump -u root -p --no-data migration_db > migration_db.sql
Enter password:
[root@mysql-target-server ec2-user]#
[root@mysql-target-server ec2-user]# mysqldump -u root -p --no-data blob > blob.sql
Enter password:
[root@mysql-target-server ec2-user]#
[root@mysql-target-server ec2-user]# mysqldump -u root -p --no-data dvdrental > dvdrental.sql
Enter password:
[root@mysql-target-server ec2-user]#
mrsdb :-
Tables = 847
Indexes = 2661
Foreign Keys = 837
migration_db :-
Tables = 2
dvdrental :-
Tables = 15
Indexes = 28
Foreign Keys = 14
(ii) While doing the migration from PostgreSQL to MySQL through Workbench, here are known issues and the solutions I am going to provide.
Near the Schema Selection step of the Workbench Migration Wizard :-
Error information :-
Starting...
Connect to source DBMS...
Connecting...
Connecting to postgresql@DRIVER=PostgreSQL ANSI;SERVER=50.16.135.236;PORT=5432...
Opening ODBC connection to DRIVER=PostgreSQL ANSI;SERVER=50.16.135.236;PORT=5432;DATABASE=kidsdb;UID=postgres;UseDeclareFetch=1;...
Connected
Connect to source DBMS done
Reverse engineer selected schemas....
Reverse engineering public from kidsdb
- Reverse engineering catalog information
Traceback (most recent call last):
File "C:\Program Files\MySQL\MySQL Workbench 8.0\workbench\wizard_progress_page_widget.py", line 190, in thread_work
self.func()
File "C:\Program Files\MySQL\MySQL Workbench 8.0\modules\migration_schema_selection.py", line 176, in task_reveng
self.main.plan.migrationSource.reverseEngineer()
File "C:\Program Files\MySQL\MySQL Workbench 8.0\modules\migration.py", line 357, in reverseEngineer
self.state.sourceCatalog = self._rev_eng_module.reverseEngineer(self.connection, self.selectedCatalogName, self.selectedSchemataNames, self.state.applicationData)
SystemError: ProgrammingError("('42703', '[42703] ERROR: column "min_value" does not exist;\nError while executing the query (1) (SQLExecDirectW)')"): error calling Python module function DbPostgresqlRE.reverseEngineer
ERROR: Reverse engineer selected schemas: ProgrammingError("('42703', '[42703] ERROR: column "min_value" does not exist;\nError while executing the query (1) (SQLExecDirectW)')"): error calling Python module function DbPostgresqlRE.reverseEngineer
Failed
Solution : -
-----------------
In the MySQL Workbench installation folder, in the file WORKBENCH_INSTALLATION_PATH/modules/db_postgresql_re_grt.py, I changed this (line 74):
From
seq_details_query = """SELECT min_value, max_value, start_value, increment_by, last_value, is_cycled, cache_value FROM "%s"."%s" """
To this (see some additional changes):
seq_details_query = """SELECT ps.min_value, ps.max_value, ps.start_value, ps.increment_by, "%s".last_value, ps.cycle as is_cycled, ps.cache_size as cache_value FROM "%s"."%s" JOIN pg_sequences ps ON ps.sequencename = '%s' """
Also do not forget to add additional parameters to query execution (line 80):
... seq_details_query % (seq_name, schema.name, seq_name, seq_name) ...
And the process went smoothly. The cause is that PostgreSQL 10 and later (including the 13.4 used here) store sequence metadata such as min_value in the pg_sequences catalog view rather than in the sequence relation itself.
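To see the metadata that the patched query reads, the pg_sequences catalog view can be checked directly on the source (a quick verification, not required for the fix itself):
postgres=# -- pg_sequences (PostgreSQL 10+) exposes min_value, max_value, start_value, increment_by, cycle, cache_size, last_value
postgres=# select schemaname, sequencename, min_value, max_value, increment_by, cache_size from pg_sequences limit 5;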
(iii) Another known issue and solution while doing the migration from PostgreSQL to MySQL through Workbench.
Near the Reverse Engineer Source step of the Workbench Migration Wizard :-
Error info :-
Traceback (most recent call last):
File "C:\Program Files\MySQL\MySQL Workbench 8.0\workbench\wizard_progress_page_widget.py", line 190, in thread_work
self.func()
File "C:\Program Files\MySQL\MySQL Workbench 8.0\modules\migration_object_migration.py", line 110, in task_migrate
self.main.plan.migrate()
File "C:\Program Files\MySQL\MySQL Workbench 8.0\modules\migration.py", line 503, in migrate
self.migrationTarget.catalog = self.migrationSource.migration.migrateCatalog(self.state, self.migrationSource.catalog)
SystemError: TypeError("'>' not supported between instances of 'NoneType' and 'int'"): error calling Python module function DbPostgresqlMigration.migrateCatalog
ERROR: Migrate Selected Objects: TypeError("'>' not supported between instances of 'NoneType' and 'int'"): error calling Python module function DbPostgresqlMigration.migrateCatalog
Failed
Solution :-
Delete the line below:
prefix_length_limit = min(referenced_index_col.length, MYSQL_MAX_INDEX_KEY_LENGTH_INNODB_UTF8) if referenced_index_col.length > 0 else MYSQL_MAX_INDEX_KEY_LENGTH_INNODB_UTF8
Add the line below, as per the link:
prefix_length_limit = MYSQL_MAX_INDEX_KEY_LENGTH_INNODB_UTF8
File name = db_generic_migration_grt.py
Location = C:\Program Files\MySQL\MySQL Workbench 8.0\modules
https://stackoverflow.com/questions/66103613/mysql-workbench-migration-wizard-erre-not-supported-between-instances-of-int
But here I was still facing the same issue even after doing the above step.
---------------------------------------------------------------------
Step - 8 Final DDL and Schema definitions gathered from Workbench Tool
---------------------------------------------------------------------
CREATE DATABASE `kidsdb` /*!40100 DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci */ /*!80016 DEFAULT ENCRYPTION='N' */;
CREATE DATABASE `blob` /*!40100 DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci */ /*!80016 DEFAULT ENCRYPTION='N' */;
CREATE DATABASE `migration_db` /*!40100 DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci */ /*!80016 DEFAULT ENCRYPTION='N' */;
CREATE DATABASE `mrsdb` /*!40100 DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci */ /*!80016 DEFAULT ENCRYPTION='N' */;
CREATE DATABASE `test_db` /*!40100 DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci */ /*!80016 DEFAULT ENCRYPTION='N' */;
CREATE DATABASE `dvdrental` /*!40100 DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci */ /*!80016 DEFAULT ENCRYPTION='N' */;
Finally, I gathered the DDL of tables, views, functions, triggers, sequences, and procedures for the above databases (PostgreSQL to MySQL) by using the AWS Schema Conversion Tool and the MySQL Workbench tool.
Here are my final scripts :-
[root@mysql-target-server final_scripts]# pwd
/home/ec2-user/final/final_scripts
[root@mysql-target-server final_scripts]# ls -ltr
-rw-rw-r--. 1 ec2-user ec2-user 2637 Aug 31 01:27 create_schemas.sql
-rw-r--r--. 1 root root 1178 Aug 31 01:29 blob_db_ddl_table_function.sql
-rw-r--r--. 1 root root 354 Aug 31 01:45 kidsdb_db_ddl_table.sql
-rw-r--r--. 1 root root 4458 Aug 31 01:49 test_db_ddl_table.sql
-rw-r--r--. 1 root root 1984 Aug 31 02:03 migration_db_ddl_table.sql
-rw-r--r--. 1 root root 1817925 Aug 31 02:07 mrsdb_tables_views_ddl.sql
-rw-r--r--. 1 root root 30362 Aug 31 02:18 dvdrental_tables_views_functions_sequences_ddl.sql
[root@mysql-target-server final_scripts]#
---------------------------------------------------------------------
Step - 9 Take a backup of DDL's for all Databases in PostgreSQL [ For Safer Side ]
---------------------------------------------------------------------
Backup in PostgreSQL Database :- [ Only DDL's of all Databases ]
[postgres@postgres-source-server tmp]$ cd backup_ddl/
[postgres@postgres-source-server backup_ddl]$ pwd
/tmp/backup_ddl
[postgres@postgres-source-server backup_ddl]$ ls -ltr
[postgres@postgres-source-server backup_ddl]$ pg_dump --username=postgres -s -d dvdrental --file=dvdrental.sql
[postgres@postgres-source-server backup_ddl]$ pg_dump --username=postgres -s -d blob --file=blob.sql
[postgres@postgres-source-server backup_ddl]$ pg_dump --username=postgres -s -d kidsdb --file=kidsdb.sql
[postgres@postgres-source-server backup_ddl]$ pg_dump --username=postgres -s -d migration_db --file=migration_db.sql
[postgres@postgres-source-server backup_ddl]$ pg_dump --username=postgres -s -d mrsdb --file=mrsdb.sql
[postgres@postgres-source-server backup_ddl]$ pg_dump --username=postgres -s -d test_db --file=test_db.sql
[postgres@postgres-source-server backup_ddl]$ pg_dumpall --globals-only --file=users.sql
[postgres@postgres-source-server backup_ddl]$ ls -ltr
-rw-r--r--. 1 postgres postgres 38537 Aug 27 12:02 dvdrental.sql
-rw-r--r--. 1 postgres postgres 1347 Aug 27 12:02 blob.sql
-rw-r--r--. 1 postgres postgres 4444 Aug 27 12:03 kidsdb.sql
-rw-r--r--. 1 postgres postgres 1829 Aug 27 12:03 migration_db.sql
-rw-r--r--. 1 postgres postgres 1873326 Aug 27 12:03 mrsdb.sql
-rw-r--r--. 1 postgres postgres 884 Aug 27 12:09 users.sql
-rw-r--r--. 1 postgres postgres 1050 Aug 27 12:15 test_db.sql
[postgres@postgres-source-server backup_ddl]$
dvdrental :-
Views = 7 = Success
Functions = 8 = Success
Triggers = 1 = Not success
Types = 1 = Not success
Domains = 1 = Not success
Aggregates = 1 = Not success
---------------------------------------------------------------------
Step - 10 Most Imp Queries while doing Migration from PostgreSQL to MySQL
---------------------------------------------------------------------
In PostgreSQL :-
------------------
(i) List of Databases.
SELECT datname FROM pg_database;
(ii) List of Users.
SELECT usename from pg_user;
(iii) List of User Permissions :-
\du+
(iv) List of tables with row count of each table :-
Before running this command , we should connect to the Database .
postgres=#\c dvdrental
dvdrental=# select n.nspname as table_schema, c.relname as table_name, c.reltuples as rows from pg_class c join pg_namespace n on n.oid = c.relnamespace where c.relkind = 'r' and n.nspname not in ('information_schema','pg_catalog') order by c.reltuples desc;
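Keep in mind that pg_class.reltuples is only a planner estimate (refreshed by VACUUM / ANALYZE); for the final validation an exact per-table count is safer. A minimal sketch for a single dvdrental table:
dvdrental=# -- exact row count for one table; repeat per table or generate the statements from pg_tables
dvdrental=# select count(*) from public.actor;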
(v) List of Tables , MV , Index , Sequences , Views , Types in given database
Before running this command , we should connect to the Database .
postgres=#\c dvdrental
dvdrental=# select nsp.nspname as SchemaName,cls.relname as ObjectName,rol.rolname as ObjectOwner,case cls.relkind
when 'r' then 'TABLE'
when 'm' then 'MATERIALIZED_VIEW'
when 'i' then 'INDEX'
when 'S' then 'SEQUENCE'
when 'v' then 'VIEW'
when 'c' then 'TYPE'
else cls.relkind::text
end as ObjectType from pg_class cls
join pg_roles rol on rol.oid = cls.relowner
join pg_namespace nsp on nsp.oid = cls.relnamespace where nsp.nspname not in ('information_schema', 'pg_catalog') and nsp.nspname not like 'pg_toast%' and rol.rolname = 'postgres'
order by ObjectType, 2;
(vi) List of objects individually
Before running this command , we should connect to the Database .
postgres=#\c dvdrental
dvdrental=#
\dv ==> List of Views
\dt or SELECT * FROM pg_catalog.pg_tables WHERE schemaname != 'pg_catalog' AND schemaname != 'information_schema'; ==> List of Tables
\df ==> List of Functions
\dD ==> List of Domains
\ds ==> List of sequences
\da ==> List of aggregate functions
select event_object_schema as table_schema,event_object_table as table_name,trigger_schema, trigger_name,string_agg(event_manipulation, ',') as event,action_timing as activation,action_condition as condition,action_statement as definition from information_schema.triggers group by 1,2,3,4,6,7,8 order by table_schema, table_name; ===> List of Triggers
or
select DISTINCT trigger_name,event_object_schema as table_schema ,event_object_table as table_name,trigger_schema from information_schema.triggers;
or
select DISTINCT trigger_name from information_schema.triggers;
In MySQL :-
-------------
https://docs.aws.amazon.com/dms/latest/sbs/chaprdsoracle2aurora.steps.verifydatamigration.html
(i) To get list of Tables in entire Instance Level.
SELECT TABLE_SCHEMA,TABLE_NAME,TABLE_ROWS FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA not in ('sys', 'information_schema','mysql', 'performance_schema') and TABLE_TYPE='BASE TABLE' order by 1;
Or
SELECT TABLE_NAME,TABLE_ROWS FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'migration_db' and TABLE_TYPE='BASE TABLE' order by 1;
(ii) To get list of views in entire Instance Level.
SELECT TABLE_NAME FROM information_schema.`TABLES` WHERE table_schema not in ('sys', 'information_schema','mysql', 'performance_schema') and TABLE_TYPE LIKE 'VIEW' ;
Or
SELECT TABLE_NAME FROM information_schema.`TABLES` WHERE TABLE_TYPE LIKE 'VIEW' AND TABLE_SCHEMA LIKE 'mrsdb';
(iii) To get list of Routines / Procedures in entire Instance Level.
select routine_schema as database_name,routine_name,routine_type as type,data_type as return_type from information_schema.routines where routine_schema not in ('sys', 'information_schema','mysql', 'performance_schema');
or
SHOW PROCEDURE STATUS WHERE Db = 'databasename';
Or
select * from information_schema.ROUTINES where ROUTINE_SCHEMA='YOUR DATABASE NAME' and ROUTINE_TYPE='FUNCTION';
(iv) To get list of Functions in entire Instance Level.
SHOW FUNCTION STATUS WHERE Db = 'db_name';
(v) To get list of triggeres in entire Instance Level.
select TRIGGER_SCHEMA,TRIGGER_NAME from information_schema.triggers
Or
select TRIGGER_SCHEMA,TRIGGER_NAME from information_schema.triggers where TRIGGER_SCHEMA='migration';
(vi) To get list of indexes in entire Instance level
SELECT DISTINCT TABLE_NAME, INDEX_NAME FROM INFORMATION_SCHEMA.STATISTICS WHERE table_schema not in ('sys', 'information_schema','mysql', 'performance_schema') and TABLE_SCHEMA =`schema_name`;
Or
SELECT DISTINCT TABLE_NAME, INDEX_NAME FROM INFORMATION_SCHEMA.STATISTICS WHERE table_schema not in ('sys', 'information_schema','mysql', 'performance_schema');
(vii) To get list of secondary indexes in entire Instance Level.
SELECT DISTINCT TABLE_NAME, INDEX_NAME,NON_UNIQUE FROM INFORMATION_SCHEMA.STATISTICS WHERE INDEX_NAME <> 'PRIMARY';
Or
SELECT DISTINCT TABLE_NAME, INDEX_NAME,NON_UNIQUE FROM INFORMATION_SCHEMA.STATISTICS WHERE TABLE_SCHEMA = 'migration' and INDEX_NAME <> 'PRIMARY';
(viii) To get list of Foreign keys in entire Instance Level.
SELECT TABLE_NAME, COLUMN_NAME, CONSTRAINT_NAME, REFERENCED_TABLE_NAME, REFERENCED_COLUMN_NAME FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE;
Or
SELECT TABLE_NAME, COLUMN_NAME, CONSTRAINT_NAME, REFERENCED_TABLE_NAME, REFERENCED_COLUMN_NAME FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE WHERE REFERENCED_TABLE_SCHEMA ='mrsdb';
(ix) Use the following command to export table and index definitions.
mysqldump --no-data --no-create-db --single_transaction -u root -p migration --skip-triggers > mysql_tables_indexes.sql
(x) Use the following command to export routines (stored procedures, functions, and triggers) into the file routines.sql:
mysqldump -u root --routines --no-create-info --no-data --no-create-db --skip-opt -p migration > routines.sql
(xi) Run the following query to check relationships between tables; this query finds departments with more than 10 employees. It comes from an Oracle-to-MySQL validation; an adapted sketch for this migration follows the query.
SELECT B.DEPARTMENT_NAME,COUNT(*) FROM HR.EMPLOYEES A,HR.DEPARTMENTS B WHERE A.DEPARTMENT_ID=B.DEPARTMENT_ID GROUP BY B.DEPARTMENT_NAME HAVING COUNT(*) > 10 ORDER BY 1;
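The same kind of relationship check can be adapted to the databases migrated here; a sketch against the migrated dvdrental sample schema, counting films per category with more than 10 films:
SELECT c.name AS category_name, COUNT(*)
FROM dvdrental.film_category fc
JOIN dvdrental.category c ON c.category_id = fc.category_id
GROUP BY c.name
HAVING COUNT(*) > 10
ORDER BY 1;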
(xii) Views Backup
mysql -u root -p INFORMATION_SCHEMA --skip-column-names --batch -e "select table_name from tables where table_type = 'VIEW' and table_schema = 'mrsdb'" | xargs mysqldump -u root -p mrsdb > mrsdb_views.sql
---------------------------------------------------------------------
Step - 11 Configure client packages for PostgreSQL and MySQL to connect from jump host to Source and Target DB Servers
---------------------------------------------------------------------
Install MySQL client packages on the jump host..
[root@jump-host ~]# hostname
jump-host
[root@jump-host ~]# hostname -I
172.31.27.241
[root@jump-host ~]# yum search mysql
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Last metadata expiration check: 0:00:34 ago on Tue 31 Aug 2021 05:30:16 AM UTC.
============ Name & Summary Matched: mysql ==============================
mysql.x86_64 : MySQL client programs and shared libraries
apr-util-mysql.x86_64 : APR utility library MySQL DBD driver
dovecot-mysql.x86_64 : MySQL back end for dovecot
freeradius-mysql.x86_64 : MySQL support for freeradius
grafana-mysql.x86_64 : Grafana mysql datasource
mysql-common.x86_64 : The shared files required for MySQL server and client
mysql-devel.x86_64 : Files for development of MySQL applications
mysql-errmsg.x86_64 : The error messages files required by MySQL server
mysql-libs.x86_64 : The shared libraries required for MySQL clients
mysql-selinux.noarch : SELinux policy modules for MySQL and MariaDB packages
mysql-server.x86_64 : The MySQL server and related files
mysql-test.x86_64 : The test suite distributed with MySQL
pcp-pmda-mysql.x86_64 : Performance Co-Pilot (PCP) metrics for MySQL
perl-DBD-MySQL.x86_64 : A MySQL interface for Perl
php-mysqlnd.x86_64 : A module for PHP applications that use MySQL databases
postfix-mysql.x86_64 : Postfix MySQL map support
python2-PyMySQL.noarch : Pure-Python MySQL client library
python3-PyMySQL.noarch : Pure-Python MySQL client library
python38-PyMySQL.noarch : Pure-Python MySQL client library
python39-PyMySQL.noarch : Pure-Python MySQL client library
qt5-qtbase-mysql.x86_64 : MySQL driver for Qt5's SQL classes
qt5-qtbase-mysql.i686 : MySQL driver for Qt5's SQL classes
rsyslog-mysql.x86_64 : MySQL support for rsyslog
rubygem-mysql2.x86_64 : A simple, fast Mysql library for Ruby, binding to libmysql
rubygem-mysql2-doc.noarch : Documentation for rubygem-mysql2
=================================================================== Summary Matched: mysql =================================================
mariadb-devel.x86_64 : Files for development of MariaDB/MySQL applications
mariadb-java-client.noarch : Connects applications developed in Java to MariaDB and MySQL databases
mariadb-server-utils.x86_64 : Non-essential server utilities for MariaDB/MySQL applications
[root@jump-host ~]# yum install mysql.x86_64
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Last metadata expiration check: 0:01:14 ago on Tue 31 Aug 2021 05:30:16 AM UTC.
Dependencies resolved.
=====================================================================
Package Architecture Version Repository Size
=====================================================================
Installing:
mysql x86_64 8.0.21-1.module+el8.2.0+7855+47abd494 rhel-8-appstream-rhui-rpms 12 M
Installing dependencies:
mariadb-connector-c-config noarch 3.1.11-2.el8_3 rhel-8-appstream-rhui-rpms 15 k
mysql-common x86_64 8.0.21-1.module+el8.2.0+7855+47abd494 rhel-8-appstream-rhui-rpms 148 k
Enabling module streams:
mysql 8.0
Transaction Summary
=====================================================================
Install 3 Packages
Total download size: 12 M
Installed size: 63 M
Is this ok [y/N]: y
Downloading Packages:
(1/3): mariadb-connector-c-config-3.1.11-2.el8_3.noarch.rpm 79 kB/s | 15 kB 00:00
(2/3): mysql-common-8.0.21-1.module+el8.2.0+7855+47abd494.x86_64.rpm 752 kB/s | 148 kB 00:00
(3/3): mysql-8.0.21-1.module+el8.2.0+7855+47abd494.x86_64.rpm 32 MB/s | 12 MB 00:00
---------------------------------------------------------------------
Total 24 MB/s | 12 MB 00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Installing : mariadb-connector-c-config-3.1.11-2.el8_3.noarch 1/3
Installing : mysql-common-8.0.21-1.module+el8.2.0+7855+47abd494.x86_64 2/3
Installing : mysql-8.0.21-1.module+el8.2.0+7855+47abd494.x86_64 3/3
Running scriptlet: mysql-8.0.21-1.module+el8.2.0+7855+47abd494.x86_64 3/3
Verifying : mariadb-connector-c-config-3.1.11-2.el8_3.noarch 1/3
Verifying : mysql-8.0.21-1.module+el8.2.0+7855+47abd494.x86_64 2/3
Verifying : mysql-common-8.0.21-1.module+el8.2.0+7855+47abd494.x86_64 3/3
Installed products updated.
Installed:
mariadb-connector-c-config-3.1.11-2.el8_3.noarch mysql-8.0.21-1.module+el8.2.0+7855+47abd494.x86_64
mysql-common-8.0.21-1.module+el8.2.0+7855+47abd494.x86_64
Complete!
[root@jump-host ~]# rpm -qa | grep -i mysql
mysql-common-8.0.21-1.module+el8.2.0+7855+47abd494.x86_64
mysql-8.0.21-1.module+el8.2.0+7855+47abd494.x86_64
[root@jump-host ~]# which mysql
/usr/bin/mysql
[root@jump-host ~]# mysql --version
mysql Ver 8.0.21 for Linux on x86_64 (Source distribution)
Connectivity from jump host to MySQL Database [ Target Database ]
---------------------------------------------------------------------
[root@jump-host ~]# mysql -u root -p -h 172.31.17.136
Enter password: Root@123
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 8.0.26 MySQL Community Server - GPL
Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> select user(),database();
+------------------------------------+------------+
| user() | database() |
+------------------------------------+------------+
| root@ip-172-31-27-241.ec2.internal | NULL |
+------------------------------------+------------+
1 row in set (0.00 sec)
mysql> system hostname;
jump-host
mysql> system hostname -I
172.31.27.241
mysql>
Install PostgreSQL client packages on the jump host..
[root@jump-host ~]# yum search postgresql
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Last metadata expiration check: 0:16:39 ago on Tue 31 Aug 2021 05:30:16 AM UTC.
==================================== Name & Summary Matched: postgresql =============================================================
postgresql.x86_64 : PostgreSQL client programs
freeradius-postgresql.x86_64 : Postgresql support for freeradius
pcp-pmda-postgresql.x86_64 : Performance Co-Pilot (PCP) metrics for PostgreSQL
postgresql-contrib.x86_64 : Extension modules distributed with PostgreSQL
postgresql-docs.x86_64 : Extra documentation for PostgreSQL
postgresql-jdbc.noarch : JDBC driver for PostgreSQL
postgresql-jdbc-javadoc.noarch : API docs for postgresql-jdbc
postgresql-odbc.x86_64 : PostgreSQL ODBC driver
postgresql-plperl.x86_64 : The Perl procedural language for PostgreSQL
postgresql-plpython3.x86_64 : The Python3 procedural language for PostgreSQL
postgresql-pltcl.x86_64 : The Tcl procedural language for PostgreSQL
postgresql-server.x86_64 : The programs needed to create and run a PostgreSQL server
postgresql-server-devel.x86_64 : PostgreSQL development header files and libraries
postgresql-static.x86_64 : Statically linked PostgreSQL libraries
postgresql-test.x86_64 : The test suite distributed with PostgreSQL
postgresql-test-rpm-macros.x86_64 : Convenience RPM macros for build-time testing against PostgreSQL server
postgresql-upgrade.x86_64 : Support for upgrading from the previous major release of PostgreSQL
qt5-qtbase-postgresql.x86_64 : PostgreSQL driver for Qt5's SQL classes
qt5-qtbase-postgresql.i686 : PostgreSQL driver for Qt5's SQL classes
================================================================== Name Matched: postgresql ==================================================================
postgresql-odbc-tests.x86_64 : Testsuite files for psqlodbc
postgresql-upgrade-devel.x86_64 : Support for build of extensions required for upgrade process
================================================================ Summary Matched: postgresql =================================================================
apr-util-pgsql.x86_64 : APR utility library PostgreSQL DBD driver
libpgtypes.i686 : Map PostgreSQL database types to C equivalents
libpgtypes.x86_64 : Map PostgreSQL database types to C equivalents
libpq.x86_64 : PostgreSQL client library
libpq.i686 : PostgreSQL client library
libpq-devel.i686 : Development files for building PostgreSQL client tools
libpq-devel.x86_64 : Development files for building PostgreSQL client tools
perl-DBD-Pg.x86_64 : A PostgreSQL interface for perl
php-pgsql.x86_64 : A PostgreSQL database module for PHP
postfix-pgsql.x86_64 : Postfix PostgreSQL map support
python-psycopg2-doc.x86_64 : Documentation for psycopg python PostgreSQL database adapter
python2-psycopg2.x86_64 : A PostgreSQL database adapter for Python 2
python2-psycopg2-debug.x86_64 : A PostgreSQL database adapter for Python 2 (debug build)
python2-psycopg2-tests.x86_64 : A testsuite for A PostgreSQL database adapter for Python 2
python3-psycopg2.x86_64 : A PostgreSQL database adapter for Python 3
python38-psycopg2.x86_64 : A PostgreSQL database adapter for Python 3
python38-psycopg2-doc.x86_64 : Documentation for psycopg python PostgreSQL database adapter
python38-psycopg2-tests.x86_64 : A testsuite for A PostgreSQL database adapter for Python 2
python39-psycopg2.x86_64 : A PostgreSQL database adapter for Python 3
python39-psycopg2-doc.x86_64 : Documentation for psycopg python PostgreSQL database adapter
python39-psycopg2-tests.x86_64 : A testsuite for A PostgreSQL database adapter for Python 2
rubygem-pg.x86_64 : A Ruby interface to the PostgreSQL RDBMS
[root@jump-host ~]# yum install postgresql.x86_64
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Last metadata expiration check: 0:17:50 ago on Tue 31 Aug 2021 05:30:16 AM UTC.
Dependencies resolved.
=====================================================================
Package Architecture Version Repository Size
=====================================================================
Installing:
postgresql x86_64 10.17-1.module+el8.4.0+11249+895597ab rhel-8-appstream-rhui-rpms 1.5 M
Installing dependencies:
libpq x86_64 13.3-1.el8_4 rhel-8-appstream-rhui-rpms 197 k
Enabling module streams:
postgresql 10
Transaction Summary
=====================================================================
Install 2 Packages
Total download size: 1.7 M
Installed size: 6.2 M
Is this ok [y/N]: y
Downloading Packages:
(1/2): libpq-13.3-1.el8_4.x86_64.rpm 1.3 MB/s | 197 kB 00:00
(2/2): postgresql-10.17-1.module+el8.4.0+11249+895597ab.x86_64.rpm 7.7 MB/s | 1.5 MB 00:00
---------------------------------------------------------------------
Total 7.3 MB/s | 1.7 MB 00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Installing : libpq-13.3-1.el8_4.x86_64 1/2
Installing : postgresql-10.17-1.module+el8.4.0+11249+895597ab.x86_64 2/2
Running scriptlet: postgresql-10.17-1.module+el8.4.0+11249+895597ab.x86_64 2/2
Verifying : postgresql-10.17-1.module+el8.4.0+11249+895597ab.x86_64 1/2
Verifying : libpq-13.3-1.el8_4.x86_64 2/2
Installed products updated.
Installed:
libpq-13.3-1.el8_4.x86_64 postgresql-10.17-1.module+el8.4.0+11249+895597ab.x86_64
Complete!
[root@jump-host ~]# which psql
/usr/bin/psql
[root@jump-host ~]# psql --version
psql (PostgreSQL) 10.17
[root@jump-host ~]# psql -U postgres -h 172.31.30.141
psql (10.17, server 13.4)
WARNING: psql major version 10, server major version 13.
Some psql features might not work.
Type "help" for help.
postgres=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
--------------+----------+----------+-------------+-------------+----
blob | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
dvdrental | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
kidsdb | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
migration_db | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
mrsdb | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres +
| | | | | postgres=CTc/postgres +
| | | | | mrs1_pg=CTc/postgres +
| | | | | migration_user=c/postgres
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
test_db | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
(9 rows)
postgres=# \q
[root@jump-host ~]#
Important Points :-
--------------------------
(i) In the target database [ MySQL ], increase max_allowed_packet from 16 MB to 1 GB and restart the MySQL DB service (a sketch of both changes follows at the end of these points).
(ii) In the source database [ PostgreSQL ], change the lines below in the pg_hba.conf file and restart the PostgreSQL database service.
# Allow users to establish connections from any host
host all all 0.0.0.0/0 trust
or
host all all 0.0.0.0/0 md5
Enable / change the parameter listen_addresses = 'localhost' to '*' in the postgresql.conf file and restart the DB.
port = 5432 [ Enable this Parameter ]
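A minimal sketch of the corresponding changes (file paths and service names are assumptions for this RHEL 8 setup; adjust to your layout):
# On mysql-target-server: /etc/my.cnf (or a drop-in under /etc/my.cnf.d/)
[mysqld]
max_allowed_packet=1G
# then: systemctl restart mysqld
#
# On postgres-source-server: /var/lib/pgsql/13/data/postgresql.conf
#   listen_addresses = '*'
#   port = 5432
# plus the host line above in /var/lib/pgsql/13/data/pg_hba.conf, then:
# systemctl restart postgresql-13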
---------------------------------------------------------------------
Step - 12 Prepare configuration files for PostgreSQL and MySQL Databases from Sqlines configuration files in Jump host
----------------------------------------------------------
(i) Create folders based on the database names, i.e.
[root@jump-host sqlinesdata31773]# pwd
/home/ec2-user/sqlinesdata31773
[root@jump-host sqlinesdata31773]# mkdir test_db
[root@jump-host sqlinesdata31773]# mkdir migration_db
[root@jump-host sqlinesdata31773]# mkdir mrsdb
[root@jump-host sqlinesdata31773]# mkdir dvdrental
[root@jump-host sqlinesdata31773]# mkdir blob
[root@jump-host sqlinesdata31773]# mkdir kidsdb
[root@jump-host sqlinesdata31773]# pwd
/home/ec2-user/sqlinesdata31773
[root@jump-host sqlinesdata31773]# ls -ltr
-rw-r--r--. 1 root root 361 Feb 8 2018 sqlines_twhere.txt
-rw-r--r--. 1 root root 579 Feb 8 2018 sqlines_tsel.txt
-rw-r--r--. 1 root root 498 Feb 8 2018 sqlines_tsel_all.txt
-rw-r--r--. 1 root root 192 Feb 8 2018 sqlines_tmap.txt
-rw-r--r--. 1 root root 273 Feb 8 2018 sqlines_tf.txt
-rw-r--r--. 1 root root 316 Feb 8 2018 sqlines_qf.txt
-rw-r--r--. 1 root root 124 Feb 8 2018 sqlines_dtmap.txt
-rw-r--r--. 1 root root 202 Feb 8 2018 sqlines_cnsmap.txt
-rw-r--r--. 1 root root 401 Feb 8 2018 sqlines_cmap.txt
-rw-r--r--. 1 root root 4877 Feb 8 2018 sqldata.cfg
-rw-r--r--. 1 root root 188 Feb 8 2018 readme.txt
-rw-r--r--. 1 root root 11303 Feb 8 2018 license.txt
-rwxr-xr-x. 1 root root 11670344 Feb 8 2018 libQtGui.so
-rwxr-xr-x. 1 root root 2701504 Feb 8 2018 libQtCore.so
-rwxr-xr-x. 1 root root 1465537 Feb 12 2018 sqldata
-rw-r--r--. 1 root root 218 Aug 27 01:59 sqldata.log
drwxr-xr-x. 2 root root 6 Aug 31 06:00 test_db
drwxr-xr-x. 2 root root 6 Aug 31 06:00 migration_db
drwxr-xr-x. 2 root root 6 Aug 31 06:01 mrsdb
drwxr-xr-x. 2 root root 6 Aug 31 06:01 dvdrental
drwxr-xr-x. 2 root root 6 Aug 31 06:01 blob
drwxr-xr-x. 2 root root 6 Aug 31 06:01 kidsdb
[root@jump-host sqlinesdata31773]#
(ii) Copy the sample sqldata.cfg file from the sqlinesdata-3.1.773_x86_64_linux folder into each of the newly created db folders; a loop form of these copies is sketched after the commands below.
i.e
[root@jump-host sqlinesdata31773]# pwd
/home/ec2-user/sqlinesdata31773
[root@jump-host sqlinesdata31773]# ls -ltr
-rw-r--r--. 1 root root 361 Feb 8 2018 sqlines_twhere.txt
-rw-r--r--. 1 root root 579 Feb 8 2018 sqlines_tsel.txt
-rw-r--r--. 1 root root 498 Feb 8 2018 sqlines_tsel_all.txt
-rw-r--r--. 1 root root 192 Feb 8 2018 sqlines_tmap.txt
-rw-r--r--. 1 root root 273 Feb 8 2018 sqlines_tf.txt
-rw-r--r--. 1 root root 316 Feb 8 2018 sqlines_qf.txt
-rw-r--r--. 1 root root 124 Feb 8 2018 sqlines_dtmap.txt
-rw-r--r--. 1 root root 202 Feb 8 2018 sqlines_cnsmap.txt
-rw-r--r--. 1 root root 401 Feb 8 2018 sqlines_cmap.txt
-rw-r--r--. 1 root root 4877 Feb 8 2018 sqldata.cfg
-rw-r--r--. 1 root root 188 Feb 8 2018 readme.txt
-rw-r--r--. 1 root root 11303 Feb 8 2018 license.txt
-rwxr-xr-x. 1 root root 11670344 Feb 8 2018 libQtGui.so
-rwxr-xr-x. 1 root root 2701504 Feb 8 2018 libQtCore.so
-rwxr-xr-x. 1 root root 1465537 Feb 12 2018 sqldata
-rw-r--r--. 1 root root 218 Aug 27 01:59 sqldata.log
drwxr-xr-x. 2 root root 6 Aug 31 06:00 test_db
drwxr-xr-x. 2 root root 6 Aug 31 06:00 migration_db
drwxr-xr-x. 2 root root 6 Aug 31 06:01 mrsdb
drwxr-xr-x. 2 root root 6 Aug 31 06:01 dvdrental
drwxr-xr-x. 2 root root 6 Aug 31 06:01 blob
drwxr-xr-x. 2 root root 6 Aug 31 06:01 kidsdb
[root@jump-host sqlinesdata31773]# cp sqldata.cfg /home/ec2-user/sqlinesdata31773/test_db/sqldata_test_db.cnf
[root@jump-host sqlinesdata31773]# cp sqldata.cfg /home/ec2-user/sqlinesdata31773/migration_db/sqldata_migration_db.cnf
[root@jump-host sqlinesdata31773]# cp sqldata.cfg /home/ec2-user/sqlinesdata31773/mrsdb/sqldata_mrsdb.cnf
[root@jump-host sqlinesdata31773]# cp sqldata.cfg /home/ec2-user/sqlinesdata31773/dvdrental/sqldata_dvdrental.cnf
[root@jump-host sqlinesdata31773]# cp sqldata.cfg /home/ec2-user/sqlinesdata31773/blob/sqldata_blob.cnf
[root@jump-host sqlinesdata31773]# cp sqldata.cfg /home/ec2-user/sqlinesdata31773/kidsdb/sqldata_kidsdb.cnf
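The same folders and per-database config copies can also be produced in one loop (a sketch over the same six databases):
cd /home/ec2-user/sqlinesdata31773
for db in test_db migration_db mrsdb dvdrental blob kidsdb; do
    mkdir -p "$db"
    cp sqldata.cfg "$db/sqldata_${db}.cnf"
done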
(iii) Now change the parameters in all the cnf files which we created in the step above.
-------------------
test_db :-
-------------------
[root@jump-host test_db]# pwd
/home/ec2-user/sqlinesdata31773/test_db
[root@jump-host test_db]# cat sqldata_test_db.cnf
-- SQLines Data options (contact us at support@sqlines.com to request a new option)
-sd=pg,postgres/postgres@3.229.186.86:5432,test_db
-td=mysql,root/Root@123@50.19.50.121,test_db
-t=*.*
-ss=6
-out=/home/ec2-user/sqlinesdata31773/test_db/
-log=test_db.log
-- DDL Options
--------------
-- Set yes to to migrate table definitions, or no to not migrate
-- DDL statements executed depends on -topt option (CREATE; DROP and CREATE TABLE, or TRUNCATE TABLE)
-ddl_tables=no
-- Set yes to to migrate constraints (this is default), or no to not migrate
-constraints=no
-- Set yes to to migrate indexes (this is default), or no to not migrate
-indexes=no
-- Data Options
---------------
-- Set yes to to migrate data (this is default), or no to not migrate
-data=yes
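For reference, the pattern of the connection options used in these files (placeholders in angle brackets, as far as these examples show):
-sd=pg,<user>/<password>@<host>:<port>,<database>      -- source (PostgreSQL)
-td=mysql,<user>/<password>@<host>,<database>          -- target (MySQL)
-t=*.*                                                 -- tables to transfer (all schemas, all tables)
-ss=6                                                  -- number of concurrent sessions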
-------------------
migration_db :-
-------------------
[root@jump-host migration_db]# pwd
/home/ec2-user/sqlinesdata31773/migration_db
[root@jump-host migration_db]# cat sqldata_migration_db.cnf
-- SQLines Data options (contact us at support@sqlines.com to request a new option)
-sd=pg,postgres/postgres@3.229.186.86:5432,migration_db
-td=mysql,root/Root@123@50.19.50.121,migration_db
-t=*.*
-ss=6
-out=/home/ec2-user/sqlinesdata31773/migration_db/
-log=migration_db.log
-- DDL Options
--------------
-- Set yes to to migrate table definitions, or no to not migrate
-- DDL statements executed depends on -topt option (CREATE; DROP and CREATE TABLE, or TRUNCATE TABLE)
-ddl_tables=no
-- Set yes to to migrate constraints (this is default), or no to not migrate
-constraints=no
-- Set yes to to migrate indexes (this is default), or no to not migrate
-indexes=no
-- Data Options
---------------
-- Set yes to to migrate data (this is default), or no to not migrate
-data=yes
-------------------
mrsdb :-
-------------------
[root@jump-host mrsdb]# pwd
/home/ec2-user/sqlinesdata31773/mrsdb
[root@jump-host mrsdb]# ls -ltr
-rw-r--r--. 1 root root 5046 Aug 31 07:12 sqldata_mrsdb.cnf
[root@jump-host mrsdb]# cat sqldata_mrsdb.cnf
-- SQLines Data options (contact us at support@sqlines.com to request a new option)
-sd=pg,postgres/postgres@3.229.186.86:5432,mrsdb
-td=mysql,root/Root@123@50.19.50.121,mrsdb
-t=*.*
-ss=6
-out=/home/ec2-user/sqlinesdata31773/mrsdb/
-log=mrsdb.log
-- DDL Options
--------------
-- Set yes to to migrate table definitions, or no to not migrate
-- DDL statements executed depends on -topt option (CREATE; DROP and CREATE TABLE, or TRUNCATE TABLE)
-ddl_tables=no
-- Set yes to to migrate constraints (this is default), or no to not migrate
-constraints=no
-- Set yes to to migrate indexes (this is default), or no to not migrate
-indexes=no
-- Data Options
---------------
-- Set yes to to migrate data (this is default), or no to not migrate
-data=yes
-------------------
dvdrental :-
-------------------
[root@jump-host dvdrental]# pwd
/home/ec2-user/sqlinesdata31773/dvdrental
[root@jump-host dvdrental]# ls -ltr
-rw-r--r--. 1 root root 4877 Aug 31 06:05 sqldata_dvdrental.cnf
[root@jump-host dvdrental]# cat sqldata_dvdrental.cnf
-- SQLines Data options (contact us at support@sqlines.com to request a new option)
-sd=pg,postgres/postgres@3.229.186.86:5432,dvdrental
-td=mysql,root/Root@123@50.19.50.121,dvdrental
-t=*.*
-ss=6
-out=/home/ec2-user/sqlinesdata31773/dvdrental/
-log=dvdrental.log
-- DDL Options
--------------
-- Set yes to to migrate table definitions, or no to not migrate
-- DDL statements executed depends on -topt option (CREATE; DROP and CREATE TABLE, or TRUNCATE TABLE)
-ddl_tables=no
-- Set yes to to migrate constraints (this is default), or no to not migrate
-constraints=no
-- Set yes to to migrate indexes (this is default), or no to not migrate
-indexes=no
-- Data Options
---------------
-- Set yes to to migrate data (this is default), or no to not migrate
-data=yes
-------------------
blob :-
-------------------
[root@jump-host blob]# pwd
/home/ec2-user/sqlinesdata31773/blob
[root@jump-host blob]# ls -ltr
-rw-r--r--. 1 root root 4877 Aug 31 06:05 sqldata_blob.cnf
[root@jump-host blob]# cat sqldata_blob.cnf
-- SQLines Data options (contact us at support@sqlines.com to request a new option)
-sd=pg,postgres/postgres@3.229.186.86:5432,blob
-td=mysql,root/Root@123@50.19.50.121,blob
-t=*.*
-ss=6
-out=/home/ec2-user/sqlinesdata31773/blob/
-log=blob.log
-- DDL Options
--------------
-- Set yes to to migrate table definitions, or no to not migrate
-- DDL statements executed depends on -topt option (CREATE; DROP and CREATE TABLE, or TRUNCATE TABLE)
-ddl_tables=no
-- Set yes to to migrate constraints (this is default), or no to not migrate
-constraints=no
-- Set yes to to migrate indexes (this is default), or no to not migrate
-indexes=no
-- Data Options
---------------
-- Set yes to to migrate data (this is default), or no to not migrate
-data=yes
-------------------
kidsdb :-
-------------------
[root@jump-host kidsdb]# pwd
/home/ec2-user/sqlinesdata31773/kidsdb
[root@jump-host kidsdb]# ls -ltr
-rw-r--r--. 1 root root 4877 Aug 31 06:05 sqldata_kidsdb.cnf
[root@jump-host kidsdb]# cat sqldata_kidsdb.cnf
-- SQLines Data options (contact us at support@sqlines.com to request a new option)
-sd=pg,postgres/postgres@3.229.186.86:5432,kidsdb
-td=mysql,root/Root@123@50.19.50.121,kidsdb
-t=*.*
-ss=6
-out=/home/ec2-user/sqlinesdata31773/kidsdb/
-log=kids.log
-- DDL Options
--------------
-- Set yes to to migrate table definitions, or no to not migrate
-- DDL statements executed depends on -topt option (CREATE; DROP and CREATE TABLE, or TRUNCATE TABLE)
-ddl_tables=no
-- Set yes to to migrate constraints (this is default), or no to not migrate
-constraints=no
-- Set yes to to migrate indexes (this is default), or no to not migrate
-indexes=no
-- Data Options
---------------
-- Set yes to to migrate data (this is default), or no to not migrate
-data=yes
(iv) Verify whether SQLines can connect to the source and target databases.
To just test the connections, you can specify a non-existing table in the -t option; the tool will then connect and terminate.
[root@jump-host sqlinesdata31773]# pwd
/home/ec2-user/sqlinesdata31773
[root@jump-host sqlinesdata31773]# ./sqldata -sd=pg,postgres/postgres@3.229.186.86,mrsdb -td=mysql,root@172.31.17.136,test_db -t=test
SQLines Data 3.1.773 x86_64 Linux - Database Migration Tool.
Copyright (c) 2018 SQLines. All Rights Reserved.
Connecting to databases (372 ms)
PostgreSQL 13.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.4.1 20200928 (Red Hat 8.4.1-1), 64-bit (Ok, 372 ms)
MySQL - Error: libmysqlclient.so: cannot open shared object file: No such file or directory (Failed, 0 ms)
So here there is a libmysqlclient package issue.
Install mysql-libs and mysql-devel, then reconnect again ...
[root@jump-host sqlinesdata31773]# yum install mysql-libs.x86_64
[root@jump-host sqlinesdata31773]# yum install mysql-devel.x86_64
[root@jump-host sqlinesdata31773]# rpm -qa | grep -i mysql
mysql-common-8.0.21-1.module+el8.2.0+7855+47abd494.x86_64
mysql-libs-8.0.21-1.module+el8.2.0+7855+47abd494.x86_64
mysql-8.0.21-1.module+el8.2.0+7855+47abd494.x86_64
mysql-devel-8.0.21-1.module+el8.2.0+7855+47abd494.x86_64
[root@jump-host sqlinesdata31773]# pwd
/home/ec2-user/sqlinesdata31773
[root@jump-host sqlinesdata31773]# ./sqldata -sd=pg,postgres/postgres@3.229.186.86,mrsdb -td=mysql,root/Root@123@50.19.50.121,test_db -t=test
SQLines Data 3.1.773 x86_64 Linux - Database Migration Tool.
Copyright (c) 2018 SQLines. All Rights Reserved.
Connecting to databases (84 ms)
PostgreSQL 13.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.4.1 20200928 (Red Hat 8.4.1-1), 64-bit (Ok, 84 ms)
MySQL 8.0.26 MySQL Community Server - GPL x86_64 Linux 1.2.11 (Ok, 11 ms)
Reading the database schema
[root@jump-host sqlinesdata31773]#
So the connections are established and working fine.
---------------------------------------------------------------------
Now with the sqlinesdata-3.3.171_x86_64_linux.tar.gz version :- please use this version to migrate the data
---------------------------------------------------------------------
[root@jump-host sqlinesdata33171]# mkdir test_db
[root@jump-host sqlinesdata33171]# mkdir mrsdb
[root@jump-host sqlinesdata33171]# mkdir migration_db
[root@jump-host sqlinesdata33171]# mkdir dvdrental
[root@jump-host sqlinesdata33171]# mkdir blob
[root@jump-host sqlinesdata33171]# mkdir kidsdb
[root@jump-host sqlinesdata33171]# pwd
/home/ec2-user/sqlinesdata33171
[root@jump-host sqlinesdata33171]# ls -ltr
-rw-r--r--. 1 root root 7212 Aug 31 12:47 sqldata.cfg
-rw-r--r--. 1 root root 361 Sep 5 20:27 sqlines_twhere.txt
-rw-r--r--. 1 root root 579 Sep 5 20:27 sqlines_tsel.txt
-rw-r--r--. 1 root root 498 Sep 5 20:27 sqlines_tsel_all.txt
-rw-r--r--. 1 root root 192 Sep 5 20:27 sqlines_tmap.txt
-rw-r--r--. 1 root root 273 Sep 5 20:27 sqlines_tf.txt
-rw-r--r--. 1 root root 316 Sep 5 20:27 sqlines_qf.txt
-rw-r--r--. 1 root root 190 Sep 5 20:27 sqlines_dtmap.txt
-rw-r--r--. 1 root root 202 Sep 5 20:27 sqlines_cnsmap.txt
-rw-r--r--. 1 root root 401 Sep 5 20:27 sqlines_cmap.txt
-rw-r--r--. 1 root root 188 Sep 5 20:27 readme.txt
-rw-r--r--. 1 root root 118 Sep 5 20:27 license.txt
-rwxr-xr-x. 1 root root 11670344 Sep 5 20:27 libQtGui.so
-rwxr-xr-x. 1 root root 2701504 Sep 5 20:27 libQtCore.so
-rwxr-xr-x. 1 root root 2051152 Sep 5 20:29 sqldata
-rw-r--r--. 1 root root 272 Sep 6 00:45 sqldata.log
-rw-r--r--. 1 root root 410 Sep 6 00:54 sqldata_ddl.sql
-rw-r--r--. 1 root root 1410 Sep 6 00:54 test_db_user_table_171_06-09-2021.log
drwxr-xr-x. 2 root root 6 Sep 6 01:05 test_db
drwxr-xr-x. 2 root root 6 Sep 6 01:05 mrsdb
drwxr-xr-x. 2 root root 6 Sep 6 01:05 migration_db
drwxr-xr-x. 2 root root 6 Sep 6 01:05 dvdrental
drwxr-xr-x. 2 root root 6 Sep 6 01:06 blob
drwxr-xr-x. 2 root root 6 Sep 6 01:06 kidsdb
[root@jump-host sqlinesdata33171]# pwd
/home/ec2-user/sqlinesdata33171
This is for test_db database :-
-----------------------------------------------
[root@jump-host sqlinesdata33171]# cp sqldata.cfg test_db/
[root@jump-host sqlinesdata33171]# cd test_db
[root@jump-host test_db]# pwd
/home/ec2-user/sqlinesdata33171/test_db
[root@jump-host test_db]# ls -ltr
-rw-r--r--. 1 root root 7212 Sep 6 01:09 sqldata.cfg
[root@jump-host test_db]# cat sqldata.cfg ================> Default cnf file
-- SQLines Data options (contact us at support@sqlines.com to request a new option)
--- Please note that all configuration options can be specified in the command line as well.
-- DDL Options
--------------
-- Set yes to to migrate table definitions, or no to not migrate
-- DDL statements executed depends on -topt option (CREATE; DROP and CREATE TABLE, or TRUNCATE TABLE)
-ddl_tables=
-- Set yes to to migrate constraints (this is default), or no to not migrate
-- Please note that if -topt=truncate or -topt=none is specified, constraints are not migrated
-constraints=
-- Set yes to to migrate indexes (this is default), or no to not migrate
-- Please note that if -topt=truncate or -topt=none is specified, indexes are not migrated
-indexes=
-- Data Options
---------------
-- Set yes to to migrate data (this is default), or no to not migrate
-data=
-- The maximum number of rows in read/write batch, by default it is defined by the buffer size divided by the table row size
-batch_max_rows=10000
-- Work with LOBs as VARCHAR columns to increase performance (use if LOB columns contain short values less than a few MB i.e.), default is no
-fetch_lob_as_varchar=
-- Fixed size buffer in bytes to read LOB values by binding, not by reading part by part (can cause truncation error if the buffer is less than
-- the maximum LOB value, default is to read LOBs by separate calls
-lob_bind_buffer=10000000
-- When converting from ASCII or UTF16/UCS-2 character sets in the source database to UTF8 i.e. in the target database depending on
-- the actual data you may need greater storage size. And vice versa converting in opposite direction you may require smaller storage size.
-- This parameter specifies the length change ratio. If the source length is 100, and ratio is 1.1 then the target length will be 110
-char_length_ratio=
[root@jump-host test_db]# mv sqldata.cfg sqldata_test_db.cfg
[root@jump-host test_db]# pwd
/home/ec2-user/sqlinesdata33171/test_db
[root@jump-host test_db]# ls -ltr
-rw-r--r--. 1 root root 7212 Sep 6 01:09 sqldata_test_db.cfg
[root@jump-host test_db]# vi sqldata_test_db.cfg
[root@jump-host test_db]# cat sqldata_test_db.cfg
-- SQLines Data options (contact us at support@sqlines.com to request a new option)
--- Please note that all configuration options can be specified in the command line as well.
-sd=pg,postgres/postgres@3.229.186.86:5432,test_db
-td=mysql,root/Root@123@18.234.112.125,test_db
-- t=*.*
-t=*.*
-ss=6
-- out=/home/ec2-user/sqlinesdata31773/test_db/
-- log=test_db.log
-- DDL Options
--------------
-- Set yes to to migrate table definitions, or no to not migrate
-- DDL statements executed depends on -topt option (CREATE; DROP and CREATE TABLE, or TRUNCATE TABLE)
-ddl_tables=no
-- Set yes to to migrate constraints (this is default), or no to not migrate
-- Please note that if -topt=truncate or -topt=none is specified, constraints are not migrated
-constraints=no
-- Set yes to to migrate indexes (this is default), or no to not migrate
-- Please note that if -topt=truncate or -topt=none is specified, indexes are not migrated
-indexes=no
-- Data Options
---------------
-- Set yes to to migrate data (this is default), or no to not migrate
-data=yes
[root@jump-host test_db]#
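Optional helper :- the same copy / rename / edit cycle has to be repeated for every database directory, so a small script like the hypothetical prepare_cfg.sh below (a minimal sketch, not part of the SQLines distribution) can stamp out one config per database from the default sqldata.cfg. The source/target connection strings and passwords are the ones shown in the config above; adjust them for your environment and review the generated files before use.
#!/bin/bash
# prepare_cfg.sh (hypothetical helper) -- build one SQLines config per database from the default sqldata.cfg
BASE_DIR=/home/ec2-user/sqlinesdata33171
SRC="pg,postgres/postgres@3.229.186.86:5432"      # PostgreSQL source connection (as above)
TGT="mysql,root/Root@123@18.234.112.125"          # MySQL target connection (as above)
for DB in test_db mrsdb migration_db dvdrental blob kidsdb
do
    mkdir -p $BASE_DIR/$DB
    CFG=$BASE_DIR/$DB/sqldata_$DB.cfg
    {
      echo "-sd=$SRC,$DB"
      echo "-td=$TGT,$DB"
      echo "-t=*.*"
      echo "-ss=6"
      echo "-ddl_tables=no"
      echo "-constraints=no"
      echo "-indexes=no"
      echo "-data=yes"
      # copy the rest of the default options, dropping the blank duplicates of the values set above
      grep -v -E '^-(ddl_tables|constraints|indexes|data)=$' $BASE_DIR/sqldata.cfg
    } > "$CFG"
    echo "Prepared $CFG"
done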
---------------------------------------------------------------------
Step - 13 Prepare the Row count script for Source Database and Target Database in jump host [ PostgreSQL --> MySQL ]
----------------------------------------------------------------
[root@jump-host ec2-user]# vi row_count_comparison.sh
[root@jump-host ec2-user]# ls -ltr
-rw-rw-r--. 1 root root 15872000 Mar 14 2019 sqlinesdata31773_x86_64_linux.tar
drwxr-xr-x. 8 root root 4096 Aug 31 11:33 sqlinesdata31773
-rw-r--r--. 1 root root 2398 Aug 31 12:02 row_count_comparison.sh
[root@jump-host ec2-user]# chmod +x row_count_comparison.sh
[root@jump-host ec2-user]# ls -ltr
-rw-rw-r--. 1 root root 15872000 Mar 14 2019 sqlinesdata31773_x86_64_linux.tar
drwxr-xr-x. 8 root root 4096 Aug 31 11:33 sqlinesdata31773
-rwxr-xr-x. 1 root root 2398 Aug 31 12:02 row_count_comparison.sh
Required directories to create before running the shell script :-
[root@jump-host ec2-user]# cd /home/ec2-user/
[root@jump-host ec2-user]# pwd
/home/ec2-user
[root@jump-host ec2-user]# mkdir tmp_dir
[root@jump-host output_dir]# cd
[root@jump-host ~]# cd /home/ec2-user/
[root@jump-host ec2-user]# pwd
/home/ec2-user
[root@jump-host ec2-user]# ls -ltr
-rw-rw-r--. 1 root root 15872000 Mar 14 2019 sqlinesdata31773_x86_64_linux.tar
drwxr-xr-x. 8 root root 4096 Aug 31 11:33 sqlinesdata31773
-rw-r--r--. 1 root root 0 Aug 31 15:52 table_list.txt
-rwxr-xr-x. 1 root root 2876 Aug 31 15:53 row_count_comparison.sh
drwxr-xr-x. 2 root root 6 Aug 31 15:53 tmp_dir
drwxr-xr-x. 2 root root 6 Aug 31 15:54 output_dir
[root@jump-host ec2-user]#
[root@jump-host ec2-user]# ./row_count_comparison.sh
Arguments counts doesn't match!
USAGE : ./scriptname <DB_NAME> <POSTGRES_SCHEMA_NAME> <MYSQL_SCHEMA_NAME>
[root@jump-host ec2-user]# pwd
/home/ec2-user
[root@jump-host ec2-user]# ls -ltr
-rw-rw-r--. 1 root root 15872000 Mar 14 2019 sqlinesdata31773_x86_64_linux.tar
drwxr-xr-x. 8 root root 4096 Aug 31 11:33 sqlinesdata31773
-rwxr-xr-x. 1 root root 5206 Sep 1 10:34 row_count_comparison.sh
drwxr-xr-x. 2 root root 73 Sep 1 15:17 tmp_dir
drwxr-xr-x. 2 root root 152 Sep 1 15:19 output_dir
[root@jump-host ec2-user]# ./row_count_comparison.sh test_db test_db test_db
Postgres Schema entered : test_db
MySQL Schema/Database entered : test_db
Check the .CSV files under /home/ec2-user/output_dir/ for results
[root@jump-host ec2-user]#
[root@jump-host ec2-user]# cd output_dir/
[root@jump-host output_dir]# pwd
/home/ec2-user/output_dir
[root@jump-host output_dir]# ls -ltr
-rw-r--r--. 1 root root 206 Sep 1 15:24 Row_count_test_db.csv
-rw-r--r--. 1 root root 206 Sep 1 15:24 Row_count_test_db_2021-09-01-1524.csv
[root@jump-host tmp_dir]# pwd
/home/ec2-user/tmp_dir
[root@jump-host tmp_dir]# ls -ltr
-rw-r--r--. 1 root root 159 Sep 1 15:24 tb_count.log
-rw-r--r--. 1 root root 839 Sep 1 15:24 complete_count.log
-rw-r--r--. 1 root root 0 Sep 1 15:24 error_log.txt
[root@jump-host output_dir]# cat Row_count_test_db_2021-09-01-1524.csv
SCHEMA_NAME.TABLE_NAME;Postgres_R_Count;MySQL_R_Count;DIFF
test_db.comments;10;10;0
test_db.genres;5;5;0
test_db.kids_audit;2;2;0
test_db.kids_name;6;6;0
test_db.posts;50;50;0
test_db.users;100000;100000;0
[root@jump-host output_dir]#
The sample script is here :-
-----------------------------
[root@jump-host ec2-user]# cat temp.sh
#!/bin/bash
# Directories used by the row count comparison script
LOG_DIR=/home/ec2-user/tmp_dir
SCR_HOME=/home/ec2-user
RESULT_DIR=/home/ec2-user/output_dir
# Truncate the cumulative log from any previous run
> $LOG_DIR/complete_count.log
# The PostgreSQL password is supplied via PGPASSWORD on the psql call below
MYSQL_HOME=/usr/bin
if [ $# -lt 3 ] ; then
echo "Arguments counts doesn't match!"
echo "USAGE : ./scriptname <DB_NAME> <POSTGRES_SCHEMA_NAME> <MYSQL_SCHEMA_NAME>"
exit 0
fi
if [ ! -d tmp_dir ]; then
mkdir tmp_dir
fi
if [ ! -d output_dir ]; then
mkdir output_dir
fi
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
HOST_NAME=3.229.186.86
MYSQL_USER=root
MYSQL_PASS=Root@123
MYSQL_HOST=54.235.4.24
schema_diff()
{
P_DBNAME=$1
P_SCHEMA_NAME=$2
M_SCHEMA_NAME=$3
#echo $SCHEMA_NAME
echo "Postgres Schema entered : $P_SCHEMA_NAME"
echo "MySQL Schema/Database entered : $M_SCHEMA_NAME"
# List the tables in the MySQL schema (the password warning line is filtered out)
$MYSQL_HOME/mysql -u$MYSQL_USER -p$MYSQL_PASS -h$MYSQL_HOST $M_SCHEMA_NAME -N -s -e "show tables;" 2>&1|grep -v "\[Warning\]" > $SCR_HOME/table_list.txt
> $LOG_DIR/tb_count.log
echo "SCHEMA_NAME.TABLE_NAME;Postgres_R_Count;MySQL_R_Count;DIFF" > $SCR_HOME/output_dir/Row_count_$P_SCHEMA_NAME.csv
for i in `cat $SCR_HOME/table_list.txt`
do
echo "MySQL: $M_SCHEMA_NAME.$i" >> $LOG_DIR/complete_count.log
$MYSQL_HOME/mysql -u$MYSQL_USER -p$MYSQL_PASS -h$MYSQL_HOST $M_SCHEMA_NAME -e "select count(*) from $i;" 2>&1|grep -v "\[Warning\]" >> $LOG_DIR/tb_count.log
echo "Postgres: $P_SCHEMA_NAME.$i" >> $LOG_DIR/complete_count.log
# psql has no password flag; the password is passed via the PGPASSWORD environment variable
PGPASSWORD=$POSTGRES_PASSWORD psql -U $POSTGRES_USER -h $HOST_NAME -d $P_SCHEMA_NAME --port 5432 -t -q -c "select '$i:'||':'|| count(*) as cnt from $i;" >> $LOG_DIR/tb_count.log 2>&1
cat $LOG_DIR/tb_count.log >> $LOG_DIR/complete_count.log
grep -i error $LOG_DIR/tb_count.log > $LOG_DIR/error_log.txt
if [ `cat $LOG_DIR/error_log.txt|wc -l` -ge 1 ]; then
echo "Check the error, Table count failed for the table $i:"
cat $LOG_DIR/error_log.txt
> $LOG_DIR/tb_count.log
else
p_count=`cat $LOG_DIR/tb_count.log|tail -2|head -1|awk -F'::' '{print $2}' |tr -d '[:space:]'`
m_count=`cat $LOG_DIR/tb_count.log|tail -3|head -1|tr -d '[:space:]'`
diff=`expr $p_count - $m_count|tr -d '[:space:]'`
#echo Count on PostgreSQL for table $i : $p_count
#echo Count on MySQL for table $i : $m_count
#echo Diff for table $i : $diff
echo "$P_SCHEMA_NAME.$i;$p_count;$m_count;$diff" >> $SCR_HOME/output_dir/Row_count_$P_SCHEMA_NAME.csv
fi
done
cat $SCR_HOME/output_dir/Row_count_$P_SCHEMA_NAME.csv > $SCR_HOME/output_dir/Row_count_"$P_SCHEMA_NAME"_`date +%Y-%m-%d-%H%M`.csv
#done < $SCR_HOME/schema.lst
}
schema_diff $1 $2 $3
echo "Check the .CSV files under $SCR_HOME/output_dir/ for results"
[root@jump-host ec2-user]#
-----------------------------------------------------------------------
Step - 14 Prepare the Truncate tables for Target Database [ MySQL ]
-----------------------------------------------------------------------
[root@mysql-target-server ec2-user]# pwd
/home/ec2-user
[root@mysql-target-server ec2-user]# mkdir Truncate_Scripts
[root@mysql-target-server ec2-user]# cd Truncate_Scripts/
[root@mysql-target-server Truncate_Scripts]# pwd
/home/ec2-user/Truncate_Scripts
(i) Use the below SQL statements to generate the TRUNCATE TABLE statements for all databases (a scripted way to capture the output into the .sql files is sketched after the statements) ...
mysql -u root -p -s
SELECT CONCAT('TRUNCATE TABLE ',table_schema,'.',TABLE_NAME, ';') FROM INFORMATION_SCHEMA.TABLES where table_schema in ('dvdrental');
SELECT CONCAT('TRUNCATE TABLE ',table_schema,'.',TABLE_NAME, ';') FROM INFORMATION_SCHEMA.TABLES where table_schema in ('kidsdb');
SELECT CONCAT('TRUNCATE TABLE ',table_schema,'.',TABLE_NAME, ';') FROM INFORMATION_SCHEMA.TABLES where table_schema in ('migration_db');
SELECT CONCAT('TRUNCATE TABLE ',table_schema,'.',TABLE_NAME, ';') FROM INFORMATION_SCHEMA.TABLES where table_schema in ('mrsdb');
SELECT CONCAT('TRUNCATE TABLE ',table_schema,'.',TABLE_NAME, ';') FROM INFORMATION_SCHEMA.TABLES where table_schema in ('test_db');
SELECT CONCAT('TRUNCATE TABLE ',table_schema,'.',TABLE_NAME, ';') FROM INFORMATION_SCHEMA.TABLES where table_schema in ('blob');
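As referenced above, the same SELECTs can also be run non-interactively and redirected straight into the per-database .sql files, already wrapped with the FOREIGN_KEY_CHECKS switches. A minimal sketch (generate_truncates.sh is a hypothetical helper, run on the target server; -p prompts for the password on each pass unless credentials are kept in ~/.my.cnf):
#!/bin/bash
# generate_truncates.sh (hypothetical helper) -- build one truncate script per database
OUT_DIR=/home/ec2-user/Truncate_Scripts
mkdir -p $OUT_DIR
for DB in dvdrental kidsdb migration_db mrsdb test_db blob
do
    {
      echo "SET FOREIGN_KEY_CHECKS=0;"
      mysql -u root -p -N -s -e "SELECT CONCAT('TRUNCATE TABLE ',table_schema,'.',TABLE_NAME,';') FROM INFORMATION_SCHEMA.TABLES WHERE table_schema='$DB';"
      echo "SET FOREIGN_KEY_CHECKS=1;"
    } > $OUT_DIR/truncate_$DB.sql
done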
(ii) Here are all the .sql files with the truncate scripts for each database.
[root@mysql-target-server Truncate_Scripts]# pwd
/home/ec2-user/Truncate_Scripts
[root@mysql-target-server Truncate_Scripts]# ls -ltr
-rw-r--r--. 1 root root 858 Sep 1 16:50 truncate_dvdrental.sql
-rw-r--r--. 1 root root 87 Sep 1 16:54 kidsdb.sql
-rw-r--r--. 1 root root 127 Sep 1 16:55 migration_db.sql
-rw-r--r--. 1 root root 249 Sep 1 16:56 test_db.sql
-rw-r--r--. 1 root root 88 Sep 1 16:56 blob.sql
-rw-r--r--. 1 root root 34615 Sep 1 16:58 mrsdb.sql
[root@mysql-target-server Truncate_Scripts]#
Sample truncate SQL statements :-
[root@mysql-target-server Truncate_Scripts]# cat test_db.sql
SET FOREIGN_KEY_CHECKS=0;
TRUNCATE TABLE test_db.comments;
TRUNCATE TABLE test_db.genres;
TRUNCATE TABLE test_db.kids_audit;
TRUNCATE TABLE test_db.kids_name;
TRUNCATE TABLE test_db.posts;
TRUNCATE TABLE test_db.users;
SET FOREIGN_KEY_CHECKS=1;
[root@mysql-target-server Truncate_Scripts]#
Possible errors while creating the tables / truncating the data :-
---------------------------------------------------------------------
Error like: ERROR 1217 (23000): Cannot delete or update a parent row: a foreign key constraint fails
Solution :- Disable foreign key checks before truncating: -------> SET FOREIGN_KEY_CHECKS=0;
Re-enable foreign key checks afterwards: -------> SET FOREIGN_KEY_CHECKS=1;
log_bin_trust_function_creators is OFF by default and should be enabled before creating functions [ SET GLOBAL log_bin_trust_function_creators = ON; ]
local_infile is OFF by default and should be enabled for the SQLines data load [ SET GLOBAL local_infile = ON; ]
A quick way to check and enable these settings on the target server is sketched below.
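A minimal sketch for checking and enabling the settings mentioned above on the target MySQL server (3.90.66.114 is the target public IP used in this part of the document; SET GLOBAL needs a privileged account and does not survive a server restart):
# Check the current values on the target server
mysql -u root -p -h 3.90.66.114 -e "SHOW VARIABLES WHERE Variable_name IN ('foreign_key_checks','log_bin_trust_function_creators','local_infile');"
# Enable the two server-side settings needed for function creation and for SQLines data loads
mysql -u root -p -h 3.90.66.114 -e "SET GLOBAL log_bin_trust_function_creators = ON; SET GLOBAL local_infile = ON;"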
---------------------------------------------------------------------
Step - 15 Run the Create Schema / Create DDL Objectives [ Tables , Views , SP , Trigger , Functions ..etc in Target Database [ MySQL ]
--------------------------------------------------------------------
In the below folder I prepared all the DDLs and validation scripts on my jump host server, like this:
[root@jump-host Final_Scripts]# pwd
/home/ec2-user/Final_Scripts
[root@jump-host Final_Scripts]# ls -ltr
drwxrwxr-x. 2 ec2-user ec2-user 130 Sep 2 02:00 Truncate_Scripts
drwxrwxr-x. 2 ec2-user ec2-user 256 Sep 2 02:03 All_Schema_Objectives_DDL_Databases
drwxrwxr-x. 2 ec2-user ec2-user 52 Sep 2 02:07 Validation_Scripts
[root@jump-host Final_Scripts]# cd Truncate_Scripts/
[root@jump-host Truncate_Scripts]# pwd
/home/ec2-user/Final_Scripts/Truncate_Scripts
[root@jump-host Truncate_Scripts]# ls -ltr
-rw-rw-r--. 1 ec2-user ec2-user 858 Sep 1 16:50 truncate_dvdrental.sql
-rw-rw-r--. 1 ec2-user ec2-user 87 Sep 1 16:54 kidsdb.sql
-rw-rw-r--. 1 ec2-user ec2-user 127 Sep 1 16:55 migration_db.sql
-rw-rw-r--. 1 ec2-user ec2-user 249 Sep 1 16:56 test_db.sql
-rw-rw-r--. 1 ec2-user ec2-user 88 Sep 1 16:56 blob.sql
-rw-rw-r--. 1 ec2-user ec2-user 34615 Sep 1 16:58 mrsdb.sql
[root@jump-host Truncate_Scripts]#
[root@jump-host Final_Scripts]# cd All_Schema_Objectives_DDL_Databases/
[root@jump-host All_Schema_Objectives_DDL_Databases]# pwd
/home/ec2-user/Final_Scripts/All_Schema_Objectives_DDL_Databases
[root@jump-host All_Schema_Objectives_DDL_Databases]# ls -ltr
-rw-rw-r--. 1 ec2-user ec2-user 2637 Aug 31 01:27 create_schemas.sql
-rw-rw-r--. 1 ec2-user ec2-user 1178 Aug 31 01:29 blob_db_ddl_table_function.sql
-rw-rw-r--. 1 ec2-user ec2-user 354 Aug 31 01:45 kidsdb_db_ddl_table.sql
-rw-rw-r--. 1 ec2-user ec2-user 4458 Aug 31 01:49 test_db_ddl_table.sql
-rw-rw-r--. 1 ec2-user ec2-user 1984 Aug 31 02:03 migration_db_ddl_table.sql
-rw-rw-r--. 1 ec2-user ec2-user 1817925 Aug 31 02:07 mrsdb_tables_views_ddl.sql
-rw-rw-r--. 1 ec2-user ec2-user 30362 Aug 31 02:18 dvdrental_tables_views_functions_sequences_ddl.sql
[root@jump-host All_Schema_Objectives_DDL_Databases]#
[root@jump-host Final_Scripts]# cd Validation_Scripts/
[root@jump-host Validation_Scripts]# pwd
/home/ec2-user/Final_Scripts/Validation_Scripts
[root@jump-host Validation_Scripts]# ls -ltr
-rwxr-xr-x. 1 root root 2820 Sep 1 15:30 row_count_comparison_final_consider.sh
[root@jump-host Validation_Scripts]#
(i) If there is any sample data in the target database [ MySQL ], truncate it using the method below (a loop that runs all the truncate scripts in one pass is sketched after these commands) ..
[root@jump-host Truncate_Scripts]# pwd
/home/ec2-user/Final_Scripts/Truncate_Scripts
[root@jump-host Truncate_Scripts]# ls -ltr
-rw-rw-r--. 1 ec2-user ec2-user 858 Sep 1 16:50 truncate_dvdrental.sql
-rw-rw-r--. 1 ec2-user ec2-user 87 Sep 1 16:54 kidsdb.sql
-rw-rw-r--. 1 ec2-user ec2-user 127 Sep 1 16:55 migration_db.sql
-rw-rw-r--. 1 ec2-user ec2-user 249 Sep 1 16:56 test_db.sql
-rw-rw-r--. 1 ec2-user ec2-user 88 Sep 1 16:56 blob.sql
-rw-rw-r--. 1 ec2-user ec2-user 34615 Sep 1 16:58 mrsdb.sql
[root@jump-host Truncate_Scripts]# mysql -u root -p -h 3.90.66.114 < truncate_dvdrental.sql
Enter password:
[root@jump-host Truncate_Scripts]#
[root@jump-host Truncate_Scripts]# mysql -u root -p -h 3.90.66.114 < kidsdb.sql
Enter password:
[root@jump-host Truncate_Scripts]#
[root@jump-host Truncate_Scripts]# mysql -u root -p -h 3.90.66.114 < test_db.sql
Enter password:
[root@jump-host Truncate_Scripts]#
[root@jump-host Truncate_Scripts]# mysql -u root -p -h 3.90.66.114 < migration_db.sql
Enter password:
[root@jump-host Truncate_Scripts]#
[root@jump-host Truncate_Scripts]# mysql -u root -p -h 3.90.66.114 < mrsdb.sql
Enter password:
[root@jump-host Truncate_Scripts]#
[root@jump-host Truncate_Scripts]# mysql -u root -p -h 3.90.66.114 < blob.sql
Enter password:
[root@jump-host Truncate_Scripts]#
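Alternatively, as referenced above, all the truncate scripts can be run in one pass with a small loop (a sketch; mysql prompts for the password on each file unless credentials are stored in ~/.my.cnf):
cd /home/ec2-user/Final_Scripts/Truncate_Scripts
for F in truncate_dvdrental.sql kidsdb.sql test_db.sql migration_db.sql mrsdb.sql blob.sql
do
    echo "Running $F"
    mysql -u root -p -h 3.90.66.114 < "$F"
done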
At this point, any existing table data in the target database has been truncated.
(ii) Creating the databases / schemas in the target MySQL database :-
[root@jump-host All_Schema_Objectives_DDL_Databases]# pwd
/home/ec2-user/Final_Scripts/All_Schema_Objectives_DDL_Databases
[root@jump-host All_Schema_Objectives_DDL_Databases]# ls -ltr
-rw-rw-r--. 1 ec2-user ec2-user 1178 Aug 31 01:29 blob_db_ddl_table_function.sql
-rw-rw-r--. 1 ec2-user ec2-user 354 Aug 31 01:45 kidsdb_db_ddl_table.sql
-rw-rw-r--. 1 ec2-user ec2-user 4458 Aug 31 01:49 test_db_ddl_table.sql
-rw-rw-r--. 1 ec2-user ec2-user 1984 Aug 31 02:03 migration_db_ddl_table.sql
-rw-rw-r--. 1 ec2-user ec2-user 1817925 Aug 31 02:07 mrsdb_tables_views_ddl.sql
-rw-rw-r--. 1 ec2-user ec2-user 30362 Aug 31 02:18 dvdrental_tables_views_functions_sequences_ddl.sql
-rw-rw-r--. 1 ec2-user ec2-user 2645 Sep 2 12:57 create_schemas.sql
[root@jump-host All_Schema_Objectives_DDL_Databases]# mysql -u root -p -h 3.90.66.114 < create_schemas.sql
Enter password:
[root@jump-host All_Schema_Objectives_DDL_Databases]#
[root@jump-host All_Schema_Objectives_DDL_Databases]# mysql -u root -p -h 3.90.66.114 -e "show databases;"
Enter password:
+-------------------------+
| Database |
+-------------------------+
| aws_postgresql_ext |
| aws_postgresql_ext_data |
| blob |
| dvdrental |
| information_schema |
| kidsdb |
| migration_db |
| mrsdb |
| mysql |
| performance_schema |
| sys |
| test_db |
+-------------------------+
[root@jump-host All_Schema_Objectives_DDL_Databases]#
So all databases have been created using the schema DDL.
(iii) Now run the table DDL for all databases in the target database [ MySQL ]
[root@jump-host All_Schema_Objectives_DDL_Databases]# pwd
/home/ec2-user/Final_Scripts/All_Schema_Objectives_DDL_Databases
[root@jump-host All_Schema_Objectives_DDL_Databases]# ls -ltr
total 1832
-rw-rw-r--. 1 ec2-user ec2-user 354 Aug 31 01:45 kidsdb_db_ddl_table.sql
-rw-rw-r--. 1 ec2-user ec2-user 4458 Aug 31 01:49 test_db_ddl_table.sql
-rw-rw-r--. 1 ec2-user ec2-user 1984 Aug 31 02:03 migration_db_ddl_table.sql
-rw-rw-r--. 1 ec2-user ec2-user 1817925 Aug 31 02:07 mrsdb_tables_views_ddl.sql
-rw-rw-r--. 1 ec2-user ec2-user 30362 Aug 31 02:18 dvdrental_tables_views_functions_sequences_ddl.sql
-rw-rw-r--. 1 ec2-user ec2-user 2645 Sep 2 12:57 create_schemas.sql
-rw-rw-r--. 1 ec2-user ec2-user 1178 Sep 2 13:07 blob_db_ddl_table_function.sql
[root@jump-host All_Schema_Objectives_DDL_Databases]# mysql -u root -p -h 3.90.66.114 < blob_db_ddl_table_function.sql
Enter password:
[root@jump-host All_Schema_Objectives_DDL_Databases]#
[root@jump-host All_Schema_Objectives_DDL_Databases]# mysql -u root -p -h 3.90.66.114 < test_db_ddl_table.sql
Enter password:
[root@jump-host All_Schema_Objectives_DDL_Databases]#
[root@jump-host All_Schema_Objectives_DDL_Databases]# mysql -u root -p -h 3.90.66.114 < kidsdb_db_ddl_table.sql
Enter password:
[root@jump-host All_Schema_Objectives_DDL_Databases]#
[root@jump-host All_Schema_Objectives_DDL_Databases]# mysql -u root -p -h 3.90.66.114 < migration_db_ddl_table.sql
Enter password:
[root@jump-host All_Schema_Objectives_DDL_Databases]#
[root@jump-host All_Schema_Objectives_DDL_Databases]# mysql -u root -p -h 3.90.66.114 < mrsdb_tables_views_ddl.sql
Enter password:
[root@jump-host All_Schema_Objectives_DDL_Databases]#
[root@jump-host All_Schema_Objectives_DDL_Databases]# mysql -u root -p -h 3.90.66.114 < dvdrental_ddl_tables_new.sql
Enter password:
[root@jump-host All_Schema_Objectives_DDL_Databases]#
[root@jump-host All_Schema_Objectives_DDL_Databases]# mysql -u root -p -h 3.90.66.114 < dvdrental_index_creation.sql
Enter password:
[root@jump-host All_Schema_Objectives_DDL_Databases]#
[root@jump-host All_Schema_Objectives_DDL_Databases]# mysql -u root -p -h 3.90.66.114 dvdrental < dvdrental_triggeres.sql
Enter password:
[root@jump-host All_Schema_Objectives_DDL_Databases]#
[root@jump-host All_Schema_Objectives_DDL_Databases]# mysql -u root -p -h 3.90.66.114 dvdrental < dvdrental_ddl_views.sql
Enter password:
[root@jump-host All_Schema_Objectives_DDL_Databases]#
[root@jump-host All_Schema_Objectives_DDL_Databases]# mysql -u root -p -h 3.90.66.114 dvdrental < dvdrental_functions.sql
Enter password:
[root@jump-host All_Schema_Objectives_DDL_Databases]#
[root@jump-host All_Schema_Objectives_DDL_Databases]# mysql -u root -p -h 3.90.66.114 dvdrental < dvdrental_procedures.sql
Enter password:
[root@jump-host All_Schema_Objectives_DDL_Databases]#
[root@jump-host ~]# mysql -u root -p -h 3.90.66.114 -e "show databases;"
Enter password:
+-------------------------+
| Database |
+-------------------------+
| aws_postgresql_ext |
| aws_postgresql_ext_data |
| blob |
| dvdrental |
| information_schema |
| kidsdb |
| migration_db |
| mrsdb |
| mysql |
| performance_schema |
| public |
| sys |
| test_db |
+-------------------------+
[root@jump-host ~]#
So far, the schema DDL and the table / view DDL have been created in the target database [ MySQL ]. A quick per-schema object count check is sketched below.
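As mentioned, the following sketch uses information_schema to count the objects that actually landed per schema on the target (run from the jump host with the same credentials as above):
mysql -u root -p -h 3.90.66.114 -e "SELECT table_schema, table_type, COUNT(*) AS objects FROM information_schema.tables WHERE table_schema IN ('blob','dvdrental','kidsdb','migration_db','mrsdb','test_db') GROUP BY table_schema, table_type ORDER BY table_schema;"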
------------------------------------------------------------------------------
Step - 16 Start the Data Migration from PostgreSQL to MySQL Database by using SQline Tool
------------------------------------------------------------------------------
Now it is time to migrate the data, using the steps below.
[root@jump-host test_db]# pwd
/home/ec2-user/sqlinesdata31773/test_db
[root@jump-host test_db]# ls -ltr
-rw-r--r--. 1 root root 5112 Sep 2 17:00 sqldata_test_db.cnf
[root@jump-host test_db]# nohup /home/ec2-user/sqlinesdata31773/sqldata -cfg=sqldata_test_db.cnf -log=test_db_$(date +"%d-%m-%Y").log &
[1] 2978
[root@jump-host test_db]# nohup: ignoring input and appending output to 'nohup.out'
[1]+ Done nohup /home/ec2-user/sqlinesdata31773/sqldata -cfg=sqldata_test_db.cnf -log=test_db_$(date +"%d-%m-%Y").log
[root@jump-host test_db]# ls -ltr
-rw-r--r--. 1 root root 5112 Sep 2 17:00 sqldata_test_db.cnf
-rw-r--r--. 1 root root 508 Sep 2 17:01 test_db_02-09-2021.log
-rw-------. 1 root root 366 Sep 2 17:01 nohup.out
[root@jump-host test_db]# cat nohup.out
SQLines Data 3.1.773 x86_64 Linux - Database Migration Tool.
Copyright (c) 2018 SQLines. All Rights Reserved.
Connecting to databases (44 ms)
PostgreSQL 13.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.4.1 20200928 (Red Hat 8.4.1-1), 64-bit (Ok, 44 ms)
MySQL 8.0.26 MySQL Community Server - GPL x86_64 Linux 1.2.11 (Ok, 18 ms)
Reading the database schema
[root@jump-host test_db]# cat test_db_02-09-2021.log
SQLines Data 3.1.773 x86_64 Linux - Database Migration Tool.
Copyright (c) 2018 SQLines. All Rights Reserved.
Current timestamp: 2021:09:02 17:01:08.216
Current directory: /home/ec2-user/sqlinesdata31773/test_db
Configuration file: sqldata_test_db.cnf
Connecting to databases (44 ms)
PostgreSQL 13.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.4.1 20200928 (Red Hat 8.4.1-1), 64-bit (Ok, 44 ms)
MySQL 8.0.26 MySQL Community Server - GPL x86_64 Linux 1.2.11 (Ok, 18 ms)
Reading the database schema
[root@jump-host test_db]# cat sqldata_test_db.cnf
-- SQLines Data options (contact us at support@sqlines.com to request a new option)
-sd=pg,postgres/postgres@3.229.186.86:5432,test_db
-- td=mysql, root@3.90.66.114,test_db
-td=mysql,root/Root@123@3.90.66.114,test_db
-- t=*.*
-t=*.*
-ss=6
-- out=/home/ec2-user/sqlinesdata31773/test_db/
-- log=test_db.log
-- DDL Options
--------------
-- Set yes to to migrate table definitions, or no to not migrate
-- DDL statements executed depends on -topt option (CREATE; DROP and CREATE TABLE, or TRUNCATE TABLE)
-ddl_tables=no
-- Set yes to to migrate constraints (this is default), or no to not migrate
-constraints=no
-- Set yes to to migrate indexes (this is default), or no to not migrate
-indexes=no
-- Data Options
---------------
-- Set yes to to migrate data (this is default), or no to not migrate
-data=yes
-- The maximum number of rows in read/write batch, by default it is defined by the buffer size divided by the table row size
-batch_max_rows=10000
-- Work with LOBs as VARCHAR columns to increase performance (use if LOB columns contain short values less than a few MB i.e.), default is no
-fetch_lob_as_varchar=
-- Fixed size buffer in bytes to read LOB values by binding not by reading part by part (can cause truncation error if the buffer is less than
-- the maximum LOB value, default is to read LOBs by separate calls
-lob_bind_buffer=1000000
-- When converting from ASCII or UTF16/UCS-2 character sets in the source database to UTF8 i.e. in the target database depending on
-- the actual data you may need greater storage size. And vice versa converting in opposite direction you may require smaller storage size.
-- This parameter specifies the length change ratio. If the source length is 100, and ratio is 1.1 then the target length will be 110
-char_length_ratio=
-- MariaDB options
------------------
-- Path to MariaDB library including the file name. For example, for MariaDB Connector C on Windows:
-- -mariadb_lib=C:\Program Files\MariaDB\MariaDB Connector C 64-bit\lib\libmariadb.dll
-- By default, on Windows the tool tries to load libmariadb.dll library from PATH; if not found the tool tries to use MySQL connector libmysql.dll
-mariadb_lib=
-- Disable or enable binary logging for the connection (the client must have the SUPER privilege for this operation). By default, MariaDB default is used.
-mariadb_set_sql_log_bin=0
-- Set global max_allowed_packet option to the specified value (use only values in multiples of 1024, for example, 1073741824 for 1GB)
-- Use this option when you receive 'MySQL has gone away' error during the data load
-mariadb_max_allowed_packet=
-- MySQL options
----------------
-- Set the character set for the connection (utf8 i.e.)
-mysql_set_character_set=utf8mb4
-- Set the collation for the connection
-mysql_set_collation=utf8mb4_bin
-- Run SET FOREIGN_KEY_CHECKS=value at the beginning of each session, not executed if no value set (MySQL as target)
-mysql_set_foreign_key_checks=0
-- Collate used for data validation. Use _bin or _cs collates to order values ABCabc instead of AaBbCc
-mysql_validation_collate=latin1_bin
-- SQL Server options
---------------------
-- Codepage for input data (BCP -C option)
-bcp_codepage=
-- Oracle options
-----------------
-- Path to Oracle OCI library including the file name. For example, for Oracle on Windows:
-- -oci_lib=C:\oraclexe\app\oracle\product\11.2.0\server\bin\oci.dll
-- By default, on Windows the tool tries to load oci.dll library from PATH; if not found the tool tries to search Windows registry to find Oracle client installations
-- See also http://www.sqlines.com/sqldata_oracle_connection
-oci_lib=
-- NLS_LANG setting to use for Oracle connection
-oracle_nls_lang=
-- PostgreSQL options
---------------------
-- Set client encoding (you can also use PGCLIENTENCODING environment variable)
-pg_client_encoding=latin1
-- Sybase ASE options
---------------------
-- Set to yes to use encrypted password handshakes with the server
-sybase_encrypted_password=
-- Sybase ASA (Sybase SQL Anywhere) options
-------------------------------------------
-- Set to yes to extract all character data as 2-byte Unicode (UTF-16/UCS-2)
-sybase_asa_char_as_wchar=
-- Informix options
-------------------
-- Set CLIENT_LOCALE for connection (Note that Setnet32 and environment variable has no effect on Informix ODBC driver)
-informix_client_locale=
-- Validation Options
---------------------
-- Maximum number of found not equal rows per table after which the validation stops for this table, by default no limit set
-validation_not_equal_max_rows=
-- Number of digits in the fractional part (milliseconds, microseconds i.e.) of datetime/timestamp values to validate. By default, all digits are compared
-validation_datetime_fraction=
-- Misc Options
---------------
-- Set to yes to generate trace file sqldata.trc with debug information, default is no
-trace=
-- Set to yes to create dump files containing data for tables
-trace_data=
-- Set to yes to create dump files containing differences in data found during data validation
-trace_diff_data=
Note :-
In my case the sqlinesdata31773 version had some issues, so I switched to the sqlinesdata-3.3.171_x86_64_linux.tar.gz version and tried to load the data.
Tried with the above version :-
------------------------------
[root@jump-host sqlinesdata-3.3.171_x86_64_linux]# pwd
/home/ec2-user/sqlinesdata-3.3.171_x86_64_linux
[root@jump-host sqlinesdata-3.3.171_x86_64_linux]# ls -ltr
-rw-r--r--. 1 root root 7212 Aug 31 12:47 sqldata.cfg
-rw-r--r--. 1 root root 361 Sep 5 20:27 sqlines_twhere.txt
-rw-r--r--. 1 root root 579 Sep 5 20:27 sqlines_tsel.txt
-rw-r--r--. 1 root root 498 Sep 5 20:27 sqlines_tsel_all.txt
-rw-r--r--. 1 root root 192 Sep 5 20:27 sqlines_tmap.txt
-rw-r--r--. 1 root root 273 Sep 5 20:27 sqlines_tf.txt
-rw-r--r--. 1 root root 316 Sep 5 20:27 sqlines_qf.txt
-rw-r--r--. 1 root root 190 Sep 5 20:27 sqlines_dtmap.txt
-rw-r--r--. 1 root root 202 Sep 5 20:27 sqlines_cnsmap.txt
-rw-r--r--. 1 root root 401 Sep 5 20:27 sqlines_cmap.txt
-rw-r--r--. 1 root root 188 Sep 5 20:27 readme.txt
-rw-r--r--. 1 root root 118 Sep 5 20:27 license.txt
-rwxr-xr-x. 1 root root 11670344 Sep 5 20:27 libQtGui.so
-rwxr-xr-x. 1 root root 2701504 Sep 5 20:27 libQtCore.so
-rwxr-xr-x. 1 root root 2051152 Sep 5 20:29 sqldata
-rw-r--r--. 1 root root 272 Sep 6 00:45 sqldata.log
[root@jump-host sqlinesdata-3.3.171_x86_64_linux]# ./sqldata -sd=pg,postgres/postgres@3.229.186.86,test_db -td=mysql,root/Root@123@18.234.112.125,test_db -t=users -log=test_db_user_table_171_$(date +"%d-%m-%Y").log
SQLines Data 3.3.171 x86_64 Linux - Database Migration Tool.
Copyright (c) 2021 SQLines. All Rights Reserved.
FOR EVALUATION USE ONLY, MAY SKIP 1 ROW FROM A TABLE
Connecting to databases (105 ms)
PostgreSQL 13.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.4.1 20200928 (Red Hat 8.4.1-1), 64-bit (Ok, 105 ms)
MySQL 8.0.26 MySQL Community Server - GPL x86_64 Linux 1.2.11 (Ok, 59 ms)
Reading the database schema (1 table, 28 ms)
Transferring database (1 session):
public.users - Started (1 of 1, session 1)
public.users - Open cursor (100000 rows read, 63 ms, session 1)
public.users - Drop target table (10 ms, session 1)
public.users - Create target table (26 ms, session 1)
SHOW WARNINGS: LOAD DATA LOCAL INFILE 'sqldata.in_memory' IGNORE INTO TABLE public.`users` CHARACTER SET utf8mb4
Level: Error; Code: 3948; Message: Loading local data is disabled; this must be enabled on both the client and server sides
public.users - Data transfer failed
Loading local data is disabled; this must be enabled on both the client and server sides
Rows read: 100000 (1587302 rows/sec)
Rows written: 0 (0 rows/sec, 0 bytes, 0 bytes/sec)
Transfer time: 104 ms (63 ms read, 0 ms write)
Failed tables (1 table):
1. public.users - Loading local data is disabled; this must be enabled on both the client and server sides
public.users
Please contact us at support@sqlines.com for any assistance.
Summary:
Tables: 1 (0 Ok, 1 failed)
Target DDL: 2 (2 Ok, 0 failed)
Rows read: 100000
Rows written: 0 (100000 row difference)
Transfer time: 106 ms (0 rows/sec, 0 bytes, 0 bytes/sec)
Logs:
Execution log: test_db_user_table_171_06-09-2021.log
DDL SQL statements: sqldata_ddl.sql
[root@jump-host sqlinesdata-3.3.171_x86_64_linux]#
So this is a known issue with the " local_infile " parameter on the target MySQL database side.
By default this parameter is OFF, so we should enable it (a way to make the setting persist across restarts is sketched after the session below).
[root@jump-host sqlinesdata-3.3.171_x86_64_linux]# mysql -u root -p -h 18.234.112.125
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 295
Server version: 8.0.26 MySQL Community Server - GPL
Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show variables like 'local_infile%';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| local_infile | OFF |
+---------------+-------+
1 row in set (0.00 sec)
mysql> set global local_infile=true;
Query OK, 0 rows affected (0.00 sec)
mysql> show variables like 'local_infile%';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| local_infile | ON |
+---------------+-------+
1 row in set (0.00 sec)
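Note that SET GLOBAL lasts only until the next server restart. On MySQL 8.0 the change can be persisted as well (a sketch; alternatively add local_infile=ON under [mysqld] in the target server's my.cnf):
# Persist the setting across restarts and confirm it
mysql -u root -p -h 18.234.112.125 -e "SET PERSIST local_infile = ON;"
mysql -u root -p -h 18.234.112.125 -e "SHOW VARIABLES LIKE 'local_infile%';"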
Now start the test_db database migration using the steps below (a simple way to monitor the background run is sketched right after the command) ...
nohup /home/ec2-user/sqlinesdata33171/sqldata -cfg=sqldata_test_db.cnf -log=test_db_$(date +"%d-%m-%Y").log &
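As mentioned above, while the nohup job runs in the background, progress can be monitored from the same directory (a simple sketch, assuming the command is launched from /home/ec2-user/sqlinesdata33171/test_db):
tail -f test_db_$(date +"%d-%m-%Y").log   # follow the SQLines execution log as rows are transferred
ps -ef | grep [s]qldata                   # confirm the sqldata process is still running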
A single-table transfer works fine :-
--------------------------------------
[root@jump-host sqlinesdata33171]# pwd
/home/ec2-user/sqlinesdata33171
[root@jump-host sqlinesdata33171]# ls -ltr
-rw-r--r--. 1 root root 7212 Aug 31 12:47 sqldata.cfg
-rw-r--r--. 1 root root 361 Sep 5 20:27 sqlines_twhere.txt
-rw-r--r--. 1 root root 579 Sep 5 20:27 sqlines_tsel.txt
-rw-r--r--. 1 root root 498 Sep 5 20:27 sqlines_tsel_all.txt
-rw-r--r--. 1 root root 192 Sep 5 20:27 sqlines_tmap.txt
-rw-r--r--. 1 root root 273 Sep 5 20:27 sqlines_tf.txt
-rw-r--r--. 1 root root 316 Sep 5 20:27 sqlines_qf.txt
-rw-r--r--. 1 root root 190 Sep 5 20:27 sqlines_dtmap.txt
-rw-r--r--. 1 root root 202 Sep 5 20:27 sqlines_cnsmap.txt
-rw-r--r--. 1 root root 401 Sep 5 20:27 sqlines_cmap.txt
-rw-r--r--. 1 root root 188 Sep 5 20:27 readme.txt
-rw-r--r--. 1 root root 118 Sep 5 20:27 license.txt
-rwxr-xr-x. 1 root root 11670344 Sep 5 20:27 libQtGui.so
-rwxr-xr-x. 1 root root 2701504 Sep 5 20:27 libQtCore.so
-rwxr-xr-x. 1 root root 2051152 Sep 5 20:29 sqldata
drwxr-xr-x. 2 root root 6 Sep 6 01:05 mrsdb
drwxr-xr-x. 2 root root 6 Sep 6 01:05 migration_db
drwxr-xr-x. 2 root root 6 Sep 6 01:05 dvdrental
drwxr-xr-x. 2 root root 6 Sep 6 01:06 blob
drwxr-xr-x. 2 root root 6 Sep 6 01:06 kidsdb
drwxr-xr-x. 2 root root 33 Sep 6 05:16 test_db
[root@jump-host sqlinesdata33171]# ./sqldata -sd=pg,postgres/postgres@3.229.186.86,test_db -td=mysql,root/Root@123@18.234.112.125,test_db -smap=public:test_db -ddl_tables=no -constraints=no -indexes=no -data=yes -t=posts -log=test_db_posts_table_171_$(date +"%d-%m-%Y").log
SQLines Data 3.3.171 x86_64 Linux - Database Migration Tool.
Copyright (c) 2021 SQLines. All Rights Reserved.
FOR EVALUATION USE ONLY, MAY SKIP 1 ROW FROM A TABLE
Connecting to databases (195 ms)
PostgreSQL 13.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.4.1 20200928 (Red Hat 8.4.1-1), 64-bit (Ok, 195 ms)
MySQL 8.0.26 MySQL Community Server - GPL x86_64 Linux 1.2.11 (Ok, 9 ms)
Reading the database schema (1 table, 2 ms)
Transferring database (1 session):
public.posts - Started (1 of 1, session 1)
public.posts - Open cursor (50 rows read, 18 ms, session 1)
SHOW WARNINGS: LOAD DATA LOCAL INFILE 'sqldata.in_memory' IGNORE INTO TABLE test_db.`posts` CHARACTER SET utf8mb4
Level: Warning; Code: 1062; Message: Duplicate entry '1' for key 'posts.PRIMARY'
Level: Warning; Code: 1062; Message: Duplicate entry '2' for key 'posts.PRIMARY'
....
.... (Warning 1062 'Duplicate entry' messages repeat for keys '3' through '49' on posts.PRIMARY)
Level: Warning; Code: 1062; Message: Duplicate entry '50' for key 'posts.PRIMARY'
public.posts - Data transfer complete (session 1)
Rows read: 50 (2778 rows/sec)
Rows written: 50 (50000 rows/sec, 1.1 KB, 1.1 MB/sec)
Transfer time: 21 ms (18 ms read, 1 ms write)
Summary:
Tables: 1 (1 Ok, 0 failed)
Target DDL: 0 (0 Ok, 0 failed)
Rows read: 50
Rows written: 50
Transfer time: 22 ms (2273 rows/sec, 1.1 KB, 50.9 KB/sec)
Logs:
Execution log: test_db_posts_table_171_06-09-2021.log
DDL SQL statements: sqldata_ddl.sql
[root@jump-host sqlinesdata33171]# mysql -u root -p -h 18.234.112.125
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 9251
Server version: 8.0.26 MySQL Community Server - GPL
Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> use test_db;
Database changed
mysql> show tables;
+-------------------+
| Tables_in_test_db |
+-------------------+
| comments |
| genres |
| kids_audit |
| kids_name |
| posts |
+-------------------+
5 rows in set (0.00 sec)
mysql> select count(1) from posts;
+----------+
| count(1) |
+----------+
| 50 |
+----------+
1 row in set (0.00 sec)
So the test_db.posts data has been migrated.
The Postgres side also has 50 rows:
test_db=# select count(1) from posts;
count
-------
50
(1 row)
test_db=#
So the single-table transfer works fine.
mysql> SELECT SCHEMA_NAME,DEFAULT_CHARACTER_SET_NAME,DEFAULT_COLLATION_NAME from information_schema.SCHEMATA;
+-------------------------+----------------------------+------------------------+
| SCHEMA_NAME             | DEFAULT_CHARACTER_SET_NAME | DEFAULT_COLLATION_NAME |
+-------------------------+----------------------------+------------------------+
| mysql                   | utf8mb4                    | utf8mb4_0900_ai_ci     |
| information_schema      | utf8                       | utf8_general_ci        |
| performance_schema      | utf8mb4                    | utf8mb4_0900_ai_ci     |
| sys                     | utf8mb4                    | utf8mb4_0900_ai_ci     |
| kidsdb                  | utf8mb4                    | utf8mb4_0900_ai_ci     |
| blob                    | utf8mb4                    | utf8mb4_0900_ai_ci     |
| migration_db            | utf8mb4                    | utf8mb4_0900_ai_ci     |
| mrsdb                   | utf8mb4                    | utf8mb4_0900_ai_ci     |
| test_db                 | utf8mb4                    | utf8mb4_0900_ai_ci     |
| aws_postgresql_ext      | utf8mb4                    | utf8mb4_0900_ai_ci     |
| aws_postgresql_ext_data | utf8mb4                    | utf8mb4_0900_ai_ci     |
| public                  | utf8mb4                    | utf8mb4_0900_ai_ci     |
| dvdrental               | utf8mb4                    | utf8mb4_0900_ai_ci     |
+-------------------------+----------------------------+------------------------+
13 rows in set (0.00 sec)
The below method / steps can be followed for the final data migration :-
---------------------------------------------------------------------
Gathering the constraint ADD / DROP statements (a loop that captures the generated statements into per-schema .sql files is sketched after the DROP statements below) :-
---------------------------------------
(i)
ADD Constraints :-
---------------------
SELECT concat('ALTER TABLE ', TABLE_NAME, ' ADD CONSTRAINT ', CONSTRAINT_NAME, ' FOREIGN KEY ( ',COLUMN_NAME,' ) REFERENCES ',REFERENCED_TABLE_NAME,' (',REFERENCED_COLUMN_NAME,' ) ;') FROM information_schema.key_column_usage WHERE CONSTRAINT_SCHEMA = 'blob' AND referenced_table_name IS NOT NULL;
SELECT concat('ALTER TABLE ', TABLE_NAME, ' ADD CONSTRAINT ', CONSTRAINT_NAME, ' FOREIGN KEY ( ',COLUMN_NAME,' ) REFERENCES ',REFERENCED_TABLE_NAME,' (',REFERENCED_COLUMN_NAME,' ) ;') FROM information_schema.key_column_usage WHERE CONSTRAINT_SCHEMA = 'dvdrental' AND referenced_table_name IS NOT NULL;
Empty set (0.01 sec)
SELECT concat('ALTER TABLE ', TABLE_NAME, ' ADD CONSTRAINT ', CONSTRAINT_NAME, ' FOREIGN KEY ( ',COLUMN_NAME,' ) REFERENCES ',REFERENCED_TABLE_NAME,' (',REFERENCED_COLUMN_NAME,' ) ;') FROM information_schema.key_column_usage WHERE CONSTRAINT_SCHEMA = 'kidsdb' AND referenced_table_name IS NOT NULL;
SELECT concat('ALTER TABLE ', TABLE_NAME, ' ADD CONSTRAINT ', CONSTRAINT_NAME, ' FOREIGN KEY ( ',COLUMN_NAME,' ) REFERENCES ',REFERENCED_TABLE_NAME,' (',REFERENCED_COLUMN_NAME,' ) ;') FROM information_schema.key_column_usage WHERE CONSTRAINT_SCHEMA = 'migration_db' AND referenced_table_name IS NOT NULL;
SELECT concat('ALTER TABLE ', TABLE_NAME, ' ADD CONSTRAINT ', CONSTRAINT_NAME, ' FOREIGN KEY ( ',COLUMN_NAME,' ) REFERENCES ',REFERENCED_TABLE_NAME,' (',REFERENCED_COLUMN_NAME,' ) ;') FROM information_schema.key_column_usage WHERE CONSTRAINT_SCHEMA = 'test_db' AND referenced_table_name IS NOT NULL;
SELECT concat('ALTER TABLE ', TABLE_NAME, ' ADD CONSTRAINT ', CONSTRAINT_NAME, ' FOREIGN KEY ( ',COLUMN_NAME,' ) REFERENCES ',REFERENCED_TABLE_NAME,' (',REFERENCED_COLUMN_NAME,' ) ;') FROM information_schema.key_column_usage WHERE CONSTRAINT_SCHEMA = 'mrsdb' AND referenced_table_name IS NOT NULL;
DROP Constraints :-
--------------------------
SELECT concat('ALTER TABLE ', TABLE_NAME, ' DROP FOREIGN KEY ', CONSTRAINT_NAME, ';') FROM information_schema.key_column_usage WHERE CONSTRAINT_SCHEMA = 'test_db' AND referenced_table_name IS NOT NULL;
SELECT concat('ALTER TABLE ', TABLE_NAME, ' DROP FOREIGN KEY ', CONSTRAINT_NAME, ';') FROM information_schema.key_column_usage WHERE CONSTRAINT_SCHEMA = 'migration_db' AND referenced_table_name IS NOT NULL;
SELECT concat('ALTER TABLE ', TABLE_NAME, ' DROP FOREIGN KEY ', CONSTRAINT_NAME, ';') FROM information_schema.key_column_usage WHERE CONSTRAINT_SCHEMA = 'kidsdb' AND referenced_table_name IS NOT NULL;
SELECT concat('ALTER TABLE ', TABLE_NAME, ' DROP FOREIGN KEY ', CONSTRAINT_NAME, ';') FROM information_schema.key_column_usage WHERE CONSTRAINT_SCHEMA = 'dvdrental' AND referenced_table_name IS NOT NULL;
SELECT concat('ALTER TABLE ', TABLE_NAME, ' DROP FOREIGN KEY ', CONSTRAINT_NAME, ';') FROM information_schema.key_column_usage WHERE CONSTRAINT_SCHEMA = 'blob' AND referenced_table_name IS NOT NULL;
SELECT concat('ALTER TABLE ', TABLE_NAME, ' DROP FOREIGN KEY ', CONSTRAINT_NAME, ';') FROM information_schema.key_column_usage WHERE CONSTRAINT_SCHEMA = 'mrsdb' AND referenced_table_name IS NOT NULL;
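As referenced above, the generator queries can be captured into per-schema .sql files with a loop instead of being run one by one (generate_constraint_scripts.sh is a hypothetical helper; 18.234.112.125 is the target host used in this part of the document). Note that, just like the interactive statements above, composite foreign keys produce one line per column with this simple query, so review the generated files before applying them, and apply the ADD scripts with the schema selected (mysql ... <schema> < file).
#!/bin/bash
# generate_constraint_scripts.sh (hypothetical helper) -- capture ADD / DROP foreign key statements per schema
OUT_DIR=/home/ec2-user/Final_Scripts/constraints
mkdir -p $OUT_DIR
for SCHEMA in blob dvdrental kidsdb migration_db test_db mrsdb
do
    mysql -u root -p -h 18.234.112.125 -N -s -e "SELECT CONCAT('ALTER TABLE ',TABLE_NAME,' ADD CONSTRAINT ',CONSTRAINT_NAME,' FOREIGN KEY (',COLUMN_NAME,') REFERENCES ',REFERENCED_TABLE_NAME,' (',REFERENCED_COLUMN_NAME,');') FROM information_schema.key_column_usage WHERE CONSTRAINT_SCHEMA='$SCHEMA' AND referenced_table_name IS NOT NULL;" > $OUT_DIR/constraints_${SCHEMA}_add.sql
    mysql -u root -p -h 18.234.112.125 -N -s -e "SELECT CONCAT('ALTER TABLE ',TABLE_NAME,' DROP FOREIGN KEY ',CONSTRAINT_NAME,';') FROM information_schema.key_column_usage WHERE CONSTRAINT_SCHEMA='$SCHEMA' AND referenced_table_name IS NOT NULL;" > $OUT_DIR/constraints_${SCHEMA}_drop.sql
done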
(ii)
[root@jump-host constraints]# pwd
/home/ec2-user/Final_Scripts/constraints
[root@jump-host constraints]# ls -ltr
-rw-r--r--. 1 root root 109157 Sep 6 08:02 constraints_test_db_mrsdb_add.sql
-rw-r--r--. 1 root root 56394 Sep 6 12:02 constraints_test_db_mrsdb_drop.sql
[root@jump-host constraints]#
So here I have prepared the ADD / DROP constraint scripts for the mrsdb and test_db databases in the target database [ MySQL ].
(iii) First, drop the constraints in the test_db and mrsdb databases ...
[root@jump-host ~]# mysql -u root -p -h 18.234.112.125 < /home/ec2-user/Final_Scripts/constraints/constraints_test_db_mrsdb_drop.sql
Enter password:
[root@jump-host ~]#
Here I have dropped the constraints in the mrsdb and test_db databases ...
(iv)
[root@jump-host mrsdb]# pwd
/home/ec2-user/sqlinesdata33171/mrsdb
[root@jump-host mrsdb]# ls -ltr
-rw-r--r--. 1 root root 7417 Sep 6 12:21 sqldata_mrsdb.cnf
[root@jump-host mrsdb]# cat sqldata_mrsdb.cnf
-- SQLines Data options (contact us at support@sqlines.com to request a new option)
--- Please note that all configuration options can be specified in the command line as well.
-sd=pg,postgres/postgres@3.229.186.86:5432,mrsdb
-td=mysql,root/Root@123@18.234.112.125,mrsdb
-t=*.*
-ss=6
-smap=public:mrsdb
-out=/home/ec2-user/sqlinesdata33171/mrsdb/
-log=mrsdb.log
-- DDL Options
--------------
-- Set yes to to migrate table definitions, or no to not migrate
-- DDL statements executed depends on -topt option (CREATE; DROP and CREATE TABLE, or TRUNCATE TABLE)
-ddl_tables=no
-- Set yes to to migrate constraints (this is default), or no to not migrate
-- Please note that if -topt=truncate or -topt=none is specified, constraints are not migrated
-constraints=no
-- Set yes to to migrate indexes (this is default), or no to not migrate
-- Please note that if -topt=truncate or -topt=none is specified, indexes are not migrated
-indexes=no
-- Data Options
---------------
-- Set yes to to migrate data (this is default), or no to not migrate
-data=yes
[root@jump-host sqlinesdata33171]# pwd
/home/ec2-user/sqlinesdata33171
[root@jump-host sqlinesdata33171]# ls -ltr
-rw-r--r--. 1 root root 7212 Aug 31 12:47 sqldata.cfg
-rw-r--r--. 1 root root 361 Sep 5 20:27 sqlines_twhere.txt
-rw-r--r--. 1 root root 579 Sep 5 20:27 sqlines_tsel.txt
-rw-r--r--. 1 root root 498 Sep 5 20:27 sqlines_tsel_all.txt
-rw-r--r--. 1 root root 192 Sep 5 20:27 sqlines_tmap.txt
-rw-r--r--. 1 root root 273 Sep 5 20:27 sqlines_tf.txt
-rw-r--r--. 1 root root 316 Sep 5 20:27 sqlines_qf.txt
-rw-r--r--. 1 root root 190 Sep 5 20:27 sqlines_dtmap.txt
-rw-r--r--. 1 root root 202 Sep 5 20:27 sqlines_cnsmap.txt
-rw-r--r--. 1 root root 401 Sep 5 20:27 sqlines_cmap.txt
-rw-r--r--. 1 root root 188 Sep 5 20:27 readme.txt
-rw-r--r--. 1 root root 118 Sep 5 20:27 license.txt
-rwxr-xr-x. 1 root root 11670344 Sep 5 20:27 libQtGui.so
-rwxr-xr-x. 1 root root 2701504 Sep 5 20:27 libQtCore.so
-rwxr-xr-x. 1 root root 2051152 Sep 5 20:29 sqldata
drwxr-xr-x. 2 root root 6 Sep 6 01:05 migration_db
drwxr-xr-x. 2 root root 6 Sep 6 01:05 dvdrental
drwxr-xr-x. 2 root root 6 Sep 6 01:06 blob
drwxr-xr-x. 2 root root 6 Sep 6 01:06 kidsdb
drwxr-xr-x. 2 root root 33 Sep 6 06:17 test_db
drwxr-xr-x. 2 root root 31 Sep 6 12:22 mrsdb
[root@jump-host sqlinesdata33171]# ./sqldata -cfg=/home/ec2-user/sqlinesdata33171/mrsdb/sqldata_mrsdb.cnf -log=mrsdb_$(date +"%d-%m-%Y").log &
[1] 4222
[root@jump-host sqlinesdata33171]#
SQLines Data 3.3.171 x86_64 Linux - Database Migration Tool.
Copyright (c) 2021 SQLines. All Rights Reserved.
FOR EVALUATION USE ONLY, MAY SKIP 1 ROW FROM A TABLE
Connecting to databases (325 ms)
PostgreSQL 13.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.4.1 20200928 (Red Hat 8.4.1-1), 64-bit (Ok, 325 ms)
MySQL 8.0.26 MySQL Community Server - GPL x86_64 Linux 1.2.11 (Ok, 10 ms)
Reading the database schema (847 tables, 572 ms)
Transferring database (6 concurrent sessions):
public.po_bgattributedefinition - Started (1 of 847, session 1)
public.po_collectionentry_3 - Started (2 of 847, session 2)
public.po_collectionentry_1 - Started (3 of 847, session 3)
public.po_bgattributedefinition - Open cursor (0 rows read, 246 ms, session 1)
public.po_bgattributedefinition - Data transfer complete (session 1)
....
....
427. public.po_useraccountsecuritycon (4 rows read, 3 rows written, 1 rows difference)
428. public.po_workflo1 (4 rows read, 3 rows written, 1 rows difference)
429. public.po_sourcedatastatistics (1 rows read, 0 rows written, 1 rows difference)
Please contact us at support@sqlines.com for any assistance.
Summary:
Tables: 847 (847 Ok, 0 failed)
Target DDL: 0 (0 Ok, 0 failed)
Rows read: 175343
Rows written: 174914 (429 row difference)
Transfer time: 14.7 sec (11866 rows/sec, 15.4 MB, 1.0 MB/sec)
Logs:
Execution log: mrsdb_06-09-2021.log
DDL SQL statements: sqldata_ddl.sql
^C
[1]+ Done ./sqldata -cfg=/home/ec2-user/sqlinesdata33171/mrsdb/sqldata_mrsdb.cnf -log=mrsdb_$(date +"%d-%m-%Y").log
[root@jump-host sqlinesdata33171]#
[root@jump-host sqlinesdata33171]# cd mrsdb
[root@jump-host mrsdb]# pwd
/home/ec2-user/sqlinesdata33171/mrsdb
[root@jump-host mrsdb]# ls -ltr
-rw-r--r--. 1 root root 7417 Sep 6 12:21 sqldata_mrsdb.cnf
-rw-r--r--. 1 root root 329586 Sep 6 12:54 mrsdb_06-09-2021.log
[root@jump-host mrsdb]#
Since this is the newer evaluation build (FOR EVALUATION USE ONLY, MAY SKIP 1 ROW FROM A TABLE), each table shows a 1-row difference ...
mysql> show create table pc_dbobject\G
*************************** 1. row ***************************
Table: pc_dbobject
Create Table: CREATE TABLE `pc_dbobject` (
`pca_ormstate_id` bigint NOT NULL,
`pcd_dbobject_create` text,
`pcd_dbobject_drop` text,
KEY `pisc_nn_dbobject_ormstate` (`pca_ormstate_id`) USING BTREE,
CONSTRAINT `fk_pc_dbobject_pc_` FOREIGN KEY (`pca_ormstate_id`) REFERENCES `pc_ormstate` (`pco_ormstateid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
1 row in set (0.00 sec)
mysql> select count(1) from information_schema.KEY_COLUMN_USAGE where TABLE_schema = 'mrsdb';
+----------+
| count(1) |
+----------+
| 3657 |
+----------+
1 row in set (0.10 sec)
So the constraints are also there and have come across automatically...
Now the blob database with the new version :-
--------------------------------------------
[root@jump-host Truncate_Scripts]# pwd
/home/ec2-user/Final_Scripts/Truncate_Scripts
[root@jump-host Truncate_Scripts]# ls -ltr
-rw-rw-r--. 1 ec2-user ec2-user 87 Sep 1 16:54 kidsdb.sql
-rw-rw-r--. 1 ec2-user ec2-user 127 Sep 1 16:55 migration_db.sql
-rw-rw-r--. 1 ec2-user ec2-user 249 Sep 1 16:56 test_db.sql
-rw-rw-r--. 1 ec2-user ec2-user 88 Sep 1 16:56 blob.sql
-rw-rw-r--. 1 ec2-user ec2-user 858 Sep 2 12:23 truncate_dvdrental.sql
-rw-rw-r--. 1 ec2-user ec2-user 34615 Sep 6 14:28 mrsdb.sql
[root@jump-host Truncate_Scripts]# mysql -u root -p -h 18.234.112.125 < blob.sql
Enter password:
[root@jump-host Truncate_Scripts]#
[root@jump-host sqlinesdata33171]# pwd
/home/ec2-user/sqlinesdata33171
[root@jump-host sqlinesdata33171]# ls -ltr
-rw-r--r--. 1 root root 7212 Aug 31 12:47 sqldata.cfg
-rw-r--r--. 1 root root 361 Sep 5 20:27 sqlines_twhere.txt
-rw-r--r--. 1 root root 579 Sep 5 20:27 sqlines_tsel.txt
-rw-r--r--. 1 root root 498 Sep 5 20:27 sqlines_tsel_all.txt
-rw-r--r--. 1 root root 192 Sep 5 20:27 sqlines_tmap.txt
-rw-r--r--. 1 root root 273 Sep 5 20:27 sqlines_tf.txt
-rw-r--r--. 1 root root 316 Sep 5 20:27 sqlines_qf.txt
-rw-r--r--. 1 root root 190 Sep 5 20:27 sqlines_dtmap.txt
-rw-r--r--. 1 root root 202 Sep 5 20:27 sqlines_cnsmap.txt
-rw-r--r--. 1 root root 401 Sep 5 20:27 sqlines_cmap.txt
-rw-r--r--. 1 root root 188 Sep 5 20:27 readme.txt
-rw-r--r--. 1 root root 118 Sep 5 20:27 license.txt
-rwxr-xr-x. 1 root root 11670344 Sep 5 20:27 libQtGui.so
-rwxr-xr-x. 1 root root 2701504 Sep 5 20:27 libQtCore.so
-rwxr-xr-x. 1 root root 2051152 Sep 5 20:29 sqldata
drwxr-xr-x. 2 root root 6 Sep 6 01:05 migration_db
drwxr-xr-x. 2 root root 6 Sep 6 01:05 dvdrental
drwxr-xr-x. 2 root root 6 Sep 6 01:06 kidsdb
drwxr-xr-x. 2 root root 33 Sep 6 06:17 test_db
drwxr-xr-x. 2 root root 92 Sep 6 15:56 mrsdb
drwxr-xr-x. 2 root root 30 Sep 6 16:13 blob
[root@jump-host sqlinesdata33171]# ./sqldata -cfg=/home/ec2-user/sqlinesdata33171/blob/sqldata_blob.cnf -log=blob_$(date +"%d-%m-%Y").log &
SQLines Data 3.3.171 x86_64 Linux - Database Migration Tool.
Copyright (c) 2021 SQLines. All Rights Reserved.
FOR EVALUATION USE ONLY, MAY SKIP 1 ROW FROM A TABLE
Connecting to databases (94 ms)
PostgreSQL 13.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.4.1 20200928 (Red Hat 8.4.1-1), 64-bit (Ok, 94 ms)
MySQL 8.0.26 MySQL Community Server - GPL x86_64 Linux 1.2.11 (Ok, 8 ms)
Reading the database schema (1 table, 22 ms)
Transferring database (1 session):
public.blob_table - Started (1 of 1, session 1)
public.blob_table - Open cursor (3 rows read, 127 ms, session 1)
SHOW WARNINGS: LOAD DATA LOCAL INFILE 'sqldata.in_memory' IGNORE INTO TABLE blob.`blob_table` CHARACTER SET utf8mb4
Level: Error; Code: 1064; Message: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'blob.`blob_table` CHARACTER SET utf8mb4' at line 1
public.blob_table - Data transfer failed
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'blob.`blob_table` CHARACTER SET utf8mb4' at line 1
Rows read: 3 (24 rows/sec)
Rows written: 0 (0 rows/sec, 0 bytes, 0 bytes/sec)
Transfer time: 129 ms (127 ms read, 0 ms write)
Failed tables (1 table):
1. public.blob_table - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'blob.`blob_table` CHARACTER SET utf8mb4' at line 1
public.blob_table
Please contact us at support@sqlines.com for any assistance.
Summary:
Tables: 1 (0 Ok, 1 failed)
Target DDL: 0 (0 Ok, 0 failed)
Rows read: 3
Rows written: 0 (3 row difference)
Transfer time: 130 ms (0 rows/sec, 0 bytes, 0 bytes/sec)
Since we are getting the above error for the blob database (blob is a reserved word in MySQL and the schema name is not back-quoted in the generated LOAD DATA statement, hence the syntax error), I used the MySQL Workbench tool to migrate the blob database along with its data.
The report is here ..
---------------------------------------------------------------------
MySQL Workbench Migration Wizard Report
Date: Mon Sep 6 21:53:52 2021
Source: PostgreSQL 1.0.0
Target: MySQL 8.0.26
---------------------------------------------------------------------
I. Migration
1. Summary
Number of migrated schemas: 1
1. blob
Source Schema: blob
- Tables: 1
- Triggers: 0
- Views: 0
- Stored Procedures: 0
- Functions: 0
2. Migration Issues
3. Object Creation Issues
4. Migration Details
4.1. Table blob.blob_table (blob_table)
Columns:
- id INT
- image LONGBLOB
Foreign Keys:
Indices:
II. Data Copy
- `blob`.`blob_table`
Succeeded : copied 3 of 3 rows from "blob"."public"."blob_table"
Now check in the target database ..
mysql> select count(1) from blob_table;
+----------+
| count(1) |
+----------+
| 3 |
+----------+
1 row in set (0.00 sec)
mysql> use test ;
Database changed
mysql> call COUNT_ROWS_COUNTS_BY_TABLE('blob');
+---------------------+--------------+
| TABLE_NAME | RECORD_COUNT |
+---------------------+--------------+
| `blob`.`blob_table` | 3 |
+---------------------+--------------+
1 row in set (0.01 sec)
+--------------------------+
| TOTAL_DATABASE_RECORD_CT |
+--------------------------+
| 3 |
+--------------------------+
1 row in set (0.01 sec)
Query OK, 0 rows affected (0.01 sec)
Validation of the row count match using the shell script :-
-----------------------------------------------------------
[root@jump-host Validation_Scripts]# pwd
/home/ec2-user/Final_Scripts/Validation_Scripts
[root@jump-host Validation_Scripts]# ls -ltr
-rwxr-xr-x. 1 root root 2824 Sep 6 16:28 row_count_comparison_final_consider.sh
[root@jump-host Validation_Scripts]# ./row_count_comparison_final_consider.sh blob blob blob
Postgres Schema entered : blob
MySQL Schema/Database entered : blob
Check the .CSV files under /home/ec2-user/output_dir/ for results
[root@jump-host Validation_Scripts]#
[root@jump-host Validation_Scripts]# cd /home/ec2-user/output_dir/
[root@jump-host output_dir]# pwd
/home/ec2-user/output_dir
[root@jump-host output_dir]# ls -ltr
-rw-r--r--. 1 root root 81 Sep 6 16:28 Row_count_blob.csv
-rw-r--r--. 1 root root 81 Sep 6 16:28 Row_count_blob_2021-09-06-1628.csv
[root@jump-host output_dir]# cat Row_count_blob_2021-09-06-1628.csv
SCHEMA_NAME.TABLE_NAME;Postgres_R_Count;MySQL_R_Count;DIFF
blob.blob_table;3;3;0
[root@jump-host output_dir]#
So the blob database migration is complete.
Now for the dvdrental database migration :-
------------------------------------------------
[root@jump-host sqlinesdata33171]# cd dvdrental/
[root@jump-host dvdrental]# pwd
/home/ec2-user/sqlinesdata33171/dvdrental
[root@jump-host dvdrental]# ls -ltr
-rw-r--r--. 1 root root 7439 Sep 6 16:34 sqldata_dvdrental.cnf
[root@jump-host dvdrental]#
[root@jump-host sqlinesdata33171]# pwd
/home/ec2-user/sqlinesdata33171
[root@jump-host sqlinesdata33171]# ls -ltr
-rw-r--r--. 1 root root 7212 Aug 31 12:47 sqldata.cfg
-rw-r--r--. 1 root root 361 Sep 5 20:27 sqlines_twhere.txt
-rw-r--r--. 1 root root 579 Sep 5 20:27 sqlines_tsel.txt
-rw-r--r--. 1 root root 498 Sep 5 20:27 sqlines_tsel_all.txt
-rw-r--r--. 1 root root 192 Sep 5 20:27 sqlines_tmap.txt
-rw-r--r--. 1 root root 273 Sep 5 20:27 sqlines_tf.txt
-rw-r--r--. 1 root root 316 Sep 5 20:27 sqlines_qf.txt
-rw-r--r--. 1 root root 190 Sep 5 20:27 sqlines_dtmap.txt
-rw-r--r--. 1 root root 202 Sep 5 20:27 sqlines_cnsmap.txt
-rw-r--r--. 1 root root 401 Sep 5 20:27 sqlines_cmap.txt
-rw-r--r--. 1 root root 188 Sep 5 20:27 readme.txt
-rw-r--r--. 1 root root 118 Sep 5 20:27 license.txt
-rwxr-xr-x. 1 root root 11670344 Sep 5 20:27 libQtGui.so
-rwxr-xr-x. 1 root root 2701504 Sep 5 20:27 libQtCore.so
-rwxr-xr-x. 1 root root 2051152 Sep 5 20:29 sqldata
drwxr-xr-x. 2 root root 6 Sep 6 01:05 migration_db
drwxr-xr-x. 2 root root 6 Sep 6 01:06 kidsdb
drwxr-xr-x. 2 root root 33 Sep 6 06:17 test_db
drwxr-xr-x. 2 root root 92 Sep 6 15:56 mrsdb
drwxr-xr-x. 2 root root 90 Sep 6 16:15 blob
drwxr-xr-x. 2 root root 35 Sep 6 16:34 dvdrental
[root@jump-host sqlinesdata33171]# ./sqldata -cfg=/home/ec2-user/sqlinesdata33171/dvdrental/sqldata_dvdrental.cnf -log=dvdrental_$(date +"%d-%m-%Y").log &
[1] 6478
[root@jump-host sqlinesdata33171]#
SQLines Data 3.3.171 x86_64 Linux - Database Migration Tool.
Copyright (c) 2021 SQLines. All Rights Reserved.
FOR EVALUATION USE ONLY, MAY SKIP 1 ROW FROM A TABLE
Connecting to databases (158 ms)
PostgreSQL 13.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.4.1 20200928 (Red Hat 8.4.1-1), 64-bit (Ok, 158 ms)
MySQL 8.0.26 MySQL Community Server - GPL x86_64 Linux 1.2.11 (Ok, 28 ms)
Reading the database schema (15 tables, 252 ms)
Transferring database (6 concurrent sessions):
public.actor - Started (1 of 15, session 1)
public.store - Started (2 of 15, session 2)
public.actor - Open cursor (200 rows read, 66 ms, session 1)
public.store - Open cursor (2 rows read, 23 ms, session 2)
public.actor - Data transfer complete (session 1)
Rows read: 200 (3030 rows/sec)
Rows written: 199 (99500 rows/sec, 7.8 KB, 3.8 MB/sec)
Transfer time: 80 ms (66 ms read, 2 ms write)
public.address - Started (3 of 15, session 1)
public.store - Data transfer complete (session 2)
....
....
Read/write row count differences (15 tables):
1. public.actor (200 rows read, 199 rows written, 1 rows difference)
2. public.store (2 rows read, 1 rows written, 1 rows difference)
3. public.category (16 rows read, 15 rows written, 1 rows difference)
4. public.city (600 rows read, 599 rows written, 1 rows difference)
5. public.country (109 rows read, 108 rows written, 1 rows difference)
6. public.address (603 rows read, 602 rows written, 1 rows difference)
7. public.film_category (1000 rows read, 999 rows written, 1 rows difference)
8. public.language (6 rows read, 5 rows written, 1 rows difference)
9. public.customer (599 rows read, 598 rows written, 1 rows difference)
10. public.staff (2 rows read, 1 rows written, 1 rows difference)
11. public.film_actor (5462 rows read, 5461 rows written, 1 rows difference)
12. public.inventory (4581 rows read, 4580 rows written, 1 rows difference)
13. public.film (1000 rows read, 999 rows written, 1 rows difference)
14. public.rental (16044 rows read, 16043 rows written, 1 rows difference)
15. public.payment (14596 rows read, 14595 rows written, 1 rows difference)
Please contact us at support@sqlines.com for any assistance.
Summary:
Tables: 15 (15 Ok, 0 failed)
Target DDL: 0 (0 Ok, 0 failed)
Rows read: 44820
Rows written: 44805 (15 row difference)
Transfer time: 864 ms (51858 rows/sec, 2.4 MB, 2.8 MB/sec)
Logs:
Execution log: dvdrental_06-09-2021.log
DDL SQL statements: sqldata_ddl.sql
^C
[1]+ Done ./sqldata -cfg=/home/ec2-user/sqlinesdata33171/dvdrental/sqldata_dvdrental.cnf -log=dvdrental_$(date +"%d-%m-%Y").log
[root@jump-host sqlinesdata33171]#
As expected with the SQLines evaluation build, a 1-row difference per table is reported. The data transfer itself is complete; now check the row counts of the tables.
[root@jump-host Validation_Scripts]# pwd
/home/ec2-user/Final_Scripts/Validation_Scripts
[root@jump-host Validation_Scripts]# ls -ltr
-rwxr-xr-x. 1 root root 2824 Sep 6 16:28 row_count_comparison_final_consider.sh
drwxr-xr-x. 2 root root 6 Sep 6 16:28 tmp_dir
drwxr-xr-x. 2 root root 6 Sep 6 16:28 output_dir
[root@jump-host Validation_Scripts]# ./row_count_comparison_final_consider.sh dvdrental dvdrental dvdrental
Postgres Schema entered : dvdrental
MySQL Schema/Database entered : dvdrental
Check the .CSV files under /home/ec2-user/output_dir/ for results
[root@jump-host Validation_Scripts]#
[root@jump-host output_dir]# pwd
/home/ec2-user/output_dir
[root@jump-host output_dir]# ls -ltr
total 28
-rw-r--r--. 1 root root 206 Sep 1 15:24 Row_count_test_db_2021-09-01-1524.csv
-rw-r--r--. 1 root root 206 Sep 1 15:30 Row_count_test_db.csv
-rw-r--r--. 1 root root 206 Sep 1 15:30 Row_count_test_db_2021-09-01-1530.csv
-rw-r--r--. 1 root root 81 Sep 6 16:28 Row_count_blob.csv
-rw-r--r--. 1 root root 81 Sep 6 16:28 Row_count_blob_2021-09-06-1628.csv
-rw-r--r--. 1 root root 721 Sep 6 16:42 Row_count_dvdrental.csv
-rw-r--r--. 1 root root 721 Sep 6 16:42 Row_count_dvdrental_2021-09-06-1642.csv
[root@jump-host output_dir]# cat Row_count_dvdrental_2021-09-06-1642.csv
SCHEMA_NAME.TABLE_NAME;Postgres_R_Count;MySQL_R_Count;DIFF
dvdrental.actor;200;199;1
dvdrental.actor_info;200;199;1
dvdrental.address;603;602;1
dvdrental.category;16;15;1
dvdrental.city;600;599;1
dvdrental.country;109;108;1
dvdrental.customer;599;598;1
dvdrental.customer_list;599;596;3
dvdrental.film;1000;999;1
dvdrental.film_actor;5462;5461;1
dvdrental.film_category;1000;999;1
dvdrental.film_list;997;1;996
dvdrental.inventory;4581;4580;1
dvdrental.language;6;5;1
dvdrental.nicer_but_slower_film_list;997;1;996
dvdrental.payment;14596;14595;1
dvdrental.rental;16044;16043;1
dvdrental.sales_by_film_category;16;15;1
dvdrental.sales_by_store;2;1;1
dvdrental.staff;2;1;1
dvdrental.staff_list;2;1;1
dvdrental.store;2;1;1
[root@jump-host output_dir]#
Migration is completed for dvdrental. Note that actor_info, customer_list, film_list, nicer_but_slower_film_list, sales_by_film_category, sales_by_store and staff_list are views in the dvdrental sample database, so their counts depend on the underlying base tables and on how the view definitions were translated, not on rows copied by SQLines.
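To see exactly which row the evaluation build skipped in a given table, the primary-key values can be diffed between the two sides. A minimal sketch for public.actor (same hosts and credentials as in the configuration files above; the remaining tables can be checked the same way):
# Dump the primary keys from both databases and diff them; the single line reported
# by diff is the actor_id that was not written to MySQL.
psql -h 3.229.186.86 -U postgres -d dvdrental -Atc "SELECT actor_id FROM public.actor ORDER BY actor_id" > /tmp/pg_actor_ids.txt
mysql -u root -p -h 54.242.48.86 -N -e "SELECT actor_id FROM dvdrental.actor ORDER BY actor_id" > /tmp/mysql_actor_ids.txt
diff /tmp/pg_actor_ids.txt /tmp/mysql_actor_ids.txt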
Next, migration of the migration_db database :-
--------------------------------------------------
[root@jump-host migration_db]# pwd
/home/ec2-user/sqlinesdata33171/migration_db
[root@jump-host migration_db]# cat sqldata_migration_db.cnf
-- SQLines Data options (contact us at support@sqlines.com to request a new option)
-sd=pg,postgres/postgres@3.229.186.86:5432,migration_db
-td=mysql,root/Root@123@54.242.48.86,migration_db
-t=*.*
-ss=6
-smap=public:migration_db
-out=/home/ec2-user/sqlinesdata33171/migration_db/
-log=migration_db.log
-- DDL Options
--------------
-- Set yes to to migrate table definitions, or no to not migrate
-- DDL statements executed depends on -topt option (CREATE; DROP and CREATE TABLE, or TRUNCATE TABLE)
-ddl_tables=no
-- Set yes to to migrate constraints (this is default), or no to not migrate
-constraints=no
-- Set yes to to migrate indexes (this is default), or no to not migrate
-indexes=no
-- Data Options
---------------
-- Set yes to to migrate data (this is default), or no to not migrate
-data=yes
[root@jump-host migration_db]# pwd
/home/ec2-user/sqlinesdata33171/migration_db
[root@jump-host migration_db]# cd ..
[root@jump-host sqlinesdata33171]# ./sqldata -cfg=/home/ec2-user/sqlinesdata33171/migration_db/sqldata_migration_db.cnf -log=migration_db_$(date +"%d-%m-%Y").log &
[1] 1871
[root@jump-host sqlinesdata33171]#
SQLines Data 3.3.171 x86_64 Linux - Database Migration Tool.
Copyright (c) 2021 SQLines. All Rights Reserved.
FOR EVALUATION USE ONLY, MAY SKIP 1 ROW FROM A TABLE
Connecting to databases (117 ms)
PostgreSQL 13.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.4.1 20200928 (Red Hat 8.4.1-1), 64-bit (Ok, 117 ms)
MySQL 8.0.26 MySQL Community Server - GPL x86_64 Linux 1.2.11 (Ok, 60 ms)
Reading the database schema (2 tables, 20 ms)
Transferring database (2 concurrent sessions):
public.users - Started (1 of 2, session 1)
public.employee - Started (2 of 2, session 2)
public.users - Open cursor (4186 rows read, 126 ms, session 1)
public.employee - Open cursor (246723 rows read, 723 ms, session 2)
public.users - Data transfer complete (session 1)
Rows read: 110000 (679012 rows/sec)
Rows written: 109999 (415091 rows/sec, 8.2 MB, 30.9 MB/sec)
Transfer time: 906 ms (162 ms read, 265 ms write)
public.employee - In progress (session 2)
Rows read: 986892 (1206469 rows/sec)
Rows written: 740168 (299663 rows/sec, 19.6 MB, 7.9 MB/sec)
Transfer time: 3.3 sec (818 ms read, 2.5 sec write)
public.employee - Data transfer complete (session 2)
Rows read: 1000000 (1221001 rows/sec)
Rows written: 999999 (233590 rows/sec, 26.5 MB, 6.2 MB/sec)
Transfer time: 5.2 sec (819 ms read, 4.3 sec write)
Read/write row count differences (2 tables):
1. public.users (110000 rows read, 109999 rows written, 1 rows difference)
2. public.employee (1000000 rows read, 999999 rows written, 1 rows difference)
Please contact us at support@sqlines.com for any assistance.
Summary:
Tables: 2 (2 Ok, 0 failed)
Target DDL: 0 (0 Ok, 0 failed)
Rows read: 1110000
Rows written: 1109998 (2 row difference)
Transfer time: 5.3 sec (210786 rows/sec, 34.7 MB, 6.6 MB/sec)
Logs:
Execution log: migration_db_07-09-2021.log
DDL SQL statements: sqldata_ddl.sql
[1]+ Done ./sqldata -cfg=/home/ec2-user/sqlinesdata33171/migration_db/sqldata_migration_db.cnf -log=migration_db_$(date +"%d-%m-%Y").log
[root@jump-host sqlinesdata33171]#
Row count validation :-
-----------------------------
[root@jump-host ~]# mysql -u root -p -h 54.242.48.86 -e "call test.COUNT_ROWS_COUNTS_BY_TABLE('migration_db');"
Enter password:
+---------------------------+--------------+
| TABLE_NAME | RECORD_COUNT |
+---------------------------+--------------+
| `migration_db`.`employee` | 999999 |
| `migration_db`.`users` | 109999 |
+---------------------------+--------------+
+--------------------------+
| TOTAL_DATABASE_RECORD_CT |
+--------------------------+
| 1109998 |
+--------------------------+
[root@jump-host ~]#
As noted above, a 1-row difference per table is expected when using the SQLines evaluation build.
[root@jump-host Validation_Scripts]# pwd
/home/ec2-user/Final_Scripts/Validation_Scripts
[root@jump-host Validation_Scripts]# ls -ltr
-rwxr-xr-x. 1 root root 2824 Sep 6 16:28 row_count_comparison_final_consider.sh
drwxr-xr-x. 2 root root 6 Sep 6 16:28 tmp_dir
drwxr-xr-x. 2 root root 6 Sep 6 16:28 output_dir
[root@jump-host Validation_Scripts]# ./row_count_comparison_final_consider.sh migration_db migration_db migration_db
Postgres Schema entered : migration_db
MySQL Schema/Database entered : migration_db
Check the .CSV files under /home/ec2-user/output_dir/ for results
[root@jump-host Validation_Scripts]#
[root@jump-host output_dir]# pwd
/home/ec2-user/output_dir
[root@jump-host output_dir]# ls -ltr
-rw-r--r--. 1 root root 133 Sep 7 01:08 Row_count_migration_db.csv
-rw-r--r--. 1 root root 133 Sep 7 01:08 Row_count_migration_db_2021-09-07-0108.csv
[root@jump-host output_dir]# cat Row_count_migration_db_2021-09-07-0108.csv
SCHEMA_NAME.TABLE_NAME;Postgres_R_Count;MySQL_R_Count;DIFF
migration_db.employee;1000000;999999;1
migration_db.users;110000;109999;1
[root@jump-host output_dir]#
Done with the migration_db data copy via SQLines.
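When only a known row or two is missing, it can also be topped up by hand instead of re-copying the whole table. A purely hypothetical sketch for public.users, assuming the key diff (as shown earlier) identified id = 110000 as the skipped row and that local_infile is enabled on the target MySQL server:
# Export the single missing row from PostgreSQL as CSV ...
psql -h 3.229.186.86 -U postgres -d migration_db -c "\copy (SELECT * FROM public.users WHERE id = 110000) TO '/tmp/users_missing.csv' WITH (FORMAT csv)"
# ... and load it into MySQL (note: empty CSV fields are loaded as '' rather than NULL).
mysql --local-infile=1 -u root -p -h 54.242.48.86 migration_db -e "LOAD DATA LOCAL INFILE '/tmp/users_missing.csv' INTO TABLE users FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' LINES TERMINATED BY '\n'"
In this document, however, the whole schema was simply re-copied with Workbench, as shown next.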
To avoid the 1-row-per-table difference, the migration_db data was re-copied with the MySQL Workbench Migration Wizard; its report is below.
---------------------------------------------------------------------
MySQL Workbench Migration Wizard Report
Date: Tue Sep 7 08:31:03 2021
Source: PostgreSQL 1.0.0
Target: MySQL 8.0.26
---------------------------------------------------------------------
I. Migration
1. Summary
Number of migrated schemas: 1
1. migration_db
Source Schema: migration_db
- Tables: 2
- Triggers: 0
- Views: 0
- Stored Procedures: 0
- Functions: 0
2. Migration Issues
3. Object Creation Issues
4. Migration Details
4.1. Table migration_db.employee (employee)
Columns:
- id BIGINT
- name VARCHAR(120)
- salary BIGINT
Foreign Keys:
Indices:
- emp_pk (id)
4.2. Table migration_db.users (users)
Columns:
- id BIGINT
- hash_firstname LONGTEXT
- hash_lastname LONGTEXT
- gender VARCHAR(6)
Foreign Keys:
Indices:
- users_pkey (id)
II. Data Copy
- `migration_db`.`employee`
Succeeded : copied 1000000 of 1000000 rows from "migration_db"."public"."employee"
- `migration_db`.`users`
Succeeded : copied 110000 of 110000 rows from "migration_db"."public"."users"
Row count :-
-----------------
[root@jump-host ~]# mysql -u root -p -h 54.242.48.86 -e "call test.COUNT_ROWS_COUNTS_BY_TABLE('migration_db');"
Enter password:
+---------------------------+--------------+
| TABLE_NAME | RECORD_COUNT |
+---------------------------+--------------+
| `migration_db`.`employee` | 1000000 |
| `migration_db`.`users` | 110000 |
+---------------------------+--------------+
+--------------------------+
| TOTAL_DATABASE_RECORD_CT |
+--------------------------+
| 1110000 |
+--------------------------+
[root@jump-host ~]#
[root@jump-host Validation_Scripts]# pwd
/home/ec2-user/Final_Scripts/Validation_Scripts
[root@jump-host Validation_Scripts]# ls -ltr
drwxr-xr-x. 2 root root 6 Sep 6 16:28 tmp_dir
drwxr-xr-x. 2 root root 6 Sep 6 16:28 output_dir
-rwxr-xr-x. 1 root root 2821 Sep 7 01:07 row_count_comparison_final_consider.sh
[root@jump-host Validation_Scripts]# ./row_count_comparison_final_consider.sh migration_db migration_db migration_db
Postgres Schema entered : migration_db
MySQL Schema/Database entered : migration_db
Check the .CSV files under /home/ec2-user/output_dir/ for results
[root@jump-host Validation_Scripts]#
[root@jump-host output_dir]# pwd
/home/ec2-user/output_dir
[root@jump-host output_dir]# ls -ltr
-rw-r--r--. 1 root root 133 Sep 7 01:08 Row_count_migration_db_2021-09-07-0108.csv
-rw-r--r--. 1 root root 204 Sep 7 01:29 Row_count_test_db.csv
-rw-r--r--. 1 root root 204 Sep 7 01:29 Row_count_test_db_2021-09-07-0129.csv
-rw-r--r--. 1 root root 82 Sep 7 01:41 Row_count_kidsdb.csv
-rw-r--r--. 1 root root 82 Sep 7 01:41 Row_count_kidsdb_2021-09-07-0141.csv
-rw-r--r--. 1 root root 134 Sep 7 03:03 Row_count_migration_db.csv
-rw-r--r--. 1 root root 134 Sep 7 03:03 Row_count_migration_db_2021-09-07-0303.csv
[root@jump-host output_dir]# cat Row_count_migration_db_2021-09-07-0303.csv
SCHEMA_NAME.TABLE_NAME;Postgres_R_Count;MySQL_R_Count;DIFF
migration_db.employee;1000000;1000000;0
migration_db.users;110000;110000;0
[root@jump-host output_dir]#
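Besides matching row counts, a quick aggregate spot-check gives extra confidence that the data itself survived the copy. A minimal sketch for migration_db.employee (both commands should print the same values; only the field separators differ):
psql -h 3.229.186.86 -U postgres -d migration_db -Atc "SELECT count(*), min(id), max(id), sum(salary) FROM public.employee"
mysql -u root -p -h 54.242.48.86 -N -e "SELECT count(*), min(id), max(id), sum(salary) FROM migration_db.employee"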
Next, migration of the test_db database :-
--------------------------------------------------
[root@jump-host test_db]# pwd
/home/ec2-user/sqlinesdata33171/test_db
[root@jump-host test_db]# ls -ltr
-rw-r--r--. 1 root root 7427 Sep 7 01:16 sqldata_test_db.cnf
[root@jump-host test_db]# cat sqldata_test_db.cnf
-- SQLines Data options (contact us at support@sqlines.com to request a new option)
--- Please note that all configuration options can be specified in the command line as well.
-sd=pg,postgres/postgres@3.229.186.86:5432,test_db
-td=mysql,root/Root@123@54.242.48.86,test_db
-t=*.*
-ss=6
-smap=public:test_db
-out=/home/ec2-user/sqlinesdata33171/test_db/
-log=test_db.log
-- DDL Options
--------------
-- Set yes to to migrate table definitions, or no to not migrate
-- DDL statements executed depends on -topt option (CREATE; DROP and CREATE TABLE, or TRUNCATE TABLE)
-ddl_tables=no
-- Set yes to to migrate constraints (this is default), or no to not migrate
-- Please note that if -topt=truncate or -topt=none is specified, constraints are not migrated
-constraints=no
-- Set yes to to migrate indexes (this is default), or no to not migrate
-- Please note that if -topt=truncate or -topt=none is specified, indexes are not migrated
-indexes=no
-- Data Options
---------------
-- Set yes to to migrate data (this is default), or no to not migrate
-data=yes
[root@jump-host sqlinesdata33171]# pwd
/home/ec2-user/sqlinesdata33171
[root@jump-host sqlinesdata33171]# ls -ltr
-rw-r--r--. 1 root root 7212 Aug 31 12:47 sqldata.cfg
-rw-r--r--. 1 root root 361 Sep 5 20:27 sqlines_twhere.txt
-rw-r--r--. 1 root root 579 Sep 5 20:27 sqlines_tsel.txt
-rw-r--r--. 1 root root 498 Sep 5 20:27 sqlines_tsel_all.txt
-rw-r--r--. 1 root root 192 Sep 5 20:27 sqlines_tmap.txt
-rw-r--r--. 1 root root 273 Sep 5 20:27 sqlines_tf.txt
-rw-r--r--. 1 root root 316 Sep 5 20:27 sqlines_qf.txt
-rw-r--r--. 1 root root 190 Sep 5 20:27 sqlines_dtmap.txt
-rw-r--r--. 1 root root 202 Sep 5 20:27 sqlines_cnsmap.txt
-rw-r--r--. 1 root root 401 Sep 5 20:27 sqlines_cmap.txt
-rw-r--r--. 1 root root 188 Sep 5 20:27 readme.txt
-rw-r--r--. 1 root root 118 Sep 5 20:27 license.txt
-rwxr-xr-x. 1 root root 11670344 Sep 5 20:27 libQtGui.so
-rwxr-xr-x. 1 root root 2701504 Sep 5 20:27 libQtCore.so
-rwxr-xr-x. 1 root root 2051152 Sep 5 20:29 sqldata
drwxr-xr-x. 2 root root 6 Sep 6 01:06 kidsdb
drwxr-xr-x. 2 root root 92 Sep 6 15:56 mrsdb
drwxr-xr-x. 2 root root 90 Sep 6 16:15 blob
drwxr-xr-x. 2 root root 67 Sep 6 16:40 dvdrental
drwxr-xr-x. 2 root root 73 Sep 7 01:03 migration_db
drwxr-xr-x. 2 root root 96 Sep 7 01:19 test_db
[root@jump-host sqlinesdata33171]# ./sqldata -cfg=/home/ec2-user/sqlinesdata33171/test_db/sqldata_test_db.cnf -log=test_db_$(date +"%d-%m-%Y").log &
[1] 2522
[root@jump-host sqlinesdata33171]#
SQLines Data 3.3.171 x86_64 Linux - Database Migration Tool.
Copyright (c) 2021 SQLines. All Rights Reserved.
FOR EVALUATION USE ONLY, MAY SKIP 1 ROW FROM A TABLE
Connecting to databases (15 ms)
PostgreSQL 13.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.4.1 20200928 (Red Hat 8.4.1-1), 64-bit (Ok, 11 ms)
MySQL 8.0.26 MySQL Community Server - GPL x86_64 Linux 1.2.11 (Ok, 15 ms)
Reading the database schema (6 tables, 3 ms)
Transferring database (6 concurrent sessions):
public.users - Started (1 of 6, session 1)
public.posts - Started (2 of 6, session 2)
public.posts - Open cursor (50 rows read, 6 ms, session 2)
SHOW WARNINGS: LOAD DATA LOCAL INFILE 'sqldata.in_memory' IGNORE INTO TABLE test_db.`posts` CHARACTER SET utf8mb4
Level: Warning; Code: 1062; Message: Duplicate entry '2' for key 'posts.PRIMARY'
Level: Warning; Code: 1062; Message: Duplicate entry '3' for key 'posts.PRIMARY'
...
...
Read/write row count differences (6 tables):
1. public.posts (50 rows read, 49 rows written, 1 rows difference)
2. public.comments (10 rows read, 9 rows written, 1 rows difference)
3. public.kids_audit (2 rows read, 1 rows written, 1 rows difference)
4. public.genres (5 rows read, 4 rows written, 1 rows difference)
5. public.kids_name (6 rows read, 5 rows written, 1 rows difference)
6. public.users (100000 rows read, 99999 rows written, 1 rows difference)
Please contact us at support@sqlines.com for any assistance.
Summary:
Tables: 6 (6 Ok, 0 failed)
Target DDL: 0 (0 Ok, 0 failed)
Rows read: 100073
Rows written: 100067 (6 row difference)
Transfer time: 1.2 sec (82157 rows/sec, 2.7 MB, 2.2 MB/sec)
Logs:
Execution log: test_db_07-09-2021.log
DDL SQL statements: sqldata_ddl.sql
[1]+ Done ./sqldata -cfg=/home/ec2-user/sqlinesdata33171/test_db/sqldata_test_db.cnf -log=test_db_$(date +"%d-%m-%Y").log
[root@jump-host sqlinesdata33171]#
Note: the 1062 'Duplicate entry' warnings above indicate that the target test_db tables already contained rows with those primary keys from an earlier load, so LOAD DATA ... IGNORE skipped them rather than failing.
Row comparison :-
--------------------
[root@jump-host Validation_Scripts]# pwd
/home/ec2-user/Final_Scripts/Validation_Scripts
[root@jump-host Validation_Scripts]# ls -ltr
drwxr-xr-x. 2 root root 6 Sep 6 16:28 tmp_dir
drwxr-xr-x. 2 root root 6 Sep 6 16:28 output_dir
-rwxr-xr-x. 1 root root 2821 Sep 7 01:07 row_count_comparison_final_consider.sh
[root@jump-host Validation_Scripts]# ./row_count_comparison_final_consider.sh test_db test_db test_db
Postgres Schema entered : test_db
MySQL Schema/Database entered : test_db
Check the .CSV files under /home/ec2-user/output_dir/ for results
[root@jump-host Validation_Scripts]#
[root@jump-host output_dir]# pwd
/home/ec2-user/output_dir
[root@jump-host output_dir]# ls -ltr
-rw-r--r--. 1 root root 133 Sep 7 01:08 Row_count_migration_db.csv
-rw-r--r--. 1 root root 133 Sep 7 01:08 Row_count_migration_db_2021-09-07-0108.csv
-rw-r--r--. 1 root root 204 Sep 7 01:29 Row_count_test_db.csv
-rw-r--r--. 1 root root 204 Sep 7 01:29 Row_count_test_db_2021-09-07-0129.csv
[root@jump-host output_dir]# cat Row_count_test_db_2021-09-07-0129.csv
SCHEMA_NAME.TABLE_NAME;Postgres_R_Count;MySQL_R_Count;DIFF
test_db.comments;10;9;1
test_db.genres;5;4;1
test_db.kids_audit;2;1;1
test_db.kids_name;6;5;1
test_db.posts;50;49;1
test_db.users;100000;99999;1
[root@jump-host output_dir]#
[root@jump-host ~]# mysql -u root -p -h 54.242.48.86 -e "call test.COUNT_ROWS_COUNTS_BY_TABLE('test_db');"
Enter password:
+------------------------+--------------+
| TABLE_NAME | RECORD_COUNT |
+------------------------+--------------+
| `test_db`.`comments` | 9 |
| `test_db`.`genres` | 4 |
| `test_db`.`kids_audit` | 1 |
| `test_db`.`kids_name` | 5 |
| `test_db`.`posts` | 49 |
| `test_db`.`users` | 99999 |
+------------------------+--------------+
+--------------------------+
| TOTAL_DATABASE_RECORD_CT |
+--------------------------+
| 100067 |
+--------------------------+
[root@jump-host ~]#
test_db migration is completed.
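The duplicate-key warnings seen in the test_db run above are a sign that the target tables already held rows from an earlier load. Before re-running a copy into populated tables, the target schema can be truncated first (this is what Step - 14 prepares). A minimal sketch, assuming the same MySQL root access used above:
# Generate TRUNCATE statements for every base table in test_db and apply them
# (prepend SET FOREIGN_KEY_CHECKS=0; to the file if foreign keys exist in the target).
mysql -u root -p -h 54.242.48.86 -N -e "SELECT CONCAT('TRUNCATE TABLE ', TABLE_SCHEMA, '.', TABLE_NAME, ';') FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'test_db' AND TABLE_TYPE = 'BASE TABLE'" > /tmp/truncate_test_db.sql
mysql -u root -p -h 54.242.48.86 test_db < /tmp/truncate_test_db.sql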
Next, migration of the kidsdb database :-
--------------------------------------------------
[root@jump-host kidsdb]# pwd
/home/ec2-user/sqlinesdata33171/kidsdb
[root@jump-host kidsdb]# ls -ltr
-rw-r--r--. 1 root root 5078 Sep 7 01:35 sqldata_kidsdb.cnf
[root@jump-host kidsdb]# cat sqldata_kidsdb.cnf
-- SQLines Data options (contact us at support@sqlines.com to request a new option)
-sd=pg,postgres/postgres@3.229.186.86:5432,kidsdb
-td=mysql,root/Root@123@54.242.48.86,kidsdb
-t=*.*
-ss=6
-smap=public:kidsdb
-out=/home/ec2-user/sqlinesdata33171/kids_db/
-log=kidsdb.log
-- DDL Options
--------------
-- Set yes to to migrate table definitions, or no to not migrate
-- DDL statements executed depends on -topt option (CREATE; DROP and CREATE TABLE, or TRUNCATE TABLE)
-ddl_tables=no
-- Set yes to to migrate constraints (this is default), or no to not migrate
-constraints=no
-- Set yes to to migrate indexes (this is default), or no to not migrate
-indexes=no
-- Data Options
---------------
-- Set yes to to migrate data (this is default), or no to not migrate
-data=yes
[root@jump-host sqlinesdata33171]# pwd
/home/ec2-user/sqlinesdata33171
[root@jump-host sqlinesdata33171]# ls -ltr
-rw-r--r--. 1 root root 7212 Aug 31 12:47 sqldata.cfg
-rw-r--r--. 1 root root 361 Sep 5 20:27 sqlines_twhere.txt
-rw-r--r--. 1 root root 579 Sep 5 20:27 sqlines_tsel.txt
-rw-r--r--. 1 root root 498 Sep 5 20:27 sqlines_tsel_all.txt
-rw-r--r--. 1 root root 192 Sep 5 20:27 sqlines_tmap.txt
-rw-r--r--. 1 root root 273 Sep 5 20:27 sqlines_tf.txt
-rw-r--r--. 1 root root 316 Sep 5 20:27 sqlines_qf.txt
-rw-r--r--. 1 root root 190 Sep 5 20:27 sqlines_dtmap.txt
-rw-r--r--. 1 root root 202 Sep 5 20:27 sqlines_cnsmap.txt
-rw-r--r--. 1 root root 401 Sep 5 20:27 sqlines_cmap.txt
-rw-r--r--. 1 root root 188 Sep 5 20:27 readme.txt
-rw-r--r--. 1 root root 118 Sep 5 20:27 license.txt
-rwxr-xr-x. 1 root root 11670344 Sep 5 20:27 libQtGui.so
-rwxr-xr-x. 1 root root 2701504 Sep 5 20:27 libQtCore.so
-rwxr-xr-x. 1 root root 2051152 Sep 5 20:29 sqldata
drwxr-xr-x. 2 root root 92 Sep 6 15:56 mrsdb
drwxr-xr-x. 2 root root 90 Sep 6 16:15 blob
drwxr-xr-x. 2 root root 67 Sep 6 16:40 dvdrental
drwxr-xr-x. 2 root root 73 Sep 7 01:03 migration_db
drwxr-xr-x. 2 root root 96 Sep 7 01:19 test_db
drwxr-xr-x. 2 root root 32 Sep 7 01:35 kidsdb
[root@jump-host sqlinesdata33171]# ./sqldata -cfg=/home/ec2-user/sqlinesdata33171/kidsdb/sqldata_kidsdb.cnf -log=kidsdb_$(date +"%d-%m-%Y").log &
[1] 2736
[root@jump-host sqlinesdata33171]#
SQLines Data 3.3.171 x86_64 Linux - Database Migration Tool.
Copyright (c) 2021 SQLines. All Rights Reserved.
FOR EVALUATION USE ONLY, MAY SKIP 1 ROW FROM A TABLE
Connecting to databases (73 ms)
PostgreSQL 13.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.4.1 20200928 (Red Hat 8.4.1-1), 64-bit (Ok, 73 ms)
MySQL 8.0.26 MySQL Community Server - GPL x86_64 Linux 1.2.11 (Ok, 27 ms)
Reading the database schema (1 table, 20 ms)
Transferring database (1 session):
public.employees - Started (1 of 1, session 1)
public.employees - Open cursor (1 row read, 3 ms, session 1)
public.employees - Data transfer complete (session 1)
Rows read: 1 (333 rows/sec)
Rows written: 0 (0 rows/sec, 0 bytes, 0 bytes/sec)
Transfer time: 7 ms (3 ms read, 2 ms write)
Read/write row count differences (1 table):
1. public.employees (1 rows read, 0 rows written, 1 rows difference)
Please contact us at support@sqlines.com for any assistance.
Summary:
Tables: 1 (1 Ok, 0 failed)
Target DDL: 0 (0 Ok, 0 failed)
Rows read: 1
Rows written: 0 (1 row difference)
Transfer time: 7 ms (0 rows/sec, 0 bytes, 0 bytes/sec)
Logs:
Execution log: kidsdb_07-09-2021.log
DDL SQL statements: sqldata_ddl.sql
[1]+ Done ./sqldata -cfg=/home/ec2-user/sqlinesdata33171/kidsdb/sqldata_kidsdb.cnf -log=kidsdb_$(date +"%d-%m-%Y").log
[root@jump-host sqlinesdata33171]#
The SQLines run finished, but because the evaluation build skips one row per table, the single row in kidsdb.employees was not written and the target table is still empty; see the validation below and the Workbench re-copy that follows.
Row count validation :-
--------------------------
[root@jump-host Validation_Scripts]# pwd
/home/ec2-user/Final_Scripts/Validation_Scripts
[root@jump-host Validation_Scripts]# ls -ltr
drwxr-xr-x. 2 root root 6 Sep 6 16:28 tmp_dir
drwxr-xr-x. 2 root root 6 Sep 6 16:28 output_dir
-rwxr-xr-x. 1 root root 2821 Sep 7 01:07 row_count_comparison_final_consider.sh
[root@jump-host Validation_Scripts]# ./row_count_comparison_final_consider.sh kidsdb kidsdb kidsdb
Postgres Schema entered : kidsdb
MySQL Schema/Database entered : kidsdb
Check the .CSV files under /home/ec2-user/output_dir/ for results
[root@jump-host Validation_Scripts]#
[root@jump-host Validation_Scripts]# cd /home/ec2-user/output_dir/
[root@jump-host output_dir]# pwd
/home/ec2-user/output_dir
[root@jump-host output_dir]# ls -ltr
-rw-r--r--. 1 root root 133 Sep 7 01:08 Row_count_migration_db.csv
-rw-r--r--. 1 root root 133 Sep 7 01:08 Row_count_migration_db_2021-09-07-0108.csv
-rw-r--r--. 1 root root 204 Sep 7 01:29 Row_count_test_db.csv
-rw-r--r--. 1 root root 204 Sep 7 01:29 Row_count_test_db_2021-09-07-0129.csv
-rw-r--r--. 1 root root 82 Sep 7 01:41 Row_count_kidsdb.csv
-rw-r--r--. 1 root root 82 Sep 7 01:41 Row_count_kidsdb_2021-09-07-0141.csv
[root@jump-host output_dir]# cat Row_count_kidsdb_2021-09-07-0141.csv
SCHEMA_NAME.TABLE_NAME;Postgres_R_Count;MySQL_R_Count;DIFF
kidsdb.employees;1;0;1
[root@jump-host output_dir]#
To avoid the 1-row difference, kidsdb was re-copied with the MySQL Workbench Migration Wizard; its report is below.
---------------------------------------------------------------------
MySQL Workbench Migration Wizard Report
Date: Tue Sep 7 08:37:51 2021
Source: PostgreSQL 1.0.0
Target: MySQL 8.0.26
---------------------------------------------------------------------
I. Migration
1. Summary
Number of migrated schemas: 1
1. kidsdb
Source Schema: kidsdb
- Tables: 1
- Triggers: 0
- Views: 0
- Stored Procedures: 0
- Functions: 0
2. Migration Issues
3. Object Creation Issues
4. Migration Details
4.1. Table kidsdb.employees (employees)
Columns:
- id INT
- name VARCHAR(100)
- dateofbirth DATE
- city VARCHAR(100)
- designation VARCHAR(100)
- joiningdate DATE
Foreign Keys:
Indices:
II. Data Copy
- `kidsdb`.`employees`
Succeeded : copied 1 of 1 rows from "kidsdb"."public"."employees"
Row count :-
----------------------
[root@jump-host ~]# mysql -u root -p -h 54.242.48.86 -e "call test.COUNT_ROWS_COUNTS_BY_TABLE('kidsdb');"
Enter password:
+----------------------+--------------+
| TABLE_NAME | RECORD_COUNT |
+----------------------+--------------+
| `kidsdb`.`employees` | 1 |
+----------------------+--------------+
+--------------------------+
| TOTAL_DATABASE_RECORD_CT |
+--------------------------+
| 1 |
+--------------------------+
[root@jump-host ~]#
[root@jump-host Validation_Scripts]# pwd
/home/ec2-user/Final_Scripts/Validation_Scripts
[root@jump-host Validation_Scripts]# ls -ltr
drwxr-xr-x. 2 root root 6 Sep 6 16:28 tmp_dir
drwxr-xr-x. 2 root root 6 Sep 6 16:28 output_dir
-rwxr-xr-x. 1 root root 2821 Sep 7 01:07 row_count_comparison_final_consider.sh
[root@jump-host Validation_Scripts]# ./row_count_comparison_final_consider.sh kidsdb kidsdb kidsdb
Postgres Schema entered : kidsdb
MySQL Schema/Database entered : kidsdb
Check the .CSV files under /home/ec2-user/output_dir/ for results
[root@jump-host Validation_Scripts]#
[root@jump-host output_dir]# pwd
/home/ec2-user/output_dir
[root@jump-host output_dir]# ls -ltr
-rw-r--r--. 1 root root 133 Sep 7 01:08 Row_count_migration_db_2021-09-07-0108.csv
-rw-r--r--. 1 root root 204 Sep 7 01:29 Row_count_test_db.csv
-rw-r--r--. 1 root root 204 Sep 7 01:29 Row_count_test_db_2021-09-07-0129.csv
-rw-r--r--. 1 root root 82 Sep 7 01:41 Row_count_kidsdb_2021-09-07-0141.csv
-rw-r--r--. 1 root root 134 Sep 7 03:03 Row_count_migration_db.csv
-rw-r--r--. 1 root root 134 Sep 7 03:03 Row_count_migration_db_2021-09-07-0303.csv
-rw-r--r--. 1 root root 82 Sep 7 03:09 Row_count_kidsdb.csv
-rw-r--r--. 1 root root 82 Sep 7 03:09 Row_count_kidsdb_2021-09-07-0309.csv
[root@jump-host output_dir]# cat Row_count_kidsdb_2021-09-07-0309.csv
SCHEMA_NAME.TABLE_NAME;Postgres_R_Count;MySQL_R_Count;DIFF
kidsdb.employees;1;1;0
[root@jump-host output_dir]#
---------------------------------------------------------------------
Step - 17 Stored Procedure for Tables Row count in MySQL Database
---------------------------------------------------------------------
---------------------------------------------------------------
Create the procedure that returns the row count of every table in a given database ...
---------------------------------------------------------------
mysql> use test;
Database changed
mysql> DELIMITER $$
mysql> CREATE PROCEDURE `COUNT_ROWS_COUNTS_BY_TABLE`(dbName varchar(128))
BEGIN
DECLARE done INT DEFAULT 0;
DECLARE TNAME CHAR(255);
DECLARE table_names CURSOR for
SELECT CONCAT("`", TABLE_SCHEMA, "`.`", table_name, "`") FROM INFORMATION_SCHEMA.TABLES where TABLE_SCHEMA = dbName;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
OPEN table_names;
DROP TABLE IF EXISTS TABLES_ROWS_COUNTS;
CREATE TEMPORARY TABLE TABLES_ROWS_COUNTS
(
TABLE_NAME CHAR(255),
RECORD_COUNT INT
) ENGINE = MEMORY;
WHILE done = 0 DO
FETCH NEXT FROM table_names INTO TNAME;
IF done = 0 THEN
SET @SQL_TXT = CONCAT("INSERT INTO TABLES_ROWS_COUNTS(SELECT '" , TNAME , "' AS TABLE_NAME, COUNT(*) AS RECORD_COUNT FROM ", TNAME, ")");
PREPARE stmt_name FROM @SQL_TXT;
EXECUTE stmt_name;
DEALLOCATE PREPARE stmt_name;
END IF;
END WHILE;
CLOSE table_names;
SELECT * FROM TABLES_ROWS_COUNTS;
SELECT SUM(RECORD_COUNT) AS TOTAL_DATABASE_RECORD_CT FROM TABLES_ROWS_COUNTS;
END$$
Query OK, 0 rows affected (0.01 sec)
mysql> DELIMITER ;
mysql> show procedure status LIKE 'COUNT_ROWS_COUNTS_BY_TABLE';
+------+----------------------------+-----------+----------------+---
| Db | Name | Type | Definer | Modified | Created | Security_type | Comment | character_set_client | collation_connection | Database Collation |
+------+----------------------------+-----------+----------------+---
| test | COUNT_ROWS_COUNTS_BY_TABLE | PROCEDURE | root@localhost | 2021-09-06 13:29:12 | 2021-09-06 13:29:12 | DEFINER | | utf8mb4 | utf8mb4_0900_ai_ci | utf8mb4_0900_ai_ci |
+------+----------------------------+-----------+----------------+---
1 row in set (0.01 sec)
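The procedure above covers the MySQL side; for the PostgreSQL side of the comparison, exact per-table counts can be collected with a small psql loop from the jump host. A minimal sketch (assumes a .pgpass entry or PGPASSWORD for the postgres user on the source host), shown here for mrsdb:
# Print one SCHEMA.TABLE;COUNT line per base table in the source database.
export PGHOST=3.229.186.86 PGUSER=postgres PGDATABASE=mrsdb
psql -Atc "SELECT table_schema || '.' || table_name FROM information_schema.tables WHERE table_schema = 'public' AND table_type = 'BASE TABLE' ORDER BY 1" |
while read -r tbl; do
  echo "${tbl};$(psql -Atc "SELECT count(*) FROM ${tbl}")"
done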
mysql> use test;
Database changed
mysql> call COUNT_ROWS_COUNTS_BY_TABLE('mrsdb');
+-------------------------------------+--------------+
| TABLE_NAME | RECORD_COUNT |
+-------------------------------------+--------------+
| `mrsdb`.`mri_container` | 6979 |
| `mrsdb`.`mri_cp_s_bg_attr_def` | 0 |
| `mrsdb`.`mri_cr_folder` | 6048 |
| `mrsdb`.`mri_cr_p_app_project` | 7 |
| `mrsdb`.`mri_cr_p_app_spec` | 0 |
| `mrsdb`.`mri_cr_p_application` | 0 |
| `mrsdb`.`mri_cr_p_data_obj_config` | 4 |
| `mrsdb`.`mri_cr_p_mapping_feature` | 0 |
| `mrsdb`.`mri_cr_p_option_group` | 23 |
| `mrsdb`.`mri_cr_p_r_app_config` | 5 |
| `mrsdb`.`mri_cr_p_r_application` | 2 |
| `mrsdb`.`mri_cr_p_r_canon_data_mdl` | 1 |
| `mrsdb`.`mri_cr_p_r_map_feature_co` | 2 |
| `mrsdb`.`mri_cr_p_r_mapping_featur` | 2 |
| `mrsdb`.`mri_cr_pad_flat_byte_def` | 22 |
| `mrsdb`.`mri_cr_pad_pwx_aso` | 0 |
| `mrsdb`.`mri_cr_pad_rd_sap_aso_op` | 9 |
| `mrsdb`.`mri_cr_pad_read_rel_dso` | 6 |
| `mrsdb`.`mri_cr_pad_rel_data_src` | 6 |
| `mrsdb`.`mri_cr_pad_sap_aso` | 2 |
| `mrsdb`.`mri_cr_pad_sap_con_info` | 1 |
| `mrsdb`.`mri_cr_pbm_canonic_data_m` | 2 |
| `mrsdb`.`mri_cr_pbm_canonic_data_o` | 9 |
| `mrsdb`.`mri_cr_pbm_connect_info` | 5142 |
| `mrsdb`.`mri_cr_pbm_mapping` | 45 |
| `mrsdb`.`mri_cr_project_group` | 32 |
| `mrsdb`.`mri_cr_repository` | 39 |
| `mrsdb`.`mri_cr_s_bg_attr_def` | 0 |
| `mrsdb`.`mri_cr_s_bg_audit_history` | 0 |
| `mrsdb`.`mri_cr_s_bg_category` | 0 |
| `mrsdb`.`mri_cr_s_bg_glossary` | 0 |
| `mrsdb`.`mri_cr_s_bg_initiative` | 0 |
| `mrsdb`.`mri_cr_s_bg_list_attr` | 0 |
| `mrsdb`.`mri_cr_s_bg_policy` | 0 |
| `mrsdb`.`mri_cr_s_bg_rel_type` | 0 |
| `mrsdb`.`mri_cr_s_bg_term` | 0 |
| `mrsdb`.`mri_cr_s_bound_expr` | 0 |
| `mrsdb`.`mri_cr_s_expr_set` | 0 |
| `mrsdb`.`mri_cr_s_lms_link_attr` | 0 |
| `mrsdb`.`mri_cr_s_prof_rec` | 0 |
| `mrsdb`.`mri_cr_s_ptask_details` | 0 |
| `mrsdb`.`mri_cr_s_r_sql_feature` | 0 |
| `mrsdb`.`mri_cr_s_r_sql_feature_co` | 0 |
| `mrsdb`.`mri_cr_s_r_virtual_schema` | 0 |
| `mrsdb`.`mri_cr_s_rule_binding` | 0 |
| `mrsdb`.`mri_cr_s_scard_details` | 0 |
| `mrsdb`.`mri_cr_s_scard_metric_grp` | 0 |
| `mrsdb`.`mri_cr_s_scard_threshold` | 0 |
| `mrsdb`.`mri_cr_s_source_spec` | 1 |
| `mrsdb`.`mri_cr_s_sql_end_pt` | 0 |
| `mrsdb`.`mri_cr_s_virtual_schema` | 99 |
| `mrsdb`.`mri_cr_s_virtual_table` | 0 |
| `mrsdb`.`mri_ext_flat_rdr_cni` | 7 |
| `mrsdb`.`mri_folder` | 272 |
| `mrsdb`.`mri_namespace` | 39 |
| `mrsdb`.`mri_p_app_project` | 13 |
| `mrsdb`.`mri_p_app_spec` | 0 |
| `mrsdb`.`mri_p_application` | 0 |
| `mrsdb`.`mri_p_application_stat` | 0 |
| `mrsdb`.`mri_p_attribute` | 8 |
| `mrsdb`.`mri_p_data_cache_acces` | 2 |
| `mrsdb`.`mri_p_data_obj` | 1 |
| `mrsdb`.`mri_p_data_obj_access` | 1 |
| `mrsdb`.`mri_p_data_obj_config` | 4 |
| `mrsdb`.`mri_p_data_obj_stat` | 1 |
| `mrsdb`.`mri_p_do_instance_stat` | 1 |
| `mrsdb`.`mri_p_feature` | 5 |
| `mrsdb`.`mri_p_mapping_feat_sta` | 1 |
| `mrsdb`.`mri_p_mapping_feature` | 5 |
| `mrsdb`.`mri_p_mapping_serv_req` | 1 |
| `mrsdb`.`mri_p_option` | 23 |
| `mrsdb`.`mri_p_option_group` | 25 |
| `mrsdb`.`mri_p_r_app_config` | 6 |
| `mrsdb`.`mri_p_r_application` | 1 |
| `mrsdb`.`mri_p_r_canon_data_mdl` | 4 |
| `mrsdb`.`mri_p_r_feature` | 3 |
| `mrsdb`.`mri_p_r_map_feature_co` | 2 |
| `mrsdb`.`mri_p_r_mapping_featur` | 1 |
| `mrsdb`.`mri_pad_flat_b_gen_ds` | 15 |
| `mrsdb`.`mri_pad_flat_b_pars_d` | 5 |
| `mrsdb`.`mri_pad_flat_byte_def` | 22 |
| `mrsdb`.`mri_pad_pwx_aso` | 0 |
| `mrsdb`.`mri_pad_pwx_aso_oper` | 2 |
| `mrsdb`.`mri_pad_pwx_rt_config` | 4 |
| `mrsdb`.`mri_pad_rd_sap_aso_op` | 3 |
| `mrsdb`.`mri_pad_read_rel_dso` | 7 |
| `mrsdb`.`mri_pad_rel_data_src` | 4 |
| `mrsdb`.`mri_pad_rel_dso_rt_cfg` | 11 |
| `mrsdb`.`mri_pad_rel_rec_instan` | 10 |
| `mrsdb`.`mri_pad_rel_record` | 12 |
| `mrsdb`.`mri_pad_sap_aso` | 2 |
| `mrsdb`.`mri_pad_sap_con_info` | 1 |
| `mrsdb`.`mri_pad_sap_rec_inst` | 2 |
| `mrsdb`.`mri_pad_sap_record` | 12 |
| `mrsdb`.`mri_pad_sap_rt_config` | 7 |
| `mrsdb`.`mri_pad_write_rel_dso` | 5 |
| `mrsdb`.`mri_pbm_abstract_tx` | 97 |
| `mrsdb`.`mri_pbm_canonic_data_m` | 1 |
| `mrsdb`.`mri_pbm_canonic_data_o` | 6 |
| `mrsdb`.`mri_pbm_characteristic` | 14 |
| `mrsdb`.`mri_pbm_connect_info` | 5198 |
| `mrsdb`.`mri_pbm_data_source` | 21 |
| `mrsdb`.`mri_pbm_description` | 10245 |
| `mrsdb`.`mri_pbm_instance` | 10 |
| `mrsdb`.`mri_pbm_mapping` | 8 |
| `mrsdb`.`mri_pbm_mapplet_tx` | 2 |
| `mrsdb`.`mri_pbm_read_tx` | 4 |
| `mrsdb`.`mri_pbm_resrc_access` | 6 |
| `mrsdb`.`mri_pbm_write_tx` | 8 |
| `mrsdb`.`mri_project` | 38 |
| `mrsdb`.`mri_project_group` | 11 |
| `mrsdb`.`mri_repository` | 0 |
| `mrsdb`.`mri_s_bg_attr_def` | 0 |
| `mrsdb`.`mri_s_bg_attr_inst` | 0 |
| `mrsdb`.`mri_s_bg_attr_value` | 0 |
| `mrsdb`.`mri_s_bg_attribute` | 0 |
| `mrsdb`.`mri_s_bg_audit_history` | 0 |
| `mrsdb`.`mri_s_bg_boolean_val` | 0 |
| `mrsdb`.`mri_s_bg_cat_rel` | 0 |
| `mrsdb`.`mri_s_bg_category` | 0 |
| `mrsdb`.`mri_s_bg_datetime_val` | 0 |
| `mrsdb`.`mri_s_bg_enum_val` | 0 |
| `mrsdb`.`mri_s_bg_glossary` | 0 |
| `mrsdb`.`mri_s_bg_init_rel` | 0 |
| `mrsdb`.`mri_s_bg_initiative` | 0 |
| `mrsdb`.`mri_s_bg_integer_val` | 0 |
| `mrsdb`.`mri_s_bg_list_attr` | 0 |
| `mrsdb`.`mri_s_bg_policy` | 0 |
| `mrsdb`.`mri_s_bg_policy_rel` | 0 |
| `mrsdb`.`mri_s_bg_prim_attr` | 0 |
| `mrsdb`.`mri_s_bg_rel_type` | 0 |
| `mrsdb`.`mri_s_bg_rule` | 0 |
| `mrsdb`.`mri_s_bg_string_val` | 0 |
| `mrsdb`.`mri_s_bg_synonym` | 0 |
| `mrsdb`.`mri_s_bg_term` | 0 |
| `mrsdb`.`mri_s_bg_term_rel` | 0 |
| `mrsdb`.`mri_s_bg_user` | 0 |
| `mrsdb`.`mri_s_bound_expr` | 0 |
| `mrsdb`.`mri_s_cdo_src_spec` | 1 |
| `mrsdb`.`mri_s_connection_stat` | 0 |
| `mrsdb`.`mri_s_der_field_set` | 1 |
| `mrsdb`.`mri_s_dso_src_spec` | 1 |
| `mrsdb`.`mri_s_expr_set` | 1 |
| `mrsdb`.`mri_s_lms_attr` | 0 |
| `mrsdb`.`mri_s_lms_link` | 0 |
| `mrsdb`.`mri_s_lms_link_attr` | 0 |
| `mrsdb`.`mri_s_lms_link_ep` | 0 |
| `mrsdb`.`mri_s_mapping_run_req` | 0 |
| `mrsdb`.`mri_s_prof_details` | 0 |
| `mrsdb`.`mri_s_prof_field` | 0 |
| `mrsdb`.`mri_s_prof_model_req` | 0 |
| `mrsdb`.`mri_s_prof_rec` | 0 |
| `mrsdb`.`mri_s_prof_run_req` | 0 |
| `mrsdb`.`mri_s_ptask_details` | 0 |
| `mrsdb`.`mri_s_r_sql_feature` | 0 |
| `mrsdb`.`mri_s_r_sql_feature_co` | 0 |
| `mrsdb`.`mri_s_r_stored_proc` | 0 |
| `mrsdb`.`mri_s_r_virtual_schema` | 0 |
| `mrsdb`.`mri_s_r_virtual_table` | 0 |
| `mrsdb`.`mri_s_rule_binding` | 1 |
| `mrsdb`.`mri_s_rule_output` | 0 |
| `mrsdb`.`mri_s_scard_details` | 0 |
| `mrsdb`.`mri_s_scard_fixed_cost` | 0 |
| `mrsdb`.`mri_s_scard_metric` | 0 |
| `mrsdb`.`mri_s_scard_metric_grp` | 0 |
| `mrsdb`.`mri_s_scard_threshold` | 0 |
| `mrsdb`.`mri_s_scard_tsld_range` | 0 |
| `mrsdb`.`mri_s_scard_var_cost` | 0 |
| `mrsdb`.`mri_s_scorecard_run_re` | 0 |
| `mrsdb`.`mri_s_source_spec` | 1 |
| `mrsdb`.`mri_s_sql_end_pt` | 0 |
| `mrsdb`.`mri_s_sql_feature` | 0 |
| `mrsdb`.`mri_s_sql_feature_stat` | 0 |
| `mrsdb`.`mri_s_sql_request_stat` | 0 |
| `mrsdb`.`mri_s_table_src_spec` | 1 |
| `mrsdb`.`mri_s_virtual_schema` | 99 |
| `mrsdb`.`mri_s_virtual_table` | 0 |
| `mrsdb`.`mri_s_vt_cache_inst` | 0 |
| `mrsdb`.`mri_s_vt_stat` | 0 |
| `mrsdb`.`mri_s_ws_feature_stat` | 0 |
| `mrsdb`.`mri_s_ws_operation` | 0 |
| `mrsdb`.`mri_s_ws_request_stat` | 0 |
| `mrsdb`.`pc_auditlog` | 1 |
| `mrsdb`.`pc_dbobject` | 828 |
| `mrsdb`.`pc_dbrconfig` | 1 |
| `mrsdb`.`pc_iclass_sig` | 2335 |
| `mrsdb`.`pc_namespace` | 39 |
| `mrsdb`.`pc_namespace_sequence` | 0 |
| `mrsdb`.`pc_ormstate` | 1 |
| `mrsdb`.`pc_package` | 430 |
| `mrsdb`.`pc_preprocess_kv_entry` | 0 |
| `mrsdb`.`pc_repository` | 0 |
| `mrsdb`.`pc_upgrade_log` | 0 |
| `mrsdb`.`po_abstractcatchableeven1` | 3 |
| `mrsdb`.`po_abstractcatchableevent` | 0 |
| `mrsdb`.`po_abstracthumanflowconfi` | 0 |
| `mrsdb`.`po_abstractloadbalancingc` | 0 |
| `mrsdb`.`po_abstractparameterbindi` | 29 |
| `mrsdb`.`po_abstractparameterconte` | 16 |
| `mrsdb`.`po_abstractpayload` | 17 |
| `mrsdb`.`po_abstractpropertyvalue` | 0 |
| `mrsdb`.`po_abstractsappackage` | 14 |
| `mrsdb`.`po_abstractschema` | 5 |
| `mrsdb`.`po_abstractschemaitem` | 29 |
| `mrsdb`.`po_abstractsequenceobject` | 4 |
| `mrsdb`.`po_abstracttarge1` | 44 |
| `mrsdb`.`po_abstracttarget` | 0 |
| `mrsdb`.`po_abstracttas1` | 46 |
| `mrsdb`.`po_abstracttas2` | 17 |
| `mrsdb`.`po_abstracttask` | 0 |
| `mrsdb`.`po_abstracttaskexecutionc` | 0 |
| `mrsdb`.`po_abstractthrowableeven1` | 0 |
| `mrsdb`.`po_abstractthrowableevent` | 0 |
| `mrsdb`.`po_abstracttransformation` | 12 |
| `mrsdb`.`po_abstractvalu1` | 44 |
| `mrsdb`.`po_abstractvalue` | 0 |
| `mrsdb`.`po_abstractview` | 0 |
| `mrsdb`.`po_ace` | 0 |
| `mrsdb`.`po_aceannotation` | 0 |
| `mrsdb`.`po_activitinstancehistory` | 0 |
| `mrsdb`.`po_advancedparameter` | 0 |
| `mrsdb`.`po_advancedsignature` | 0 |
| `mrsdb`.`po_aggregatetimestamp` | 0 |
| `mrsdb`.`po_aggregationtype` | 0 |
| `mrsdb`.`po_aggregationtypecontain` | 0 |
| `mrsdb`.`po_alertconfig` | 0 |
| `mrsdb`.`po_annotation` | 10245 |
| `mrsdb`.`po_application` | 0 |
| `mrsdb`.`po_applicationdataobjects` | 0 |
| `mrsdb`.`po_applicationgroup` | 0 |
| `mrsdb`.`po_applicationregistryent` | 4 |
| `mrsdb`.`po_applicationspec` | 0 |
| `mrsdb`.`po_applicationstat` | 0 |
| `mrsdb`.`po_assignmentexpression` | 1 |
| `mrsdb`.`po_attachment` | 9 |
| `mrsdb`.`po_attachmenthistory` | 0 |
| `mrsdb`.`po_attribut1` | 0 |
| `mrsdb`.`po_attribut2` | 3 |
| `mrsdb`.`po_attribute` | 8 |
| `mrsdb`.`po_attributes` | 0 |
| `mrsdb`.`po_avbasereport` | 0 |
| `mrsdb`.`po_avoption` | 0 |
| `mrsdb`.`po_backpointer` | 0 |
| `mrsdb`.`po_basecadiengineconfig` | 0 |
| `mrsdb`.`po_beanstats` | 0 |
| `mrsdb`.`po_bgacl` | 0 |
| `mrsdb`.`po_bgassetproperties` | 0 |
| `mrsdb`.`po_bgattribute` | 0 |
| `mrsdb`.`po_bgattributedefinition` | 0 |
| `mrsdb`.`po_bgattributeinstance` | 0 |
| `mrsdb`.`po_bgattributevalue` | 0 |
| `mrsdb`.`po_bgaudithistory` | 0 |
| `mrsdb`.`po_bgcategory` | 0 |
| `mrsdb`.`po_bgcategoryrelationship` | 0 |
| `mrsdb`.`po_bgglossary` | 0 |
| `mrsdb`.`po_bgglossaryprivilegemap` | 0 |
| `mrsdb`.`po_bginitiative` | 0 |
| `mrsdb`.`po_bginitiativeobject` | 0 |
| `mrsdb`.`po_bglineagefilter` | 0 |
| `mrsdb`.`po_bgpolicy` | 0 |
| `mrsdb`.`po_bgpolicyrelationship` | 0 |
| `mrsdb`.`po_bgprivilege` | 0 |
| `mrsdb`.`po_bgprivilegeassignment` | 0 |
| `mrsdb`.`po_bgproperty` | 0 |
| `mrsdb`.`po_bgpropertygroup` | 0 |
| `mrsdb`.`po_bgpropertyvalue` | 0 |
| `mrsdb`.`po_bgrelationshiptype` | 0 |
| `mrsdb`.`po_bgrole` | 0 |
| `mrsdb`.`po_bgrule` | 0 |
| `mrsdb`.`po_bgsynonym` | 0 |
| `mrsdb`.`po_bgtemplate` | 0 |
| `mrsdb`.`po_bgtemplateelement` | 0 |
| `mrsdb`.`po_bgtemplateelementcontr` | 0 |
| `mrsdb`.`po_bgtemplatelayout` | 0 |
| `mrsdb`.`po_bgterm` | 0 |
| `mrsdb`.`po_bgtermrelationship` | 0 |
| `mrsdb`.`po_bguser` | 0 |
| `mrsdb`.`po_bguserobjectassociatio` | 0 |
| `mrsdb`.`po_bgvotingtaskresult` | 0 |
| `mrsdb`.`po_boxedreceiverobjec1` | 137 |
| `mrsdb`.`po_boxedreceiverobject` | 0 |
| `mrsdb`.`po_bufferblock` | 0 |
| `mrsdb`.`po_bytearray` | 0 |
| `mrsdb`.`po_cadisessionstat` | 1 |
| `mrsdb`.`po_cadisessionstatsummary` | 0 |
| `mrsdb`.`po_callbackmessage` | 0 |
| `mrsdb`.`po_canonicaldatamodel` | 1 |
| `mrsdb`.`po_capabilit1` | 0 |
| `mrsdb`.`po_capability` | 63 |
| `mrsdb`.`po_cascadeiudoptions` | 0 |
| `mrsdb`.`po_castsetting` | 124 |
| `mrsdb`.`po_changelist` | 1 |
| `mrsdb`.`po_changelistobject` | 1 |
| `mrsdb`.`po_characteristi1` | 4 |
| `mrsdb`.`po_characteristi2` | 14 |
| `mrsdb`.`po_characteristi3` | 3 |
| `mrsdb`.`po_characteristic` | 0 |
| `mrsdb`.`po_characterrange` | 0 |
| `mrsdb`.`po_classifieroperation` | 0 |
| `mrsdb`.`po_clusterconf` | 2 |
| `mrsdb`.`po_clusterconfiguration` | 0 |
| `mrsdb`.`po_clusteringport` | 0 |
| `mrsdb`.`po_clusteringstrategy` | 0 |
| `mrsdb`.`po_clusterusagestats` | 0 |
| `mrsdb`.`po_codepage` | 0 |
| `mrsdb`.`po_codepagesystem` | 0 |
| `mrsdb`.`po_collectionentry_0` | 2670 |
| `mrsdb`.`po_collectionentry_1` | 2921 |
| `mrsdb`.`po_collectionentry_2` | 2607 |
| `mrsdb`.`po_collectionentry_3` | 2325 |
| `mrsdb`.`po_collectionentry_4` | 2684 |
| `mrsdb`.`po_collectionentry_5` | 2810 |
| `mrsdb`.`po_collectionentry_6` | 2335 |
| `mrsdb`.`po_collectionentry_7` | 2034 |
| `mrsdb`.`po_collectionentry_8` | 2446 |
| `mrsdb`.`po_collectionentry_9` | 2737 |
| `mrsdb`.`po_columnfeature` | 31 |
| `mrsdb`.`po_columnstrateg1` | 22 |
| `mrsdb`.`po_columnstrategy` | 22 |
| `mrsdb`.`po_commen1` | 0 |
| `mrsdb`.`po_comment` | 0 |
| `mrsdb`.`po_commentedobject` | 0 |
| `mrsdb`.`po_commenthistory` | 0 |
| `mrsdb`.`po_commonoutcomecontainer` | 0 |
| `mrsdb`.`po_component` | 0 |
| `mrsdb`.`po_componentinterface` | 0 |
| `mrsdb`.`po_compositedatadomain` | 0 |
| `mrsdb`.`po_compositedomainelement` | 0 |
| `mrsdb`.`po_compositedomaininferen` | 0 |
| `mrsdb`.`po_conditio1` | 0 |
| `mrsdb`.`po_condition` | 0 |
| `mrsdb`.`po_conffetcherparameter` | 0 |
| `mrsdb`.`po_configobject` | 13 |
| `mrsdb`.`po_confnamespace` | 0 |
| `mrsdb`.`po_confproperty` | 0 |
| `mrsdb`.`po_connectinfo` | 5198 |
| `mrsdb`.`po_connectinfoclassifierc` | 0 |
| `mrsdb`.`po_connectioninfo` | 0 |
| `mrsdb`.`po_connectionpoolattribut` | 5109 |
| `mrsdb`.`po_connectionreference` | 0 |
| `mrsdb`.`po_connectionstat` | 0 |
| `mrsdb`.`po_consolidationconfigura` | 0 |
| `mrsdb`.`po_contactdetails` | 5 |
| `mrsdb`.`po_contentcontainer` | 0 |
| `mrsdb`.`po_correlationinfo` | 0 |
| `mrsdb`.`po_costcoefficientconfig` | 0 |
| `mrsdb`.`po_costcoefficients` | 4 |
| `mrsdb`.`po_counterstate` | 2 |
| `mrsdb`.`po_criterion` | 0 |
| `mrsdb`.`po_customproperty` | 3 |
| `mrsdb`.`po_d_arc` | 17 |
| `mrsdb`.`po_d_capabilityattributes` | 34 |
| `mrsdb`.`po_d_fieldbase` | 40 |
| `mrsdb`.`po_d_modelextensioncallca` | 12 |
| `mrsdb`.`po_d_modelextensionlookup` | 6 |
| `mrsdb`.`po_d_modelextensionoperat` | 6 |
| `mrsdb`.`po_d_modelextensionreadca` | 6 |
| `mrsdb`.`po_d_modelextensionwritec` | 7 |
| `mrsdb`.`po_d_operationbase` | 48 |
| `mrsdb`.`po_d_projection` | 19 |
| `mrsdb`.`po_d_reusableexpression` | 10 |
| `mrsdb`.`po_darc` | 0 |
| `mrsdb`.`po_dataaccessgroup` | 199 |
| `mrsdb`.`po_dataadapter` | 0 |
| `mrsdb`.`po_datadomain` | 0 |
| `mrsdb`.`po_datadomaincreationcrit` | 0 |
| `mrsdb`.`po_datadomaingroup` | 0 |
| `mrsdb`.`po_datadomainmembershipde` | 0 |
| `mrsdb`.`po_dataelementref` | 18 |
| `mrsdb`.`po_datahandler` | 97 |
| `mrsdb`.`po_datainterface` | 86 |
| `mrsdb`.`po_datamaptype` | 0 |
| `mrsdb`.`po_dataobject` | 14 |
| `mrsdb`.`po_dataobjectaccessstat` | 1 |
| `mrsdb`.`po_dataobjectattribute` | 4 |
| `mrsdb`.`po_dataobjectcacheinstan1` | 1 |
| `mrsdb`.`po_dataobjectcacheinstanc` | 2 |
| `mrsdb`.`po_dataobjectinfo` | 0 |
| `mrsdb`.`po_dataobjectkey` | 4 |
| `mrsdb`.`po_dataobjectreference` | 4 |
| `mrsdb`.`po_dataobjectstat` | 1 |
| `mrsdb`.`po_dataobjecttypeinfo` | 0 |
| `mrsdb`.`po_datapartitioningstrate` | 6 |
| `mrsdb`.`po_dataserviceconfig` | 0 |
| `mrsdb`.`po_datasource` | 28 |
| `mrsdb`.`po_datasourceoperation` | 54 |
| `mrsdb`.`po_datatypeconversion` | 0 |
| `mrsdb`.`po_datatypemap` | 0 |
| `mrsdb`.`po_defaultosprofile` | 3 |
| `mrsdb`.`po_definition` | 183 |
| `mrsdb`.`po_deployedns` | 0 |
| `mrsdb`.`po_deployment` | 0 |
| `mrsdb`.`po_deploymentdiagnostic` | 0 |
| `mrsdb`.`po_deploymentrecord` | 3 |
| `mrsdb`.`po_deploymentrequest` | 0 |
| `mrsdb`.`po_derivedfieldset` | 1 |
| `mrsdb`.`po_detaileddomaineventsta` | 0 |
| `mrsdb`.`po_detailhistory` | 0 |
| `mrsdb`.`po_dfield` | 0 |
| `mrsdb`.`po_dgraph` | 0 |
| `mrsdb`.`po_dgroupnode` | 0 |
| `mrsdb`.`po_dictionary` | 0 |
| `mrsdb`.`po_digestpolicy` | 0 |
| `mrsdb`.`po_dnode` | 0 |
| `mrsdb`.`po_document` | 4 |
| `mrsdb`.`po_domainconfidenceconfig` | 0 |
| `mrsdb`.`po_domaineventstat` | 1 |
| `mrsdb`.`po_domaininferencefeature` | 0 |
| `mrsdb`.`po_domaininfo` | 3 |
| `mrsdb`.`po_domainproximitydefinit` | 0 |
| `mrsdb`.`po_domainserviceprocessev` | 6 |
| `mrsdb`.`po_dprunparameters` | 0 |
| `mrsdb`.`po_dqcontentitem` | 0 |
| `mrsdb`.`po_dqobject` | 0 |
| `mrsdb`.`po_dscibase` | 1 |
| `mrsdb`.`po_dsnamedelement` | 70 |
| `mrsdb`.`po_dsoruntimeconfig` | 97 |
| `mrsdb`.`po_dtbundle` | 0 |
| `mrsdb`.`po_dtdocument` | 0 |
| `mrsdb`.`po_dtobject` | 0 |
| `mrsdb`.`po_dtp` | 0 |
| `mrsdb`.`po_dtpexportmanifest` | 0 |
| `mrsdb`.`po_dtportbinding` | 0 |
| `mrsdb`.`po_dtpproxy` | 0 |
| `mrsdb`.`po_dtservice` | 0 |
| `mrsdb`.`po_dtstructuralfeatureove` | 0 |
| `mrsdb`.`po_dynamiccachefield` | 0 |
| `mrsdb`.`po_dynamicloadbalancingta` | 0 |
| `mrsdb`.`po_element` | 0 |
| `mrsdb`.`po_emailattachment` | 1 |
| `mrsdb`.`po_emailconfig` | 0 |
| `mrsdb`.`po_emailcontent` | 2 |
| `mrsdb`.`po_encoding` | 0 |
| `mrsdb`.`po_endpoint` | 0 |
| `mrsdb`.`po_entityfielddescriptor` | 0 |
| `mrsdb`.`po_enumeratedvaluequalifi` | 183 |
| `mrsdb`.`po_execution` | 0 |
| `mrsdb`.`po_executionenvironment` | 0 |
| `mrsdb`.`po_executionenvironmentty` | 0 |
| `mrsdb`.`po_executioninfo` | 0 |
| `mrsdb`.`po_executionitem` | 0 |
| `mrsdb`.`po_executionparameter` | 0 |
| `mrsdb`.`po_executionplan` | 0 |
| `mrsdb`.`po_executionstat` | 0 |
| `mrsdb`.`po_executionstep` | 5 |
| `mrsdb`.`po_executionstepdependenc` | 4 |
| `mrsdb`.`po_executionstepstat` | 4 |
| `mrsdb`.`po_expressio1` | 6 |
| `mrsdb`.`po_expression` | 0 |
| `mrsdb`.`po_extensibilityelement` | 18 |
| `mrsdb`.`po_extensibleelementexter` | 2 |
| `mrsdb`.`po_extensibleelementinter` | 12 |
| `mrsdb`.`po_extensionattribute` | 85 |
| `mrsdb`.`po_extensioncontainer` | 115 |
| `mrsdb`.`po_extention` | 0 |
| `mrsdb`.`po_failure` | 0 |
| `mrsdb`.`po_feature` | 5 |
| `mrsdb`.`po_featurepath` | 9 |
| `mrsdb`.`po_featureregistryentry` | 4 |
| `mrsdb`.`po_fielddescriptor` | 0 |
| `mrsdb`.`po_fieldmaplinkage` | 10 |
| `mrsdb`.`po_fieldmapspec` | 10 |
| `mrsdb`.`po_fieldparameter` | 0 |
| `mrsdb`.`po_fieldselectorscop1` | 63 |
| `mrsdb`.`po_fieldselectorscope` | 74 |
| `mrsdb`.`po_fieldtestdata` | 0 |
| `mrsdb`.`po_fileinf1` | 9 |
| `mrsdb`.`po_fileinfo` | 1 |
| `mrsdb`.`po_filter` | 0 |
| `mrsdb`.`po_filtercondition` | 184 |
| `mrsdb`.`po_filtercriteria` | 0 |
| `mrsdb`.`po_fixedshiftstate` | 2 |
| `mrsdb`.`po_flatbytedefinition` | 25 |
| `mrsdb`.`po_flatbyteextension` | 14 |
| `mrsdb`.`po_flatfiledictonarysourc` | 0 |
| `mrsdb`.`po_folder` | 272 |
| `mrsdb`.`po_foreignke1` | 3 |
| `mrsdb`.`po_foreignkey` | 0 |
| `mrsdb`.`po_foreignkeys` | 0 |
| `mrsdb`.`po_frequencycell` | 0 |
| `mrsdb`.`po_ftpconnectionattribute` | 0 |
| `mrsdb`.`po_function` | 0 |
| `mrsdb`.`po_functionaldependency` | 0 |
| `mrsdb`.`po_functioncategory` | 0 |
| `mrsdb`.`po_functioncontainer` | 0 |
| `mrsdb`.`po_functiondefinitio1` | 0 |
| `mrsdb`.`po_functiondefinition` | 0 |
| `mrsdb`.`po_functiongrou1` | 7 |
| `mrsdb`.`po_functiongroup` | 0 |
| `mrsdb`.`po_generatestrategy` | 14 |
| `mrsdb`.`po_generationstrategy` | 0 |
| `mrsdb`.`po_genericdatahandlingopt` | 0 |
| `mrsdb`.`po_generictaskproperty` | 0 |
| `mrsdb`.`po_globalnotificationconf` | 0 |
| `mrsdb`.`po_glossary` | 5 |
| `mrsdb`.`po_groupdocumentobject` | 0 |
| `mrsdb`.`po_groupengine` | 0 |
| `mrsdb`.`po_grouppartitioningschem` | 6 |
| `mrsdb`.`po_groupstat` | 12 |
| `mrsdb`.`po_hadoopclusterinfo` | 0 |
| `mrsdb`.`po_hadoopdistribution` | 0 |
| `mrsdb`.`po_hadoopexecutionconfig` | 0 |
| `mrsdb`.`po_hadoopexecutionparamet` | 0 |
| `mrsdb`.`po_hadoopimpersonationpro` | 3 |
| `mrsdb`.`po_hcomponent` | 2 |
| `mrsdb`.`po_header` | 22 |
| `mrsdb`.`po_hgroup` | 2 |
| `mrsdb`.`po_hierarchicalpattern` | 0 |
| `mrsdb`.`po_historysummary` | 0 |
| `mrsdb`.`po_hiveinfo` | 0 |
| `mrsdb`.`po_hivepushdownengineconf` | 0 |
| `mrsdb`.`po_hivesessionstat` | 0 |
| `mrsdb`.`po_hivesessionstatsummary` | 1 |
| `mrsdb`.`po_hport` | 5 |
| `mrsdb`.`po_httpfileinfo` | 1 |
| `mrsdb`.`po_icpconfigurationobject` | 0 |
| `mrsdb`.`po_identitylink` | 0 |
| `mrsdb`.`po_idmssetowner` | 0 |
| `mrsdb`.`po_idobj` | 71 |
| `mrsdb`.`po_idtoqualnames` | 11503 |
| `mrsdb`.`po_ilmconnection` | 0 |
| `mrsdb`.`po_ilmconnectionproperty` | 0 |
| `mrsdb`.`po_imfpackageupgradeinfo` | 0 |
| `mrsdb`.`po_ingestionconfig` | 0 |
| `mrsdb`.`po_injectionpoint` | 2 |
| `mrsdb`.`po_input` | 0 |
| `mrsdb`.`po_inputbindin1` | 44 |
| `mrsdb`.`po_inputbinding` | 0 |
| `mrsdb`.`po_instanc1` | 0 |
| `mrsdb`.`po_instance` | 10 |
| `mrsdb`.`po_instancelineage` | 0 |
| `mrsdb`.`po_invaliddatacost` | 0 |
| `mrsdb`.`po_invalidfield` | 0 |
| `mrsdb`.`po_ispboolean` | 0 |
| `mrsdb`.`po_ispinteger` | 0 |
| `mrsdb`.`po_isplong` | 0 |
| `mrsdb`.`po_ispstring` | 0 |
| `mrsdb`.`po_job` | 0 |
| `mrsdb`.`po_join` | 185 |
| `mrsdb`.`po_joinconditio1` | 0 |
| `mrsdb`.`po_joincondition` | 182 |
| `mrsdb`.`po_ke1` | 0 |
| `mrsdb`.`po_key` | 0 |
| `mrsdb`.`po_keyfield` | 1 |
| `mrsdb`.`po_keyrangefield` | 1 |
| `mrsdb`.`po_keystatistics` | 0 |
| `mrsdb`.`po_keytabauthparams` | 0 |
| `mrsdb`.`po_keyvalu1` | 0 |
| `mrsdb`.`po_keyvalu2` | 2 |
| `mrsdb`.`po_keyvalue` | 0 |
| `mrsdb`.`po_keyvaluecollection` | 2 |
| `mrsdb`.`po_l_arc` | 6 |
| `mrsdb`.`po_l_condition` | 0 |
| `mrsdb`.`po_l_constraint` | 2 |
| `mrsdb`.`po_l_dataformatconfi1` | 100 |
| `mrsdb`.`po_l_dataformatconfig` | 106 |
| `mrsdb`.`po_l_graph` | 9 |
| `mrsdb`.`po_l_index` | 2 |
| `mrsdb`.`po_l_indexfield` | 2 |
| `mrsdb`.`po_l_modelextensionarc` | 4 |
| `mrsdb`.`po_l_modelextensioncondit` | 0 |
| `mrsdb`.`po_l_modelextensionconstr` | 2 |
| `mrsdb`.`po_l_modelextensiondatafo` | 67 |
| `mrsdb`.`po_l_modelextensionfieldb` | 2 |
| `mrsdb`.`po_l_modelextensiongraph` | 4 |
| `mrsdb`.`po_l_modelextensionindex` | 2 |
| `mrsdb`.`po_l_modelextensionindexf` | 1 |
| `mrsdb`.`po_l_modelextensionnode` | 7 |
| `mrsdb`.`po_l_node` | 8 |
| `mrsdb`.`po_label` | 0 |
| `mrsdb`.`po_larc` | 0 |
| `mrsdb`.`po_lbinarycontainer` | 3 |
| `mrsdb`.`po_lblob` | 0 |
| `mrsdb`.`po_lcontainer` | 1 |
| `mrsdb`.`po_ldapnamespace` | 7 |
| `mrsdb`.`po_ldapoptions` | 7 |
| `mrsdb`.`po_lgraph` | 0 |
| `mrsdb`.`po_librar1` | 0 |
| `mrsdb`.`po_librar2` | 0 |
| `mrsdb`.`po_librar3` | 0 |
| `mrsdb`.`po_library` | 0 |
| `mrsdb`.`po_lightattribute` | 0 |
| `mrsdb`.`po_lightflow` | 0 |
| `mrsdb`.`po_lighttaskinstance` | 0 |
| `mrsdb`.`po_lightworkflow` | 0 |
| `mrsdb`.`po_link` | 0 |
| `mrsdb`.`po_linkageorder` | 10 |
| `mrsdb`.`po_linkattributes` | 0 |
| `mrsdb`.`po_linkpolicy` | 8 |
| `mrsdb`.`po_lnode` | 0 |
| `mrsdb`.`po_loadbalancinginfo` | 0 |
| `mrsdb`.`po_loadorderconstraint` | 6 |
| `mrsdb`.`po_loadorderstrategy` | 6 |
| `mrsdb`.`po_lockedobject` | 0 |
| `mrsdb`.`po_lockedobjectsummary` | 0 |
| `mrsdb`.`po_logicaldb` | 4 |
| `mrsdb`.`po_logicalresource` | 0 |
| `mrsdb`.`po_logicalschema` | 5 |
| `mrsdb`.`po_logmessage` | 0 |
| `mrsdb`.`po_looku1` | 0 |
| `mrsdb`.`po_lookup` | 0 |
| `mrsdb`.`po_lscibase` | 5 |
| `mrsdb`.`po_maccesscontrolentry` | 41 |
| `mrsdb`.`po_maccesscontrolentryann` | 0 |
| `mrsdb`.`po_maccesscontrollist` | 38 |
| `mrsdb`.`po_mapping` | 10 |
| `mrsdb`.`po_mappingfeatureconfig` | 5 |
| `mrsdb`.`po_mappingfeaturestat` | 1 |
| `mrsdb`.`po_mappinglineage` | 0 |
| `mrsdb`.`po_mappingoutputcontainer` | 0 |
| `mrsdb`.`po_mappingpersistedoutpu1` | 0 |
| `mrsdb`.`po_mappingpersistedoutput` | 0 |
| `mrsdb`.`po_mappingproxy` | 0 |
| `mrsdb`.`po_mappingruninfo` | 0 |
| `mrsdb`.`po_mappingrunrequeststat` | 0 |
| `mrsdb`.`po_mappingspec` | 37 |
| `mrsdb`.`po_mappingstat` | 0 |
| `mrsdb`.`po_mappingstatref` | 0 |
| `mrsdb`.`po_mappletinterfacecardin` | 5 |
| `mrsdb`.`po_maskingdata` | 0 |
| `mrsdb`.`po_masterlookupfieldrole` | 0 |
| `mrsdb`.`po_medomain` | 0 |
| `mrsdb`.`po_medomaindefinition` | 0 |
| `mrsdb`.`po_mergekind` | 6 |
| `mrsdb`.`po_messag1` | 0 |
| `mrsdb`.`po_message` | 0 |
| `mrsdb`.`po_messagebody` | 0 |
| `mrsdb`.`po_messageitem` | 0 |
| `mrsdb`.`po_messages` | 0 |
| `mrsdb`.`po_metadataconnectionspec` | 0 |
| `mrsdb`.`po_metadataextensionattri` | 6326 |
| `mrsdb`.`po_metadatarepository` | 0 |
| `mrsdb`.`po_mirequeststat` | 0 |
| `mrsdb`.`po_mispec` | 0 |
| `mrsdb`.`po_modelenumentry` | 0 |
| `mrsdb`.`po_modelrepository` | 4 |
| `mrsdb`.`po_monitoringaggregatesta` | 1 |
| `mrsdb`.`po_mpprocesseddataupgrade` | 2 |
| `mrsdb`.`po_mpproject` | 38 |
| `mrsdb`.`po_mpprojectgroup` | 11 |
| `mrsdb`.`po_mpsearchsysteminfo` | 1 |
| `mrsdb`.`po_mpserviceinfo` | 0 |
| `mrsdb`.`po_nameruleentries` | 69 |
| `mrsdb`.`po_namespace` | 30 |
| `mrsdb`.`po_namespacecontent` | 5 |
| `mrsdb`.`po_namespaceref` | 5 |
| `mrsdb`.`po_namespacetoprefixmap` | 5 |
| `mrsdb`.`po_namevaluepai1` | 114 |
| `mrsdb`.`po_namevaluepai2` | 0 |
| `mrsdb`.`po_namevaluepair` | 948 |
| `mrsdb`.`po_nativepartitioningconf` | 0 |
| `mrsdb`.`po_nativesessionstat` | 0 |
| `mrsdb`.`po_nativesessionstatsumma` | 0 |
| `mrsdb`.`po_nesteditem` | 0 |
| `mrsdb`.`po_nestedscopedescriptor` | 11 |
| `mrsdb`.`po_nlpentity` | 0 |
| `mrsdb`.`po_nodeaddress` | 0 |
| `mrsdb`.`po_nodehourlysummarizedst` | 0 |
| `mrsdb`.`po_nodelevelsummarization` | 0 |
| `mrsdb`.`po_nodeprocessstat` | 0 |
| `mrsdb`.`po_nodeprocesstupleidenti` | 0 |
| `mrsdb`.`po_noderef` | 0 |
| `mrsdb`.`po_normalizernode` | 5 |
| `mrsdb`.`po_normalizersrcfield` | 4 |
| `mrsdb`.`po_notificatio1` | 19 |
| `mrsdb`.`po_notification` | 0 |
| `mrsdb`.`po_notificationconfig` | 0 |
| `mrsdb`.`po_notificationconfigprop` | 0 |
| `mrsdb`.`po_notifications` | 0 |
| `mrsdb`.`po_notificationselector` | 0 |
| `mrsdb`.`po_notificationtransporti` | 1 |
| `mrsdb`.`po_numbersetvaluerule` | 0 |
| `mrsdb`.`po_objectgraphdecoration` | 0 |
| `mrsdb`.`po_objecthistory` | 0 |
| `mrsdb`.`po_objectidentifier` | 0 |
| `mrsdb`.`po_objectrefcontainer` | 166 |
| `mrsdb`.`po_objectreferenc1` | 203 |
| `mrsdb`.`po_objectreference` | 14 |
| `mrsdb`.`po_objectscopedescriptor` | 14 |
| `mrsdb`.`po_optimizerconfig` | 0 |
| `mrsdb`.`po_optio1` | 23 |
| `mrsdb`.`po_option` | 0 |
| `mrsdb`.`po_optiongroup` | 25 |
| `mrsdb`.`po_optionset` | 0 |
| `mrsdb`.`po_osprofile` | 3 |
| `mrsdb`.`po_osprofileproductextens` | 3 |
| `mrsdb`.`po_outcome` | 0 |
| `mrsdb`.`po_outlinelink` | 10 |
| `mrsdb`.`po_output` | 10 |
| `mrsdb`.`po_outputbindin1` | 44 |
| `mrsdb`.`po_outputbinding` | 0 |
| `mrsdb`.`po_outputexpression` | 10 |
| `mrsdb`.`po_outputfield` | 0 |
| `mrsdb`.`po_outputfieldlink` | 0 |
| `mrsdb`.`po_outputmap` | 0 |
| `mrsdb`.`po_outputstat` | 0 |
| `mrsdb`.`po_outputvalue` | 0 |
| `mrsdb`.`po_pairgenconfiguration` | 0 |
| `mrsdb`.`po_param` | 0 |
| `mrsdb`.`po_paramete1` | 0 |
| `mrsdb`.`po_paramete2` | 128 |
| `mrsdb`.`po_parameter` | 0 |
| `mrsdb`.`po_parametercontainer` | 0 |
| `mrsdb`.`po_parameterentry` | 9 |
| `mrsdb`.`po_parametermodel` | 4 |
| `mrsdb`.`po_parameterreference` | 0 |
| `mrsdb`.`po_parameterset` | 7 |
| `mrsdb`.`po_parametertype` | 0 |
| `mrsdb`.`po_parametervalue` | 137 |
| `mrsdb`.`po_parametervaluemodel` | 0 |
| `mrsdb`.`po_parametervaluesmodel` | 0 |
| `mrsdb`.`po_paramfieldmapport` | 17 |
| `mrsdb`.`po_paramfieldmapportwrapp` | 8 |
| `mrsdb`.`po_paramlocator` | 0 |
| `mrsdb`.`po_paramval` | 0 |
| `mrsdb`.`po_parc` | 0 |
| `mrsdb`.`po_parentinfo` | 1 |
| `mrsdb`.`po_parseroperation` | 0 |
| `mrsdb`.`po_parsestrategy` | 4 |
| `mrsdb`.`po_partitionallocationstr` | 1 |
| `mrsdb`.`po_partitioningkey` | 16 |
| `mrsdb`.`po_partitioningkeyranges` | 2 |
| `mrsdb`.`po_partitioningkeyvalues` | 3 |
| `mrsdb`.`po_partitioningscheme` | 27 |
| `mrsdb`.`po_partitionkeyrange` | 1 |
| `mrsdb`.`po_partitionpoint` | 6 |
| `mrsdb`.`po_passwordcomplexityconf` | 3 |
| `mrsdb`.`po_passwordrulemodel` | 3 |
| `mrsdb`.`po_patternparseroperation` | 0 |
| `mrsdb`.`po_pcapabilityinfo` | 0 |
| `mrsdb`.`po_permission` | 0 |
| `mrsdb`.`po_permissionlist` | 0 |
| `mrsdb`.`po_pfield` | 0 |
| `mrsdb`.`po_pgraph` | 0 |
| `mrsdb`.`po_pimfpackageinfo` | 0 |
| `mrsdb`.`po_planinfo` | 0 |
| `mrsdb`.`po_pnode` | 0 |
| `mrsdb`.`po_port` | 46 |
| `mrsdb`.`po_portlineage` | 2 |
| `mrsdb`.`po_portpolicy` | 0 |
| `mrsdb`.`po_preference` | 0 |
| `mrsdb`.`po_pregisteredcapabilitie` | 0 |
| `mrsdb`.`po_primarykey` | 0 |
| `mrsdb`.`po_primitivevalue` | 23 |
| `mrsdb`.`po_principal` | 0 |
| `mrsdb`.`po_privilege` | 12 |
| `mrsdb`.`po_processdefinition` | 0 |
| `mrsdb`.`po_processinstancehistory` | 0 |
| `mrsdb`.`po_processstat` | 1 |
| `mrsdb`.`po_processstatsummary` | 2 |
| `mrsdb`.`po_profileabledatadomain` | 0 |
| `mrsdb`.`po_profileablefiel1` | 3 |
| `mrsdb`.`po_profileablefield` | 0 |
| `mrsdb`.`po_profileablerecor1` | 1 |
| `mrsdb`.`po_profileablerecord` | 0 |
| `mrsdb`.`po_profiledefinitio1` | 2 |
| `mrsdb`.`po_profiledefinition` | 0 |
| `mrsdb`.`po_profiletask` | 0 |
| `mrsdb`.`po_profiletunin1` | 0 |
| `mrsdb`.`po_profiletuning` | 0 |
| `mrsdb`.`po_project` | 4 |
| `mrsdb`.`po_projectgrouptenantmapp` | 3 |
| `mrsdb`.`po_projectscopedescriptor` | 8 |
| `mrsdb`.`po_propert1` | 0 |
| `mrsdb`.`po_propert2` | 0 |
| `mrsdb`.`po_property` | 0 |
| `mrsdb`.`po_propertyvalue` | 0 |
| `mrsdb`.`po_propidtocolumn` | 5228 |
| `mrsdb`.`po_pscibase` | 2 |
| `mrsdb`.`po_pwxasooperationfield` | 1 |
| `mrsdb`.`po_pwxconnectivitytype` | 0 |
| `mrsdb`.`po_pwxdatamapproperties` | 0 |
| `mrsdb`.`po_pwxfieldinfo` | 0 |
| `mrsdb`.`po_pwxrecordlevelexit` | 0 |
| `mrsdb`.`po_queryfilterparams` | 0 |
| `mrsdb`.`po_queryresult` | 1 |
| `mrsdb`.`po_rangestatistics` | 0 |
| `mrsdb`.`po_rangevalue` | 0 |
| `mrsdb`.`po_rarc` | 0 |
| `mrsdb`.`po_recipient` | 0 |
| `mrsdb`.`po_record` | 3 |
| `mrsdb`.`po_recordkey` | 2 |
| `mrsdb`.`po_recordstrategy` | 22 |
| `mrsdb`.`po_recurrence` | 24 |
| `mrsdb`.`po_referencetable` | 0 |
| `mrsdb`.`po_relationalconnectivity` | 0 |
| `mrsdb`.`po_relationalfield` | 10 |
| `mrsdb`.`po_relationalrecord` | 12 |
| `mrsdb`.`po_relationalrecordinstan` | 10 |
| `mrsdb`.`po_relationship` | 0 |
| `mrsdb`.`po_renameoption` | 635 |
| `mrsdb`.`po_repositoryoperationali` | 0 |
| `mrsdb`.`po_repositoryoperationres` | 0 |
| `mrsdb`.`po_repositoryupgradeinfo` | 0 |
| `mrsdb`.`po_requestresponse` | 0 |
| `mrsdb`.`po_requeststat` | 1 |
| `mrsdb`.`po_resource` | 0 |
| `mrsdb`.`po_resourcedescriptor` | 0 |
| `mrsdb`.`po_response` | 0 |
| `mrsdb`.`po_responsestatusobject` | 0 |
| `mrsdb`.`po_restendpoint` | 0 |
| `mrsdb`.`po_restfeaturestat` | 0 |
| `mrsdb`.`po_restfieldmetadata` | 3 |
| `mrsdb`.`po_restforeignidentifier` | 0 |
| `mrsdb`.`po_restidentifier` | 0 |
| `mrsdb`.`po_restoperationstat` | 0 |
| `mrsdb`.`po_restpushintomapentry` | 1 |
| `mrsdb`.`po_restresource` | 0 |
| `mrsdb`.`po_result` | 1 |
| `mrsdb`.`po_retrypolicy` | 0 |
| `mrsdb`.`po_rfield` | 0 |
| `mrsdb`.`po_rhcomponent` | 2 |
| `mrsdb`.`po_rhgroup` | 2 |
| `mrsdb`.`po_rnode` | 0 |
| `mrsdb`.`po_routingconfig` | 0 |
| `mrsdb`.`po_rul1` | 0 |
| `mrsdb`.`po_rule` | 510 |
| `mrsdb`.`po_ruledefinition` | 0 |
| `mrsdb`.`po_rulefieldbinding` | 0 |
| `mrsdb`.`po_ruleoutput` | 0 |
| `mrsdb`.`po_rulespecfieldbinding` | 0 |
| `mrsdb`.`po_rulespecoutput` | 0 |
| `mrsdb`.`po_runnablefeatureinfo` | 0 |
| `mrsdb`.`po_runparameters` | 0 |
| `mrsdb`.`po_runtimeapplicationstat` | 1 |
| `mrsdb`.`po_runtimefeaturestat` | 2 |
| `mrsdb`.`po_rview` | 0 |
| `mrsdb`.`po_samlconfig` | 3 |
| `mrsdb`.`po_samplingpolic1` | 0 |
| `mrsdb`.`po_samplingpolicy` | 0 |
| `mrsdb`.`po_sapabstractstructure` | 13 |
| `mrsdb`.`po_sapasooperationfield` | 2 |
| `mrsdb`.`po_sapfilter` | 2 |
| `mrsdb`.`po_sapforeignkey` | 11 |
| `mrsdb`.`po_sapindex` | 11 |
| `mrsdb`.`po_sapindexfield` | 21 |
| `mrsdb`.`po_sapprograminfo` | 0 |
| `mrsdb`.`po_saprecordinstance` | 2 |
| `mrsdb`.`po_saptableconnectivityty` | 0 |
| `mrsdb`.`po_saptableresource` | 4 |
| `mrsdb`.`po_sapuniquekey` | 11 |
| `mrsdb`.`po_sarcextendedinfo` | 0 |
| `mrsdb`.`po_schedule` | 29 |
| `mrsdb`.`po_scheduleddisjob` | 0 |
| `mrsdb`.`po_scheduledtaskrunstat` | 1 |
| `mrsdb`.`po_schemaalias` | 0 |
| `mrsdb`.`po_schemaproxy` | 0 |
| `mrsdb`.`po_score` | 0 |
| `mrsdb`.`po_scorecard` | 0 |
| `mrsdb`.`po_scoregroup` | 0 |
| `mrsdb`.`po_sdkconnectbaseattribut` | 107 |
| `mrsdb`.`po_sdkconnectinfoattr` | 3 |
| `mrsdb`.`po_sdkconnectinfomodelext` | 9 |
| `mrsdb`.`po_sdkconnectionattrdef` | 0 |
| `mrsdb`.`po_sdkconnectiontype` | 0 |
| `mrsdb`.`po_sdkdbtype` | 0 |
| `mrsdb`.`po_sdkfieldattr` | 4 |
| `mrsdb`.`po_sdkfieldattrdef` | 0 |
| `mrsdb`.`po_sdkreaderattr` | 5 |
| `mrsdb`.`po_sdkreaderattrdef` | 0 |
| `mrsdb`.`po_sdkreaderextensiontype` | 0 |
| `mrsdb`.`po_sdkrecord` | 4 |
| `mrsdb`.`po_securitydomain` | 0 |
| `mrsdb`.`po_seedednamespacecontent` | 0 |
| `mrsdb`.`po_seededoutputstatsconta` | 0 |
| `mrsdb`.`po_semanticlayeroptions` | 0 |
| `mrsdb`.`po_sequenceflo1` | 3 |
| `mrsdb`.`po_sequenceflow` | 0 |
| `mrsdb`.`po_sequencestate` | 2 |
| `mrsdb`.`po_sequencestateinfo` | 2 |
| `mrsdb`.`po_servicedescriptor` | 0 |
| `mrsdb`.`po_serviceprivilegedef` | 3 |
| `mrsdb`.`po_serviceprocesshourlysu` | 1 |
| `mrsdb`.`po_serviceprocessstat` | 1 |
| `mrsdb`.`po_serviceprocesstupleide` | 0 |
| `mrsdb`.`po_session` | 3 |
| `mrsdb`.`po_setdatevalue` | 0 |
| `mrsdb`.`po_setnumbervalue` | 0 |
| `mrsdb`.`po_setparentchildvalue` | 0 |
| `mrsdb`.`po_setstringvalue` | 0 |
| `mrsdb`.`po_setvalue` | 0 |
| `mrsdb`.`po_sfieldextendedinfo` | 0 |
| `mrsdb`.`po_sgraphextendedinfo` | 0 |
| `mrsdb`.`po_signature` | 0 |
| `mrsdb`.`po_singleemailnotificatio` | 0 |
| `mrsdb`.`po_snodeextendedinfo` | 0 |
| `mrsdb`.`po_sort` | 0 |
| `mrsdb`.`po_sortfield` | 1 |
| `mrsdb`.`po_sortfieldlist` | 1 |
| `mrsdb`.`po_sortke1` | 8 |
| `mrsdb`.`po_sortkey` | 2 |
| `mrsdb`.`po_sortkeyfieldref` | 0 |
| `mrsdb`.`po_sortkeyfieldreflist` | 0 |
| `mrsdb`.`po_sortport` | 4 |
| `mrsdb`.`po_sortsystem` | 0 |
| `mrsdb`.`po_source` | 190 |
| `mrsdb`.`po_sourceconstraint` | 68 |
| `mrsdb`.`po_sourcedatastatistics` | 0 |
| `mrsdb`.`po_sourcefilter` | 1 |
| `mrsdb`.`po_sourcereference` | 0 |
| `mrsdb`.`po_sourcespec` | 1 |
| `mrsdb`.`po_sparkdatabricksexecuti` | 0 |
| `mrsdb`.`po_sparkengineconfig` | 0 |
| `mrsdb`.`po_sparksessionstat` | 1 |
| `mrsdb`.`po_sparksessionstatsummar` | 2 |
| `mrsdb`.`po_sqlendpoint` | 0 |
| `mrsdb`.`po_sqlfeaturestat` | 0 |
| `mrsdb`.`po_sqlindex` | 3 |
| `mrsdb`.`po_sqlindexcolumn` | 7 |
| `mrsdb`.`po_staginginfoattributes` | 0 |
| `mrsdb`.`po_statemen1` | 0 |
| `mrsdb`.`po_statement` | 0 |
| `mrsdb`.`po_statobject` | 25 |
| `mrsdb`.`po_storagesemantic` | 0 |
| `mrsdb`.`po_storedprocedure` | 3 |
| `mrsdb`.`po_strategy` | 0 |
| `mrsdb`.`po_streamingconfig` | 0 |
| `mrsdb`.`po_stringdistanceengine` | 0 |
| `mrsdb`.`po_stringdistanceinput` | 0 |
| `mrsdb`.`po_stringsetvaluerule` | 0 |
| `mrsdb`.`po_stringvalue` | 463 |
| `mrsdb`.`po_structuralfeatureinfo` | 159 |
| `mrsdb`.`po_subscription` | 0 |
| `mrsdb`.`po_substituteholder` | 0 |
| `mrsdb`.`po_tablegroup` | 0 |
| `mrsdb`.`po_tablepair` | 0 |
| `mrsdb`.`po_tablepairjoin` | 0 |
| `mrsdb`.`po_tag` | 0 |
| `mrsdb`.`po_taglinks` | 0 |
| `mrsdb`.`po_target` | 34 |
| `mrsdb`.`po_targetconnectinfo` | 0 |
| `mrsdb`.`po_targetgroup` | 11 |
| `mrsdb`.`po_targetholder` | 4 |
| `mrsdb`.`po_task` | 0 |
| `mrsdb`.`po_taskconfi1` | 39 |
| `mrsdb`.`po_taskconfi2` | 16 |
| `mrsdb`.`po_taskconfig` | 0 |
| `mrsdb`.`po_taskdescriptor` | 39 |
| `mrsdb`.`po_taskinstanc1` | 12 |
| `mrsdb`.`po_taskinstanc2` | 3 |
| `mrsdb`.`po_taskinstance` | 0 |
| `mrsdb`.`po_taskinstancehistory` | 0 |
| `mrsdb`.`po_taskoutputseedcontain1` | 0 |
| `mrsdb`.`po_taskoutputseedcontaine` | 0 |
| `mrsdb`.`po_taskrun` | 0 |
| `mrsdb`.`po_taskrunstat` | 4 |
| `mrsdb`.`po_taskschedulestagestat` | 5 |
| `mrsdb`.`po_taskstat` | 0 |
| `mrsdb`.`po_taskuser` | 107 |
| `mrsdb`.`po_tdgforeignkey` | 0 |
| `mrsdb`.`po_tdgtable` | 0 |
| `mrsdb`.`po_tdguniquekey` | 0 |
| `mrsdb`.`po_testcase` | 0 |
| `mrsdb`.`po_testcasecolumn` | 0 |
| `mrsdb`.`po_testdefinitionbadrecor` | 0 |
| `mrsdb`.`po_testdefinitionparamete` | 0 |
| `mrsdb`.`po_testdefinitionsampling` | 0 |
| `mrsdb`.`po_testdefinitionsource` | 0 |
| `mrsdb`.`po_threshold` | 0 |
| `mrsdb`.`po_timeofday` | 3 |
| `mrsdb`.`po_tokstdop` | 0 |
| `mrsdb`.`po_transform` | 183 |
| `mrsdb`.`po_transformationconfigur` | 0 |
| `mrsdb`.`po_transformationfieldlis` | 4 |
| `mrsdb`.`po_txhandlerconfig` | 0 |
| `mrsdb`.`po_txhandlerconfigassocia` | 6 |
| `mrsdb`.`po_txhandlerconfigpartiti` | 0 |
| `mrsdb`.`po_type` | 1907 |
| `mrsdb`.`po_typeaconnectionattribu` | 0 |
| `mrsdb`.`po_typebconnectionattribu` | 0 |
| `mrsdb`.`po_typeconfig` | 1703 |
| `mrsdb`.`po_typecontainer` | 66 |
| `mrsdb`.`po_typedelement` | 2169 |
| `mrsdb`.`po_typerconnectionattribu` | 0 |
| `mrsdb`.`po_typeruleentry` | 147 |
| `mrsdb`.`po_typesystemmap` | 0 |
| `mrsdb`.`po_udfrefcountdependency` | 258 |
| `mrsdb`.`po_umserviceconfig` | 4 |
| `mrsdb`.`po_uniquekey` | 3 |
| `mrsdb`.`po_useraccountsecuritycon` | 3 |
| `mrsdb`.`po_usercredentials` | 4 |
| `mrsdb`.`po_userdefinedfunction` | 1 |
| `mrsdb`.`po_userinfo` | 11 |
| `mrsdb`.`po_userlocaleinfo` | 0 |
| `mrsdb`.`po_userregularexpression` | 0 |
| `mrsdb`.`po_uumnamespace` | 3 |
| `mrsdb`.`po_value` | 3432 |
| `mrsdb`.`po_valueobjectmodel` | 0 |
| `mrsdb`.`po_valuepattern` | 101 |
| `mrsdb`.`po_valuequalifier` | 526 |
| `mrsdb`.`po_valuetouserstuple` | 0 |
| `mrsdb`.`po_valuetype` | 67 |
| `mrsdb`.`po_variable` | 0 |
| `mrsdb`.`po_variablecontaine1` | 0 |
| `mrsdb`.`po_variablecontainer` | 0 |
| `mrsdb`.`po_variablevalu1` | 12 |
| `mrsdb`.`po_variablevalue` | 0 |
| `mrsdb`.`po_versionhistory` | 0 |
| `mrsdb`.`po_versionmapping` | 0 |
| `mrsdb`.`po_virtualfunctiondefinit` | 0 |
| `mrsdb`.`po_virtualport` | 0 |
| `mrsdb`.`po_virtualprofileablereco` | 0 |
| `mrsdb`.`po_virtualschema` | 99 |
| `mrsdb`.`po_virtualtable` | 0 |
| `mrsdb`.`po_vrldocument` | 0 |
| `mrsdb`.`po_vrlelement` | 0 |
| `mrsdb`.`po_weighteddistanceengine` | 0 |
| `mrsdb`.`po_windowkey` | 1 |
| `mrsdb`.`po_windowspec` | 2 |
| `mrsdb`.`po_workflo1` | 3 |
| `mrsdb`.`po_workflow` | 0 |
| `mrsdb`.`po_workflowfeatureconfig` | 0 |
| `mrsdb`.`po_workflowfeaturestat` | 0 |
| `mrsdb`.`po_workflowstat` | 0 |
| `mrsdb`.`po_workflowsummarystat` | 0 |
| `mrsdb`.`po_workflowuse1` | 1 |
| `mrsdb`.`po_workflowuser` | 0 |
| `mrsdb`.`po_wscpushintomapentry` | 0 |
| `mrsdb`.`po_wsdl` | 0 |
| `mrsdb`.`po_wsendpoint` | 0 |
| `mrsdb`.`po_wsfeaturestat` | 0 |
| `mrsdb`.`po_wsoperationstat` | 0 |
| `mrsdb`.`po_xpath` | 11 |
| `mrsdb`.`po_xsdcontent` | 5 |
| `mrsdb`.`pr_attribute` | 69284 |
| `mrsdb`.`pr_b_rsid_txid` | 0 |
| `mrsdb`.`pr_gannotations` | 6014 |
| `mrsdb`.`pr_resource` | 6979 |
| `mrsdb`.`pr_rsid_txid` | 0 |
+-------------------------------------+--------------+
1029 rows in set (3.98 sec)
+--------------------------+
| TOTAL_DATABASE_RECORD_CT |
+--------------------------+
| 209720 |
+--------------------------+
1 row in set (3.98 sec)
Query OK, 0 rows affected (3.98 sec)
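As a rough cross-check of the total shown above, the approximate row count kept by PostgreSQL's statistics collector can be summed on the source side. This is only a sanity check and an assumption that the statistics are reasonably current; n_live_tup is an estimate, so the per-table counts produced by the row-count script remain the authoritative numbers. Run the same query in each source database for its own total.
mrsdb=# select sum(n_live_tup) as approx_total_rows from pg_stat_user_tables;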
---------------------------------------------------------------------
Step - 18 Final Validation of Row Counts of All Tables and Objects between PostgreSQL and MySQL Databases
---------------------------------------------------------------------
Tables Count :-
-----------------
select table_schema,count(table_name) from information_schema.tables where table_schema not in ('pg_catalog', 'information_schema') and table_type = 'BASE TABLE' GROUP BY table_schema; ==========> PostgreSQL [ connect to each database individually and run; see the loop sketch below ]
SELECT TABLE_SCHEMA,count(TABLE_NAME) AS TOTAL_NUMBER_OF_TABLES FROM information_schema.TABLES WHERE TABLE_TYPE LIKE 'BASE TABLE' AND TABLE_SCHEMA IN ('blob','test_db','mrsdb','migration_db','dvdrental','kidsdb') GROUP BY TABLE_SCHEMA; ===> MySQL
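To avoid connecting to each PostgreSQL database by hand, the table-count query can be wrapped in a small loop on the jump host. A minimal sketch, assuming the psql client and the remote postgres user set up in the earlier steps (adjust host, user and database list to your environment):
# run the table-count query against every source database in one pass
for db in blob dvdrental kidsdb migration_db mrsdb test_db; do
  echo "== $db =="
  psql -h 172.31.30.141 -U postgres -d "$db" -Atc "select table_schema, count(table_name) from information_schema.tables where table_schema not in ('pg_catalog','information_schema') and table_type = 'BASE TABLE' group by table_schema;"
done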
Results :-
--------
In PostgreSQL :-
-----------------
blob=# select table_schema,count(table_name) from information_schema.tables where table_schema not in ('pg_catalog', 'information_schema') and table_type = 'BASE TABLE' GROUP BY table_schema;
table_schema | count
--------------+-------
public | 1
(1 row)
dvdrental=# select table_schema,count(table_name) from information_schema.tables where table_schema not in ('pg_catalog', 'information_schema') and table_type = 'BASE TABLE' GROUP BY table_schema;
table_schema | count
--------------+-------
public | 15
(1 row)
kidsdb=# select table_schema,count(table_name) from information_schema.tables where table_schema not in ('pg_catalog', 'information_schema') and table_type = 'BASE TABLE' GROUP BY table_schema;
table_schema | count
--------------+-------
public | 1
(1 row)
migration_db=# select table_schema,count(table_name) from information_schema.tables where table_schema not in ('pg_catalog', 'information_schema') and table_type = 'BASE TABLE' GROUP BY table_schema;
table_schema | count
--------------+-------
public | 2
(1 row)
mrsdb=# select table_schema,count(table_name) from information_schema.tables where table_schema not in ('pg_catalog', 'information_schema') and table_type = 'BASE TABLE' GROUP BY table_schema;
table_schema | count
--------------+-------
public | 847
(1 row)
test_db=# select table_schema,count(table_name) from information_schema.tables where table_schema not in ('pg_catalog', 'information_schema') and table_type = 'BASE TABLE' GROUP BY table_schema;
table_schema | count
--------------+-------
public | 6
(1 row)
In MySQL:-
-------------
mysql> SELECT TABLE_SCHEMA,count(TABLE_NAME) AS TOTAL_NUMBER_OF_TABLES FROM information_schema.TABLES WHERE TABLE_TYPE LIKE 'BASE TABLE' AND TABLE_SCHEMA IN ('blob','test_db','mrsdb','migration_db','dvdrental','kidsdb') GROUP BY TABLE_SCHEMA;
+--------------+------------------------+
| TABLE_SCHEMA | TOTAL_NUMBER_OF_TABLES |
+--------------+------------------------+
| blob | 1 |
| dvdrental | 15 |
| kidsdb | 1 |
| migration_db | 2 |
| mrsdb | 847 |
| test_db | 6 |
+--------------+------------------------+
6 rows in set (0.01 sec)
Views Count :-
---------------
select count(table_name) from INFORMATION_SCHEMA.views WHERE table_schema = ANY (current_schemas(false)); ===> PostgreSQL [ connect to each database individually and run ]
SELECT TABLE_SCHEMA,count(TABLE_NAME) FROM information_schema.TABLES WHERE TABLE_TYPE LIKE 'VIEW' AND TABLE_SCHEMA IN ('blob','test_db','mrsdb','migration_db','dvdrental','kidsdb') GROUP BY TABLE_SCHEMA; ==> MySQL
Results :-
--------
In PostgreSQL :-
-----------------
postgres=# \c blob
You are now connected to database "blob" as user "postgres".
blob=# select count(table_name) from INFORMATION_SCHEMA.views WHERE table_schema = ANY (current_schemas(false));
count
-------
0
(1 row)
dvdrental=# select count(table_name) from INFORMATION_SCHEMA.views WHERE table_schema = ANY (current_schemas(false));
count
-------
7
(1 row)
kidsdb=# select count(table_name) from INFORMATION_SCHEMA.views WHERE table_schema = ANY (current_schemas(false));
count
-------
0
(1 row)
migration_db=# select count(table_name) from INFORMATION_SCHEMA.views WHERE table_schema = ANY (current_schemas(false));
count
-------
0
(1 row)
mrsdb=# select count(table_name) from INFORMATION_SCHEMA.views WHERE table_schema = ANY (current_schemas(false));
count
-------
182
(1 row)
test_db=# select count(table_name) from INFORMATION_SCHEMA.views WHERE table_schema = ANY (current_schemas(false));
count
-------
0
(1 row)
In MySQL :-
-------------
mysql> SELECT TABLE_SCHEMA,count(TABLE_NAME) FROM information_schema.TABLES WHERE TABLE_TYPE LIKE 'VIEW' AND TABLE_SCHEMA IN ('blob','test_db','mrsdb','migration_db','dvdrental','kidsdb') GROUP BY TABLE_SCHEMA;
+--------------+-------------------+
| TABLE_SCHEMA | count(TABLE_NAME) |
+--------------+-------------------+
| dvdrental | 7 |
| mrsdb | 182 |
+--------------+-------------------+
2 rows in set (0.00 sec)
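When the view counts differ, a name-level comparison pins down the missing object. A minimal sketch from the jump host, shown here for mrsdb and assuming the same connection details as the row-count script (migration_user is a placeholder for the remote MySQL user created in Step - 5):
# dump the view names from both sides and diff them; empty diff output means the lists match
psql  -h 172.31.30.141 -U postgres -d mrsdb -Atc "select table_name from information_schema.views where table_schema = 'public' order by 1;" > /tmp/pg_views_mrsdb.txt
mysql -h 172.31.17.136 -u migration_user -p -N -B -e "select table_name from information_schema.views where table_schema = 'mrsdb' order by 1;" > /tmp/mysql_views_mrsdb.txt
diff /tmp/pg_views_mrsdb.txt /tmp/mysql_views_mrsdb.txt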
Functions Count :-
--------------------------------------
select routine_type,routine_schema,count(routine_name) from information_schema.routines where routine_schema not in ('pg_catalog', 'information_schema') and routine_type='FUNCTION' and specific_name!='last_updated_16407' GROUP BY routine_schema,routine_type; ===> PostgreSQL [ connect to each database individually and run ]
select ROUTINE_SCHEMA,count(ROUTINE_NAME),ROUTINE_TYPE from information_schema.ROUTINES where ROUTINE_SCHEMA IN('blob','test_db','mrsdb','migration_db','dvdrental','kidsdb') GROUP BY ROUTINE_SCHEMA; ===> MySQL
Results :-
PostgreSQL :-
------------------
dvdrental=# select routine_type,routine_schema,count(routine_name) from information_schema.routines where routine_schema not in ('pg_catalog', 'information_schema') and routine_type='FUNCTION' and specific_name!='last_updated_16407' GROUP BY routine_schema,routine_type;
routine_type | routine_schema | count
--------------+----------------+-------
FUNCTION | public | 8
(1 row)
blob=# select routine_type,routine_schema,count(routine_name) from information_schema.routines where routine_schema not in ('pg_catalog', 'information_schema') and routine_type='FUNCTION' GROUP BY routine_schema,routine_type;
routine_type | routine_schema | count
--------------+----------------+-------
FUNCTION | public | 1
(1 row)
In MySQL :-
------------------
mysql> select ROUTINE_SCHEMA,count(ROUTINE_NAME),ROUTINE_TYPE from information_schema.ROUTINES where ROUTINE_SCHEMA IN('blob','test_db','mrsdb','migration_db','dvdrental','kidsdb') GROUP BY ROUTINE_SCHEMA;
+----------------+---------------------+--------------+
| ROUTINE_SCHEMA | count(ROUTINE_NAME) | ROUTINE_TYPE |
+----------------+---------------------+--------------+
| blob | 1 | PROCEDURE |
| dvdrental | 8 | FUNCTION |
+----------------+---------------------+--------------+
2 rows in set (0.00 sec)
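Note that the MySQL query above selects ROUTINE_TYPE while grouping only by ROUTINE_SCHEMA. With MySQL 8.0's default sql_mode (which includes ONLY_FULL_GROUP_BY) such a query can be rejected with error 1055, and even where it runs, the type shown for a schema containing mixed routines is arbitrary. A safer variant groups by both columns and makes the FUNCTION / PROCEDURE split explicit:
mysql> SELECT ROUTINE_SCHEMA, ROUTINE_TYPE, COUNT(ROUTINE_NAME) AS ROUTINE_CT FROM information_schema.ROUTINES WHERE ROUTINE_SCHEMA IN ('blob','test_db','mrsdb','migration_db','dvdrental','kidsdb') GROUP BY ROUTINE_SCHEMA, ROUTINE_TYPE;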
mysql> SELECT `ROUTINE_SCHEMA` AS `database`,`ROUTINE_TYPE` AS `type`,`SPECIFIC_NAME` AS `name`,`DTD_IDENTIFIER` AS `data_type` FROM `INFORMATION_SCHEMA`.`ROUTINES` WHERE `definer` LIKE CONCAT('%', CONCAT((SUBSTRING_INDEX((SELECT user()), '@', 1)), '%')) ORDER BY `database` ,`type` ,`name`;
+-----------+-----------+----------------------------+---------------+
| database  | type      | name                       | data_type     |
+-----------+-----------+----------------------------+---------------+
| blob      | PROCEDURE | bytea_import               | NULL          |
| dvdrental | FUNCTION  | get_customer_balance       | decimal(10,0) |
| dvdrental | FUNCTION  | inventory_held_by_customer | int           |
| dvdrental | FUNCTION  | inventory_in_stock         | tinyint(1)    |
| dvdrental | PROCEDURE | film_in_stock              | NULL          |
| dvdrental | PROCEDURE | film_not_in_stock          | NULL          |
| dvdrental | PROCEDURE | last_day                   | NULL          |
| dvdrental | PROCEDURE | rewards_report             | NULL          |
| dvdrental | PROCEDURE | _group_concat              | NULL          |
+-----------+-----------+----------------------------+---------------+
9 rows in set (0.01 sec)
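For a side-by-side comparison with the listing above, the routine names and return types can be pulled from the PostgreSQL side as well. A minimal sketch, run per source database (the schema filter mirrors the count query used earlier):
dvdrental=# select routine_schema, routine_type, routine_name, data_type from information_schema.routines where routine_schema not in ('pg_catalog','information_schema') order by routine_schema, routine_type, routine_name;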
Tables_Count = Passed For all Schemas
Views_Count = Passed For all Schemas
Table_Row_Count = Passed For all Schemas
Functions_Count = Passed For all Schemas
Triggers_Count = Passed For all Schemas [ see the cross-check queries below ]
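For reference, the trigger counts ticked off above can be reproduced with queries analogous to the table and view checks; information_schema exposes triggers on both sides. On PostgreSQL a trigger defined for several events appears once per event, hence the count(distinct ...), and the query is run per source database:
mrsdb=# select trigger_schema, count(distinct trigger_name) from information_schema.triggers where trigger_schema not in ('pg_catalog','information_schema') group by trigger_schema;
mysql> SELECT TRIGGER_SCHEMA, COUNT(TRIGGER_NAME) AS TRIGGER_CT FROM information_schema.TRIGGERS WHERE TRIGGER_SCHEMA IN ('blob','test_db','mrsdb','migration_db','dvdrental','kidsdb') GROUP BY TRIGGER_SCHEMA;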
........... Completed PostgreSQL to MySQL Migration ...........
Thank you All .....