Installing MariaDB 10.1 on CentOS 6.8

MariaDB is a fork of MySQL; it is notable for being community-developed and led by the original developers of MySQL, who forked it due to concerns over MySQL's acquisition by Oracle.

MariaDB intends to be a “drop-in” replacement for MySQL, maintaining binary compatibility with MySQL libraries and matching MySQL APIs and commands. This makes it extremely easy for current MySQL users and administrators to switch over with little to no difference in how they use it.

It includes the XtraDB storage engine, an enhanced version of the InnoDB storage engine. XtraDB is designed to scale better on modern hardware and includes a variety of other features useful in high-performance environments. To top it off, XtraDB is backwards compatible with standard InnoDB, making it a good “drop-in” replacement.

Installation is pretty straightforward and very similar to installing MySQL. I prefer to install packages with yum, so the first thing to do is add the MariaDB yum repo.

Pick your favorite editor and create the following file:
/etc/yum.repos.d/MariaDB.repo

# MariaDB 10.1 CentOS repository list - created 2017-03-03 18:33 UTC
# http://downloads.mariadb.org/mariadb/repositories/
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.1/centos6-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1

Now run the following.

[rhosto@localhost ~]$ sudo yum clean all

[rhosto@localhost ~]$ sudo yum install MariaDB-server MariaDB-client

Now we can start the service.

[rhosto@localhost ~]$ sudo service mysql start

Next I strongly recommend running ‘/usr/bin/mysql_secure_installation’, which will set the MariaDB root user password and give you the option of removing the test database and the anonymous users created by default.

[rhosto@localhost ~]$ sudo /usr/bin/mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none): 
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] y
New password: 
Re-enter new password: 
Password updated successfully!
Reloading privilege tables..
... Success!


By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] Y
... Success!

Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] Y
... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] Y
- Dropping test database...
... Success!
- Removing privileges on test database...
... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] Y
... Success!

Cleaning up...

All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

Now verify that it will start up on reboot.

[rhosto@localhost ~]$ sudo chkconfig --list mysql
mysql 0:off 1:off 2:on 3:on 4:on 5:on 6:off

And you are good to go.

[rhosto@localhost ~]$ mysql -u root -p
Enter password: 
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 11
Server version: 10.1.21-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>
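
Since MariaDB aims to be a drop-in replacement, existing MySQL client libraries should connect without changes. As a quick sanity check, here is a minimal sketch using Python's MySQLdb module (this assumes the MySQL-python package is installed; the password is a placeholder for whatever you set during mysql_secure_installation):

import MySQLdb

# Placeholder credentials; use the root password you set above.
conn = MySQLdb.connect(host='localhost', user='root', passwd='secret')
cursor = conn.cursor()
cursor.execute("SELECT VERSION()")
print cursor.fetchone()[0]  # e.g. 10.1.21-MariaDB
conn.close()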

Querying Apache Hadoop Resource Manager with Python.

I was recently asked to write a script that would monitor the running applications on the Apache Hadoop Resource Manager.

I wandered over to the Apache Hadoop Cluster Application Statistics API. The API allows you to query most of the information that you see in the web UI: the status of the cluster, metrics on the cluster, scheduler information, information about nodes in the cluster, and information about applications on the cluster.

I started by querying the cluster info.

import urllib2
import json

resource_manager = 'http://resourcemanager:8088'

info_url = resource_manager+"/ws/v1/cluster/info"

request = urllib2.Request(info_url)

'''
If you prefer to work with xml replace json below with xml
'''
request.add_header('Accept', 'application/json')

response = urllib2.urlopen(request)
data = json.loads(response.read())

print json.dumps(data, sort_keys=True, indent=4, separators=(',', ': '))

This returns the following:

{
    "clusterInfo": {
        "haState": "ACTIVE",
        "hadoopBuildVersion": "2.6.0-cdh5.7.0 from c00978c67b0d3fe9f3b896b5030741bd40bf541a by jenkins source checksum b2eabfa328e763c88cb14168f9b372",
        "hadoopVersion": "2.6.0-cdh5.7.0",
        "hadoopVersionBuiltOn": "2016-03-23T18:36Z",
        "id": 1478120586043,
        "resourceManagerBuildVersion": "2.6.0-cdh5.7.0 from c00978c67b0d3fe9f3b896b5030741bd40bf541a by jenkins source checksum deb0fdfede32bbbb9cfbda6aa7e380",
        "resourceManagerVersion": "2.6.0-cdh5.7.0",
        "resourceManagerVersionBuiltOn": "2016-03-23T18:43Z",
        "rmStateStoreName": "org.apache.hadoop.yarn.server.resourcemanager.recovery.NullRMStateStore",
        "startedOn": 1478120586043,
        "state": "STARTED"
    }
}

Now onto what I needed to do: querying the Resource Manager about running applications. The Cluster Applications API allows you to collect information on a collection of resources, each of which represents an application. There are multiple parameters that can be specified to filter the results; for the full list, see the Cluster_Applications_API documentation.
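
To give a feel for the filtering, here is a minimal sketch that combines a few of the documented parameters (states, user, and limit); the values are just placeholders:

import urllib
import urllib2
import json

resource_manager = 'http://resourcemanager:8088'

# Combine several filters into the query string; the values are placeholders.
params = urllib.urlencode({'states': 'running', 'user': 'hdfs', 'limit': 10})

request = urllib2.Request(resource_manager + "/ws/v1/cluster/apps?" + params)
request.add_header('Accept', 'application/json')

response = urllib2.urlopen(request)
data = json.loads(response.read())

print json.dumps(data, sort_keys=True, indent=4, separators=(',', ': '))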

I, however, just need the information on running applications, which looks something like this:

import urllib2
import json

resource_manager = 'http://resourcemanager:8088'

info_url = resource_manager+"/ws/v1/cluster/apps?states=running"

request = urllib2.Request(info_url)

'''
If you prefer to work with xml replace json below with xml
'''
request.add_header('Accept', 'application/json')

response = urllib2.urlopen(request)
data = json.loads(response.read())

print json.dumps(data, sort_keys=True, indent=4, separators=(',', ': '))

which returns something like:

{
    "apps": {
        "app": [
            {
                "allocatedMB": 24576,
                "allocatedVCores": 3,
                "amContainerLogs": "http://resourcemanager:8042/node/containerlogs/container_1478120586043_15232_01_000001/hdfs",
                "amHostHttpAddress": "resourcemanager:8042",
                "applicationTags": "",
                "applicationType": "MAPREDUCE",
                "clusterId": 1478120586043,
                "diagnostics": "",
                "elapsedTime": 18009,
                "finalStatus": "UNDEFINED",
                "finishedTime": 0,
                "id": "application_1478120586043_15232",
                "logAggregationStatus": "NOT_START",
                "memorySeconds": 431865,
                "name": "SELECT 1 AS `number_of_records...TIMESTAMP))(Stage-1)",
                "numAMContainerPreempted": 0,
                "numNonAMContainerPreempted": 0,
                "preemptedResourceMB": 0,
                "preemptedResourceVCores": 0,
                "progress": 54.07485,
                "queue": "root.hdfs",
                "runningContainers": 3,
                "startedTime": 1479156085020,
                "state": "RUNNING",
                "trackingUI": "ApplicationMaster",
                "trackingUrl": "http://resourcemanager:8088/proxy/application_1478120586043_15232/",
                "user": "hdfs",
                "vcoreSeconds": 51
            }
        ]
    }
}

Straightforward and simple to use.
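
For the monitoring script itself, I wrapped the call in a small helper. One caveat: in my experience the Resource Manager returns "apps": null when no applications match, so the sketch below (the helper name is my own) guards against that:

import urllib2
import json

def running_apps(resource_manager):
    # Return the list of running applications, or an empty list if none match.
    request = urllib2.Request(resource_manager + "/ws/v1/cluster/apps?states=running")
    request.add_header('Accept', 'application/json')
    data = json.loads(urllib2.urlopen(request).read())
    apps = data.get('apps') or {}
    return apps.get('app', [])

for app in running_apps('http://resourcemanager:8088'):
    print "%s %s %.1f%% (user=%s, queue=%s)" % (
        app['id'], app['state'], app['progress'], app['user'], app['queue'])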

MongoDB Script for counting records in collections in all the databases

Here is a quick script I wrote for a co-worker.

var host = "localhost";
var port = 27000;

// Get the list of databases via the admin command.
var dbslist = db.adminCommand('listDatabases');

for (var d = 0; d < dbslist.databases.length; d++) {
    // Connect to each database in turn.
    var db = connect(host + ":" + port + "/" + dbslist.databases[d].name);
    var collections = db.getCollectionNames();
    for (var i = 0; i < collections.length; i++) {
        var name = collections[i];
        // Skip the system.* collections.
        if (name.substr(0, 6) != 'system') {
            print("\t" + dbslist.databases[d].name + "." + name + ' = ' + db[name].count() + ' records');
        }
    }
}
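
To run it, save the script to a file (the name countRecords.js is just an example) and point the mongo shell at it, e.g. "mongo --host localhost --port 27000 countRecords.js". The host and port on the command line should match the host and port variables at the top of the script.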


Apache Oozie – Shell Script Example.

Recently I needed the ability to let users submit jobs that required passing arguments to a shell script. While it's easy enough to submit a job using a web UI like HUE, I wanted to tie it into a homegrown SaaS solution we were developing to let developers load datasets into a database for testing.

Since I was already using Hadoop and Sqoop to store and load the datasets, and I didn't want to reinvent the wheel, I decided to use the Oozie instance I had already installed to handle some of the Hadoop ETL jobs.

I started off by creating a working directory on HDFS.

hdfs dfs -mkdir -p /user/me/oozie-scripts/OozieTest

Next I created a simple shell script that takes two parameters. For testing, I decided to use curl to retrieve a CSV from google.com and then copy it to HDFS. Keep in mind that any application that you use in your shell script needs to be installed on your data nodes.

#!/bin/bash
# Download the URL ($1) to a local file ($2), then copy it to the same path on HDFS.
/usr/bin/curl "$1" -o "$2"
/usr/bin/hdfs dfs -copyFromLocal "$2" "$2"

Now I copied the script to the working directory on HDFS.

shell> hdfs dfs -copyFromLocal GetCSVData.sh /user/me/oozie-scripts/OozieTest

Next I created a simple workflow.xml template to handle the Oozie job. Oozie requires the file to be named workflow.xml. It defines the actions and the parameters for those actions.

<workflow-app name="GetCSVData" xmlns="uri:oozie:workflow:0.4">
    <start to="GetCSVData"/>
    <action name="GetCSVData">
        <shell xmlns="uri:oozie:shell-action:0.1">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <exec>GetCSVData.sh</exec>
            <argument>${url}</argument>
            <argument>${output}</argument>
            <file>/user/me/oozie-scripts/OozieTest/GetCSVData.sh#GetCSVData.sh</file>
        </shell>
        <ok to="end"/>
        <error to="kill"/>
    </action>
    <kill name="kill">
        <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>

The important part of this is <shell xmlns="uri:oozie:shell-action:0.1">, which defines the type of action and the requirements for it.

This sets which job tracker will run the job: <job-tracker>${jobTracker}</job-tracker>

The name node where everything is stored: <name-node>${nameNode}</name-node>

The name of the shell script to be executed: <exec>GetCSVData.sh</exec>

The first argument to pass to the shell script: <argument>${url}</argument>

The second argument to pass to the shell script: <argument>${output}</argument>

The location of the shell script on HDFS: <file>/user/me/oozie-scripts/OozieTest/GetCSVData.sh#GetCSVData.sh</file>

Now we will need to create a properties file named “oozietest.properties” for submitting the Oozie job. This basically fills in all the variables in the workflow.xml.

oozie.wf.application.path=hdfs://localhost:8020/user/me/oozie-scripts/OozieTest
jobTracker=localhost:8032
nameNode=hdfs://localhost:8020
url=http://www.google.com/finance/historical?q=NYSE%3ADATA&ei=TH0mVsrWBce7iwLE86_ABw&output=csv
output=/tmp/DATA.csv

The oozie.wf.application.path is the working directory on HDFS that contains the workflow.xml, whereas the rest are key-value pairs that fill in those variables.

Now all we need to do is submit the job.

shell> oozie job -oozie http://localhost:11000/oozie -config oozietest.properties -run

This should generate a job ID, which we can use to check the status of the job we submitted.

shell> oozie job -oozie http://dbz-datarepo-app02:11000/oozie -info [JOB_ID]
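
If you would rather check the status programmatically, Oozie also exposes its functionality over HTTP. Here is a minimal sketch in the same Python style as the earlier posts, assuming the standard /v1/job/[JOB_ID]?show=info web services endpoint (the job ID is a placeholder):

import urllib2
import json

oozie_url = 'http://localhost:11000/oozie'
job_id = '[JOB_ID]'  # placeholder: use the ID printed when you submitted the job

request = urllib2.Request(oozie_url + "/v1/job/" + job_id + "?show=info")
response = urllib2.urlopen(request)
info = json.loads(response.read())

print info['status']  # e.g. RUNNING, SUCCEEDED, KILLED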

For more information, check out Apache Oozie.