Category: Database Administration

Covering NoSQL, Relational Databases, Data Visualization, and Reporting.

  • What does Facebook consider an average day's worth of data?

    Well, according to this article from gigaom.com, an average day looks something like this:

    • 2.5 billion content items shared per day (status updates + wall posts + photos + videos + comments)
    • 2.7 billion Likes per day
    • 300 million photos uploaded per day
    • 100+ petabytes of disk space in one of FB’s largest Hadoop (HDFS) clusters
    • 105 terabytes of data scanned via Hive, Facebook’s Hadoop query language, every 30 minutes
    • 70,000 queries executed on these databases per day
    • 500+ terabytes of new data ingested into the databases every day

    I also love this quote from the VP of Infrastructure.

    “If you aren’t taking advantage of big data, then you don’t have big data, you have just a pile of data,” said Jay Parikh, VP of infrastructure at Facebook on Wednesday. “Everything is interesting to us.”

  • Big Data for Small Business

    I have said it before and will say it again: you don’t have to be a Fortune 500 company to use Big Data. Big Data is more about understanding your data than about how big it is. It is about understanding all your different data sources and gathering them into one place so that you can analyze and understand them better.

    http://www.pcworld.com/article/2047486/how-small-businesses-can-mine-big-data.html

  • What's New with MongoDB Hadoop Integration

    I attended this webinar yesterday and it was pretty good. I like how straightforward the interaction between MongoDB and Hadoop is. Check it out if you get a chance.

    http://www.10gen.com/presentations/webinar-whats-new-mongodb-hadoop-integration

  • Wrangling Customer Usage Data with Hadoop

    Here is our session from the Hadoop Summit 2013.

     

    Title: Wrangling Customer Usage Data with Hadoop

    Slides: http://www.slideshare.net/Hadoop_Summit/hall-johnson-june271100amroom211v2

    Description:

    At Clearwire we have a big data challenge: processing millions of unique usage records comprising terabytes of data for millions of customers every week. Historically, massive purpose-built database solutions were used to process the data, but they weren’t particularly fast, nor did they lend themselves to analysis. As mobile data volumes increased exponentially, we needed a scalable solution that could process usage data for billing, provide a data analysis platform, and inexpensively store the data indefinitely. The solution? A Hadoop-based platform, which allowed us to architect and deploy an end-to-end solution based on a combination of physical data nodes and virtual edge nodes in less than six months. This solution allowed us to turn off our legacy usage processing system and reduce processing times from hours to as little as 15 minutes. This improvement has enabled Clearwire to deliver actionable usage data to partners faster and more predictably than ever before. Usage processing was just the beginning; we’re now turning to the raw data stored in Hadoop, adding new data sources, and starting to analyze the data. Clearwire is now able to put multiple data sources in the hands of our analysts for further discovery and actionable intelligence.

     

  • Windows Azure HDInsight (Hadoop on Windows)

    Lately a lot of my co-workers have been asking me if Hadoop runs on Windows. After going to the Hadoop Summit last month, I have been able to tell them about Azure HDInsight, which is basically Apache Hadoop running on Windows Azure.

    It appears that Microsoft has been working with Hortonworks to bring Apache Hadoop to Windows, and here is the end product.

    http://www.windowsazure.com/en-us/documentation/services/hdinsight/

    So if you are interested in Hadoop on Windows, check it out.

  • Hadoop to Hadoop Copy

    Recently I needed to copy the contents of one Hadoop cluster to another for geo-redundancy. Thankfully, instead of having to write something to do it myself, Hadoop supplies a handy tool for the job: DistCp (distributed copy).

     

    DistCp is a tool used for large inter/intra-cluster copying. It uses Map/Reduce to effect its distribution, error handling and recovery, and reporting. It expands a list of files and directories into input to map tasks, each of which will copy a partition of the files specified in the source list. Its Map/Reduce pedigree has endowed it with some quirks in both its semantics and execution. The purpose of this document is to offer guidance for common tasks and to elucidate its model.

     

    Here are the basics for using it:

     

    bash$ hadoop distcp hdfs://nn1:8020/foo/bar \
                        hdfs://nn2:8020/bar/foo

     

    This will expand the namespace under /foo/bar on nn1 into a temporary file, partition its contents among a set of map tasks, and start a copy on each TaskTracker from nn1 to nn2. Note that DistCp expects absolute paths.

     

    Here is how you can handle multiple source directories on the command line:

     

    bash$ hadoop distcp hdfs://nn1:8020/foo/a \
                        hdfs://nn1:8020/foo/b \
                        hdfs://nn2:8020/bar/foo
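
    Since my use case is geo-redundancy, the copy gets re-run on a schedule, so the -update flag is worth knowing about. Here is a minimal sketch reusing the nn1/nn2 name nodes from above (check the DistCp docs for your release, since the exact option set varies between versions):

    bash$ hadoop distcp -update \
                        hdfs://nn1:8020/foo/bar \
                        hdfs://nn2:8020/bar/foo

    With -update, files that already exist at the destination with the same size are skipped, so repeated runs only move what changed; -overwrite forces everything to be copied again.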

  • Hortonworks Road Show "Big Business Value from Big Data and Hadoop"

    This morning I went to the Hortonworks Road Show. It wasn’t bad. I have to say, out of the Hadoop vendors I have talked to, I like Hortonworks’ business model the best.

    The fact that they are a large committer to the Apache Hadoop project, along with several other sub-projects such as the Apache Ambari project, doesn’t hurt. They seem to be more community-based than the others. If you have a chance, or know someone who would like a good introduction to Hadoop, I would recommend that they go.

    http://info.hortonworks.com/RoadShowFall2012.html?mktotrk=roadshow

    –Peace

  • Working with Hadoop Streaming

    Hadoop streaming is a utility that comes with the Hadoop distribution. The utility allows you to create and run map/reduce jobs with any executable or script as the mapper and/or the reducer. For example:

    shell> $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-streaming.jar  -input myInputDirs -output myOutputDir -mapper /bin/cat -reducer /bin/wc

    If you are using the tar package from Apache Hadoop, you can find hadoop-streaming.jar at $HADOOP_HOME/contrib/streaming/hadoop-streaming-xxx.jar.
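
    To make the "any executable or script" point concrete, here is a minimal word-count sketch using two small shell scripts as the mapper and the reducer. This is just an illustration: mapper.sh, reducer.sh, and myWordCountDir are hypothetical names, myInputDirs comes from the example above, and the -file options ship the local scripts out to the cluster.

    mapper.sh:
    #!/bin/bash
    # For every word on every input line, emit "word<TAB>1"
    awk '{ for (i = 1; i <= NF; i++) print $i "\t1" }'

    reducer.sh:
    #!/bin/bash
    # Streaming hands the reducer "word<TAB>1" lines sorted by word; sum the counts per word
    awk -F '\t' '{ counts[$1] += $2 } END { for (w in counts) print w "\t" counts[w] }'

    shell> chmod +x mapper.sh reducer.sh
    shell> $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-streaming.jar \
               -input myInputDirs -output myWordCountDir \
               -mapper mapper.sh -reducer reducer.sh \
               -file mapper.sh -file reducer.sh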

  • Amazon Relational Database Service (Amazon RDS)

    It appears that Amazon is introducing a new service specifically targeted at relational databases. You can choose from MySQL, Oracle, and Microsoft SQL Server.

    Amazon Relational Database Service (Amazon RDS) is a web service that makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you up to focus on your applications and business.
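
    If you want to try it from the command line, here is a minimal sketch using the AWS CLI (this assumes the CLI is installed and configured with your credentials; the instance identifier, class, and password below are placeholders to replace with your own):

    shell> aws rds create-db-instance \
               --db-instance-identifier mytestdb \
               --engine mysql \
               --db-instance-class db.t2.micro \
               --allocated-storage 20 \
               --master-username admin \
               --master-user-password 'ChangeMe123!'

    Amazon handles the underlying server, backups, and patching; once the instance is available, you connect to its endpoint with a normal MySQL client.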