Cloudera Impala: an Exciting Technology

Cloudera Impala is an exciting Apache Hadoop-based technology for running fast SQL queries on big data. Historically people have used Apache Hive, part of the Hadoop tool set, but queries over substantial data sets can take a long time to run because Hive turns the queries into MapReduce jobs and runs them on Hadoop. Impala is a massively parallel query engine: it chops the data and the query into chunks and spreads them across the cluster, which lets queries run dramatically faster. A query can complete in seconds where it might take an hour in Hive. Essentially, Impala provides a subset of the functionality that Hive provides. It does not support the map, list, set or JSON data types, for instance, so things you might do with Hive's SerDe functionality may not be possible in Impala. Some of Hive's data transformation features also aren't supported, and some of the DML functionality, such as UPDATE and DELETE, is missing. You can connect to Impala with a HiveServer2 driver pointed at the Impala-specific port, using ODBC, JDBC, and similar tools.
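As a rough illustration, here is a minimal sketch of connecting to Impala over JDBC with the HiveServer2 driver. The host name, the database, the `web_logs` table, and the `auth=noSasl` setting are placeholders for an unsecured test cluster, not values from any particular setup; 21050 is Impala's usual HiveServer2-protocol port.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ImpalaQuery {
    public static void main(String[] args) throws Exception {
        // Load the HiveServer2 JDBC driver (the exact class name depends on the driver jar you use).
        Class.forName("org.apache.hive.jdbc.HiveDriver");

        // "impala-host" is a placeholder; 21050 is Impala's usual HiveServer2-compatible port.
        try (Connection conn = DriverManager.getConnection("jdbc:hive2://impala-host:21050/default;auth=noSasl");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM web_logs")) {
            while (rs.next()) {
                System.out.println("rows: " + rs.getLong(1));
            }
        }
    }
}
```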

Impala prefers the Parquet storage format, a column-oriented compressed binary format, though it can also create and insert into tables in text formats. It can query Avro, RCFile, and SequenceFile tables, but it can't insert into them. One particular issue when working with Impala alongside Hive on Parquet tables: timestamp and decimal columns are not supported in earlier versions of Hive, though support is planned for Hive 0.14, which is being tested at present.
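For illustration, here is a minimal sketch of creating a Parquet table in Impala from an existing text-format table, reusing a JDBC Statement like the one in the sketch above. The table and column names are made up for the example.

```java
// Hypothetical table and column names; "stmt" is a java.sql.Statement on an Impala connection.
// Create a Parquet copy of a text-format table with CREATE TABLE ... AS SELECT.
stmt.execute(
    "CREATE TABLE web_logs_parquet STORED AS PARQUET AS " +
    "SELECT ip, url, status, bytes FROM web_logs_text");

// Impala can also insert into an existing Parquet table directly.
stmt.execute(
    "INSERT INTO web_logs_parquet SELECT ip, url, status, bytes FROM web_logs_text");
```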

The big data SQL field has been changing rapidly of late, though, with Hive on Tez (which Hive 0.13 will support), Spark SQL, and Facebook's Presto engine.


Twitter4J + Twitter + Hadoop

I have been doing a bit of work with Hadoop of late in my work life, mainly using streaming MapReduce and Pig to extract additional data out of weblogs, which is a powerful paradigm. Before the election I wanted to develop a way to look at Twitter data during the election period. Twitter is a communication tool that is often trivialized, but it is a powerful way to promote ideas and for mass sentiment to be made known.

Twitter has a powerful streaming API that pushes data to the client in large volumes. PHP is a tool I have often used for rapid development, but it usually lacks a multi-threaded model and libraries that implement features like Twitter's streaming API. Twitter4J is a good Java library (it also works with Android) that works well with Twitter, and it allowed me to capture a significant amount of data for analysis. The code had matured significantly by the time the town hall debate took place, which led to capturing good quality data. This run used the query stream, which allowed me to filter the global data set that Twitter is, limiting it to the United States and topics relating to the debate and presidential election. Wanting to do more work with Hadoop's Java libraries and features, I wrote the Hadoop MapReduce jobs in Java and set up a single pseudo-distributed node to process the data. These are the results imported into Google spreadsheets.
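A rough sketch of the kind of capture code involved, using Twitter4J's streaming API with a filter query. The keywords, the continental-US bounding box, and the tab-separated output are stand-ins for the example, not the actual values from my run.

```java
import twitter4j.FilterQuery;
import twitter4j.Status;
import twitter4j.StatusAdapter;
import twitter4j.TwitterStream;
import twitter4j.TwitterStreamFactory;

public class DebateCapture {
    public static void main(String[] args) {
        // OAuth credentials are read from twitter4j.properties on the classpath.
        TwitterStream stream = new TwitterStreamFactory().getInstance();

        // Emit each matching status; in a real run these would be written to files for Hadoop to process.
        stream.addListener(new StatusAdapter() {
            @Override
            public void onStatus(Status status) {
                System.out.println(status.getUser().getScreenName() + "\t"
                        + status.getSource() + "\t"
                        + status.getText().replace('\n', ' '));
            }
        });

        // Hypothetical filter: debate-related keywords plus a rough continental-US bounding box.
        FilterQuery query = new FilterQuery()
                .track("debate", "election2012")
                .locations(new double[][] { { -125.0, 24.0 }, { -66.0, 50.0 } });
        stream.filter(query);
    }
}
```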

Great visualizations with D3

I’ve been seeing a lot of amazing infographics and visualizations. I was at a conference presented by Actuate, the BI company. They presented a talk on visualizations, in part because they had purchased a company to help them compete with products like Tableau, whose product helps with visualizing data. The discussion covered D3, a JavaScript HTML5 library that sites like The New York Times use to build some of the wonderful graphics they do. You can see from the samples gallery some of the amazing things you can do with it. If you have good skills with CSS and JavaScript, you can create very dynamic graphics for the projects you are working on.

You can see to the left a clip from a project I have been working on in my free time. During the period leading up to the elections I was working on a project to capture Twitter data during the debates and later build Hadoop jobs to crunch the data and reduce it down. The sample to the left, from the town hall debate, is built from the source data (the top 100 sources), which represents the Twitter clients people were using, with the largest text representing the most used clients. It uses a word cloud style visualization, which is hard to draw firm conclusions from, though you can pick out the important information. The counts needed to be scaled from a range of approximately 30:58000, so I scaled them using log10(n/500).
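As a small illustration of that scaling, this sketch applies the log10(n/500) transform to two hypothetical counts near the ends of the 30 to 58,000 range; in the actual project the result would still need an offset or floor to become a usable font size, which I'm glossing over here.

```java
public class CloudScale {
    // The log transform from the post: compress counts spanning roughly 30..58000.
    static double scale(long count) {
        return Math.log10(count / 500.0);
    }

    public static void main(String[] args) {
        // Hypothetical counts for the most-used client and a rarely-used one.
        System.out.println(scale(58000)); // ~2.06
        System.out.println(scale(30));    // ~-1.22 (small counts go negative, hence the need for a floor)
    }
}
```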