Wednesday, November 6, 2013

Facebook goes open source with Presto query engine for big data


Potentially raising the bar on SQL scalability, Facebook has released as open source Presto, a SQL query engine it developed to work with petabyte-sized data warehouses.


Currently, over 1,000 Facebook employees use Presto daily to run more than 30,000 interactive queries, which together process over a petabyte of data, according to a post authored by Facebook software engineer Martin Traverso. The company has scaled the software to run on a 1,000-node cluster.


Now, Facebook wants other data-driven organizations to use Presto and, it hopes, refine it. The company has posted the software's source code and is encouraging contributions from other parties. The software is already being tested by several other large Internet services, including Airbnb and Dropbox.


Standard data warehouses would be hard-pressed to offer Presto's responsiveness given the amount of data Facebook collects, according to engineers at the company. Facebook's data warehouse holds more than 300 petabytes of data from its users, stored on Hadoop clusters. That data is used for interactive analysis as well as for machine-learning algorithms and standard batch processing; Presto was built for the interactive work.


To analyze this data, Facebook originally used Hadoop MapReduce along with Hive. But as the data warehouse grew, this approach proved to be far too slow.


The Facebook Data Infrastructure group first looked for other software for running faster queries, but didn't find anything that was both mature enough and capable of scaling to the required levels. Instead, the group built its own distributed SQL query engine, using Java.


Presto can do many of the tasks that standard SQL engines can, including complex queries, aggregations, left and right outer joins, subqueries, and most of the common aggregate and scalar functions. It cannot yet write query results back to data tables, however, and the size of the table joins it can handle is limited.
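
To give a sense of what that looks like in practice, here is a minimal sketch of an interactive Presto query issued from Java over JDBC. It assumes the presto-jdbc driver is on the classpath; the coordinator address, catalog, schema, and the page_views and users tables are made up for illustration. The query itself combines an aggregation, a left outer join, and a subquery, the kinds of constructs described above.

// Minimal sketch: running an interactive Presto query over JDBC.
// Assumes the presto-jdbc driver is available and that a "hive" catalog
// exposes hypothetical page_views and users tables; adjust the host,
// catalog, schema, and table names to your own deployment.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PrestoQueryExample {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:presto://presto-coordinator.example.com:8080/hive/web";

        try (Connection conn = DriverManager.getConnection(url, "analyst", null);
             Statement stmt = conn.createStatement();
             // An aggregation, a left outer join, and a subquery -- the kinds
             // of SQL constructs the article says Presto supports.
             ResultSet rs = stmt.executeQuery(
                 "SELECT u.country, COUNT(*) AS views " +
                 "FROM page_views v " +
                 "LEFT OUTER JOIN users u ON v.user_id = u.user_id " +
                 "WHERE v.view_date IN (SELECT MAX(view_date) FROM page_views) " +
                 "GROUP BY u.country " +
                 "ORDER BY views DESC")) {
            while (rs.next()) {
                System.out.println(rs.getString("country") + "\t" + rs.getLong("views"));
            }
        }
    }
}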


Unlike Hive, Presto does not rely on MapReduce, which writes intermediate results to disk between stages. Instead, Presto compiles parts of the query on the fly and does all of its processing in memory. As a result, Facebook claims, Presto is 10 times better than the Hive and MapReduce combination in terms of CPU efficiency and latency.
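
The toy Java sketch below is a deliberately simplified model of that difference, not Presto's actual code: one version materializes an intermediate result on disk between stages, MapReduce-style, while the other pipelines the same work entirely in memory.

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ExecutionModels {
    // MapReduce-style: the "map" stage materializes its output on disk,
    // and the "reduce" stage re-reads that file before it can start.
    static long diskStagedSum(List<Integer> rows) throws IOException {
        Path intermediate = Files.createTempFile("map-output", ".tmp");
        try (BufferedWriter w = Files.newBufferedWriter(intermediate)) {
            for (int r : rows) {
                w.write(Integer.toString(r * 2)); // "map" step
                w.newLine();
            }
        }
        long sum = 0;
        try (BufferedReader r = Files.newBufferedReader(intermediate)) {
            String line;
            while ((line = r.readLine()) != null) {
                sum += Long.parseLong(line);      // "reduce" step
            }
        }
        Files.delete(intermediate);
        return sum;
    }

    // Pipelined, in-memory: the transform feeds the aggregation directly,
    // with no intermediate materialization between operators.
    static long pipelinedSum(List<Integer> rows) {
        return rows.stream().mapToLong(r -> r * 2L).sum();
    }

    public static void main(String[] args) throws IOException {
        List<Integer> rows = IntStream.range(0, 1_000).boxed().collect(Collectors.toList());
        System.out.println(diskStagedSum(rows)); // same answer, extra disk round trip
        System.out.println(pipelinedSum(rows));
    }
}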


Presto is one of a number of newly emerging SQL query engines that tackle the problem of offering speedy results for queries run against large Hadoop data sets. Hadoop distributor Pivotal has developed Hawq for this purpose, and fellow Hadoop distributor Cloudera is working on its own software called Impala.


Another benefit Facebook built into Presto is the ability to work with data sources other than Hadoop. Facebook runs a custom data store for its news feed, for instance, which Presto can also tap into. Facebook has also built connectors for HBase and Scribe. The software is extensible to other sources as well, according to Traverso.
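
Traverso's post isn't quoted here on what a connector has to implement, so the following Java outline is purely hypothetical; the interface and field names are invented for illustration and are not Presto's real connector API. It sketches the three questions any such connector must answer about an external store like HBase or Scribe: what tables exist, how a table breaks into parallel chunks of work, and how to read rows from one chunk.

import java.util.Iterator;
import java.util.List;

// Hypothetical illustration only: these interfaces are NOT Presto's real
// connector SPI. They sketch what a Presto-style connector must supply
// for an external store such as HBase or Scribe.
interface SimpleConnector {
    List<String> listTables(String schema);              // metadata: which tables exist
    List<Split> getSplits(String schema, String table);  // parallelism: chunks of work
    Iterator<Object[]> readSplit(Split split);            // data: rows from one chunk
}

// A split identifies one independently readable chunk of a table,
// e.g. an HBase region or a Scribe log segment (illustrative fields).
final class Split {
    final String table;
    final String location; // where the chunk lives, useful for scheduling

    Split(String table, String location) {
        this.table = table;
        this.location = location;
    }
}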


Joab Jackson covers enterprise software and general technology breaking news for The IDG News Service. Follow Joab on Twitter at @Joab_Jackson. Joab's e-mail address is Joab_Jackson@idg.com


Source: http://www.infoworld.com/d/business-intelligence/facebook-goes-open-source-presto-query-engine-big-data-230348?source=rss_business_intelligence