#garagekidztweetz


The lifelog blog of id:garage-kid@76whizkidz!

Hadoop Summit 2012 Recap in my view point. #hadoopsummit



Following my Notes of Day 1 and Notes of Day 2, this is my last post about Hadoop Summit 2012: a recap from my viewpoint.

➤ Recap in my viewpoint.

"Hadoop for Enterprise".
This phrase was the first thing that came to my mind when I recapped Hadoop Summit 2012.
There is no doubt that the Data*1 business has already crossed the Chasm*2, and Hadoop is working as its core function.

According to the slide by Mr. Geoffrey Moore*3 in the Day 2 keynote, I think we are in the TORNADO phase (if the image above is hard to see, you can enlarge it by clicking it).
And as the Yahoo! slide in the Day 2 keynote also suggested, if we want to grow and survive in the Data business, Real-time, Resource Management, and HW are our next frontiers. I agree with this idea.

This time I mostly attended sessions in the "Future of Apache Hadoop" and "Deployment and Operations" categories, so my notes may not cover other topics well, but if you want to see them, I have already posted my Day 1 notes here and my Day 2 notes here.

➤ Other impressions of mine.

➤➤ 1: Growing scale.

➤➤ 2: Many are Pig users
  • I don't know whether it was because Yahoo! hosted the event, but this time I heard about Pig users in many sessions.
➤➤ 3: Less energetic than Hadoop World.
  • Each session was 40 minutes, so the presentations were not very detailed, and rather abstract*5. That made it hard for me to ask questions.
  • I don't know why, but judging from the Twitter hashtag #hadoopsummit, not many people were tweeting. The Facebook page was not used much either.
  • The slides are still not up on the site, and I don't know when they will be. They said over the coming days, though.
  • I also attended Hadoop World last November, and compared to it, this conference felt less energetic.
➤➤ One more: the Facebook group for this conference.
  • I want to say thank you to everyone who joined this group.
  • If I may say so, my only regret is that no non-Japanese attendees joined the group.
    #And that is partly my own fault: I was too introverted and hesitated to ask non-Japanese people to join.
  • After the conference I found this LinkedIn event, which had many attendees. I didn't have a LinkedIn account at the time, so next time I want to try a similar experiment on LinkedIn.

➤ What I think I need to learn and investigate more after this conference.

Hortonworks Data Platform (HDP)
Hortonworks Data Platform (HDP) is a 100% open source data management platform based on Apache Hadoop. It allows you to load, store, process and manage data in virtually any format and at any scale. As the foundation for the next generation enterprise data architecture, HDP includes all of the necessary components to begin uncovering business insights from the quickly growing streams of data flowing into and throughout your business. Hortonworks Data Platform is ideal for organizations that want to combine the power and cost-effectiveness of Apache Hadoop with the advanced services required for enterprise deployments. It is also ideal for solution providers that wish to integrate or extend their solutions with an open and extensible Apache Hadoop-based platform. Hortonworks Data Platform will be available for download beginning on Friday, June 15th. ...

Apache Ambari
Apache Ambari is a web-based tool for installing, managing, and monitoring Apache Hadoop clusters. The set of Hadoop components that are currently supported by Ambari includes: Apache HBase, Apache HCatalog, Apache Hadoop HDFS, Apache Hive, Apache Hadoop MapReduce, Apache Oozie, Apache Pig, Apache Sqoop, Apache Templeton, and Apache Zookeeper. ...

Apache HCatalog
Apache HCatalog is a table and storage management service for data created using Apache Hadoop. This includes: Providing a shared schema and data type mechanism. Providing a table abstraction so that users need not be concerned with where or how their data is stored. Providing interoperability across data processing tools such as Pig, Map Reduce, and Hive. ...

YARN
MapReduce NextGen aka YARN aka MRv2 The new architecture introduced in hadoop-0.23, divides the two major functions of the JobTracker: resource management and job life-cycle management into separate components. The new ResourceManager manages the global assignment of compute resources to applications and the per-application ApplicationMaster manages the application's scheduling and coordination. An application is either a single job in the sense of classic MapReduce jobs or a DAG of such jobs. The ResourceManager and per-machine NodeManager daemon, which manages the user processes on that machine, form the computation fabric. The per-application ApplicationMaster is, in effect, a framework specific library and is tasked with negotiating resources from the ResourceManager and working with the NodeManager(s) to execute and monitor the tasks. ...
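To fix the YARN split in my head, here is a toy Python sketch of the roles the excerpt describes: a global ResourceManager owning container accounting, and per-application ApplicationMasters negotiating containers for their own tasks. The class and method names are purely illustrative, not the real YARN API.

```python
# Toy model of the YARN architecture described above (illustrative
# names only; this is NOT the real YARN API).
class ResourceManager:
    """Owns the global pool of compute containers."""
    def __init__(self, total_containers):
        self.free = total_containers

    def allocate(self, n):
        """Grant up to n containers from the global pool."""
        granted = min(n, self.free)
        self.free -= granted
        return granted

class ApplicationMaster:
    """Per-application: negotiates containers and tracks its own tasks."""
    def __init__(self, rm, tasks):
        self.rm = rm
        self.pending = tasks

    def run_round(self):
        """Negotiate containers, 'run' that many tasks, release containers."""
        granted = self.rm.allocate(self.pending)
        self.pending -= granted
        self.rm.free += granted  # containers returned after tasks finish
        return granted

rm = ResourceManager(total_containers=4)
app_a = ApplicationMaster(rm, tasks=6)
app_b = ApplicationMaster(rm, tasks=3)
print(app_a.run_round(), app_b.run_round(), app_a.run_round())  # 4 3 2
```

The point of the split is visible even in this toy: the ResourceManager never knows what a "task" is, and each ApplicationMaster never touches the global pool directly.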

backup-hadoop-and-hive/README.txt at master · TAwarehouse/backup-hadoop-and-hive · GitHub
This project provides a method for backing up a hadoop cluster. There are two components that need to be backed up: the contents of the hadoop filesystem (hdfs), and the hive DDL. The latter is what provides a way to string together the files kept in hdfs and access them via HQL. The BackupHdfs class deals with the first task. It traverses the entire hdfs filesystem, ordering all found files by timestamp. It then copies them (just like "hadoop fs -copyToLocal") to the local filesystem. Each file is checksum-verified after the copy to ensure integrity. BackupHdfs has options for ignoring files earlier than a given timestamp, which is needed for incremental backups. ...
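The steps in that README (traverse, order by timestamp, copy, checksum-verify, skip files older than a cutoff for incremental runs) can be sketched in plain Python. This is my own hedged sketch against the local filesystem, not the project's BackupHdfs code, and `backup_tree`/`md5sum` are names I made up:

```python
# Hedged sketch of the backup approach described above, on the local
# filesystem instead of HDFS. Helper names are hypothetical.
import hashlib
import os
import shutil
import tempfile

def md5sum(path):
    """Checksum a file so each copy can be verified afterwards."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_tree(src, dst, since=0.0):
    """Traverse src, order files by mtime, copy files newer than
    `since` (incremental backup), and checksum-verify each copy."""
    found = []
    for root, _dirs, files in os.walk(src):
        for name in files:
            p = os.path.join(root, name)
            found.append((os.path.getmtime(p), p))
    found.sort()  # oldest first
    copied = []
    for mtime, p in found:
        if mtime <= since:
            continue  # already backed up in a previous run
        rel = os.path.relpath(p, src)
        target = os.path.join(dst, rel)
        os.makedirs(os.path.dirname(target) or dst, exist_ok=True)
        shutil.copy2(p, target)
        assert md5sum(p) == md5sum(target), "copy verification failed"
        copied.append(rel)
    return copied

# Tiny demo on throwaway directories.
src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
with open(os.path.join(src, "a.txt"), "w") as f:
    f.write("hello")
copied = backup_tree(src, dst)
print(copied)
```

The real project does the same dance with `hadoop fs -copyToLocal` semantics, plus a separate pass for the Hive DDL.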


Apache OODT
It's metadata for middleware (and vice versa): Transparent access to distributed resources Data discovery and query optimization Distributed processing and virtual archives But it's not just for science! It's also a software architecture: Models for information representation Solutions to knowledge capture problems Unification of technology, data, and metadata ...

Folding into Hive · hbutani/SQLWindowing Wiki · GitHub
Currently the Windowing Engine runs on top of Hive. An obvious question is if this functionality can be folded into Hive. There are undeniable reasons for doing this, briefly: end users would want this functionality inside Hive for reasons of consistent behavior, support etc. use of a consistent expression language. For e.g. reuse of Hive functions in Windowing clauses. Implementation wise: Windowing is orders of magnitude simpler than Hive, and can benefit from using equivalent components that are in Hive. More on this in the next section. Avoid the trap of constantly chasing changes in the Hive code base. Folding in Table function mechanics may open up optimizations not possible with the approach today. Here, we initially summarize how the Windowing Engine works and map it to concepts/components in Hive. Then we list a multi-stage process of moving Windowing & Table functionality into Hive. I am no expert on Hive, so have erred on the side of being non intrusive. There is good chance that there are better approaches to some of the later steps; open to comments/suggestions from the community. ...
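To make concrete what a windowing clause computes, here is a hedged Python sketch of the semantics of `RANK() OVER (PARTITION BY dept ORDER BY salary DESC)`. This is not the SQLWindowing engine's code, just my illustration of what such an engine has to produce; the data and function names are made up.

```python
# Hedged sketch of SQL windowing semantics:
#   RANK() OVER (PARTITION BY dept ORDER BY salary DESC)
# Plain Python, not the SQLWindowing engine itself.
from itertools import groupby
from operator import itemgetter

rows = [
    {"dept": "eng",   "name": "ann", "salary": 120},
    {"dept": "eng",   "name": "bob", "salary": 100},
    {"dept": "sales", "name": "cat", "salary": 90},
    {"dept": "sales", "name": "dan", "salary": 90},
    {"dept": "sales", "name": "eve", "salary": 80},
]

def rank_over(rows, partition_key, order_key):
    """Assign SQL-style RANK() within each partition: ties share a
    rank, and the next distinct value skips ahead."""
    out = []
    rows = sorted(rows, key=itemgetter(partition_key))
    for _, part in groupby(rows, key=itemgetter(partition_key)):
        part = sorted(part, key=itemgetter(order_key), reverse=True)
        prev, rank = None, 0
        for i, row in enumerate(part, start=1):
            if row[order_key] != prev:
                rank, prev = i, row[order_key]
            out.append(dict(row, rank=rank))
    return out

ranked = rank_over(rows, "dept", "salary")
print([(r["name"], r["rank"]) for r in ranked])
```

Notice the two ties in "sales" both get rank 1 and the next row gets rank 3; reusing Hive's expression language for the `ORDER BY` part is exactly the consistency argument the wiki makes for folding this into Hive.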

Vertica
The Vertica Analytics Platform delivers: Real-time insight into your data allowing you to consume, analyze, and make informed decisions at the speed of business. Fastest time to value making it possible to monetize your data in a matter of minutes, not days. Maximized performance meaning you get the most out of your analytic infrastructure investment. ...

➤ Appendix: Links to each of my notes from Hadoop Summit 2012.

Related posts.

*1:I don't want to use the word BIGDATA

*2:[http://en.wikipedia.org/wiki/File:Technology-Adoption-Lifecycle.png:title=Technology Adoption Lifecycle Model]

*3:author of “Crossing the Chasm” and “Escape Velocity”

*4:this speaker has already moved to LinkedIn, though

*5:and most of the speakers paged through their slides and spoke too fast