Home

From Wikibon

Revision as of 17:56, 4 December 2008 by Dab4168 (Talk | contribs)


Peer Incite: Grant, a Sr. Storage Admin at a large bank, discusses how heterogeneous storage virtualization can help reduce the 2009 budget.

Media:11-18-08_Peer_Incite_mashup.mp3

Wikitip

Global Hadoop Market - A Brief Market Research Overview by TMR

Hadoop gets a lot of buzz these days in database and content management circles, but many people in the industry still don't really know what it is or how it can best be applied. Here, we discuss its background and applications:

What is it?

Hadoop is an open-source technology designed to solve problems related to storing and managing very large quantities of data - often a mixture of structured and complex data that does not fit neatly into conventional storage structures such as relational tables. Such enormous quantities of data are commonly referred to as Big Data.
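The processing model most associated with Hadoop, MapReduce, can be sketched in plain Python. This is a simplified single-machine illustration of the map/shuffle/reduce phases, not Hadoop's actual Java API; the word-count task and all function names here are illustrative:

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle: group all emitted values by key, as Hadoop does
    between the map and reduce phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data needs big tools", "hadoop handles big data"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["big"])   # -> 3
```

In a real Hadoop cluster, the map and reduce phases run in parallel across many machines against data stored in HDFS; the framework handles the shuffle, scheduling, and fault tolerance.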

The underlying technology for Hadoop was developed at Google, at a time when the company was searching for a way to index all the textual and structural information it was gathering and then present it to users quickly and meaningfully.

No technology on the market at the time offered these capabilities, so Google developed its own platform. Google's innovations inspired an open-source platform named Nutch, and Hadoop was later spun out of that project.

Since its inception, many technology leaders have played significant roles in the development of Hadoop, lending it its current status of an enterprise platform.

Applications of Hadoop:

Hadoop was designed to solve problems arising from the management of Big Data, and it suits situations where the computations are extensive and the analytics extremely deep.

As such, it applies to a wide variety of markets - finance, online retail, research, and social networking, to name a few. In fact, any industry dealing with volumes of data likely to grow exponentially is a candidate for Hadoop.

Hadoop can not only manage unstructured data and add new capabilities to data management systems, but do so at higher speed and lower cost. This has made it one of the most popular big data management platforms available.

Today, Hadoop is in high demand in fields that deal with massive amounts of data, such as medical and space research, social networking, and finance and banking. Its global market is driven by the exponentially rising volumes of data in these fields and by Hadoop's ability to manage data faster and at lower cost. Its ease in handling even complex, unstructured data has earned it a reputation as the leading platform for big data - data volumes too large for typical data management systems.

However, a shortage of qualified, experienced professionals who can effectively manage Hadoop architecture poses a significant challenge to the growth of this market. Adoption is further slowed by a lack of awareness of the benefits the platform can provide to large and mid-sized data-driven organizations.

Numbers:

Though not yet mainstream, Hadoop already has a sizeable market thanks to its use at some world-renowned social networking sites and at many financial, banking, and government agencies. The industry was estimated to be worth nearly USD 1.5 billion in 2012. US-based market research firm Transparency Market Research estimates the figure could rise to USD 20.9 billion by 2018, a year-on-year CAGR of 54.7% over the 2012-2018 forecast period.

Currently, North America is estimated to lead the market, amassing the largest share of revenue. This can be attributed to the enormous amount of data generated in the region and an ever-increasing need for better data storage and processing capabilities.


Featured Case Study

Virtualization Energizes Cal State University

John Charles is the CIO of California State University, East Bay (CSUEB) and Rich Avila is Director, Server & Network Operations. In late 2007 they were both looking down the barrel of a gun. The total amount of power being used in the data center was 67 kVA. The maximum power from the current plant was 75 kVA. PG&E had informed them that no more power could be delivered. They would be out of power in less than six months. A new data center was planned, but would not be available for two years.

read more...

Storage Professional Alerts


Featured How-To Note

Storage Virtualization Design and Deployment

A main impediment to storage virtualization is the lack of multiple storage vendor (heterogeneous) support within available virtualization technologies. This inhibits deployment across a data center. The only practical approach is either to implement a single vendor solution across the whole of the data center (practical only for small and some medium size data centers) or to implement virtualization in one or more of the largest storage pools within a data center.

read more...
