Posts Tagged Big Data
Our friends at Forbes.com have put together a fantastic new infographic leveraging data from Wikibon’s Big Data Vendor Revenue and Market Forecast, 2012–2017 report. It provides a compelling view of the Big Data universe and illustrates the real revenue vendors are deriving from Big Data, ranging from the mega-planets (if you’ll go with me on this analogy) IBM, HP and EMC to smaller but powerful emerging planets like Hortonworks, 10gen and DataStax.
Ok, not the greatest analogy but still a great infographic:
Then-CEO Sam Palmisano launched IBM’s Smarter Planet initiative five years ago during a speech at the Council on Foreign Relations. IBM would focus its energies, Palmisano said, on helping governments and companies understand and analyze the voluminous data streaming off connected devices and industrial equipment to improve operational efficiencies and deliver better services to citizens and customers.
Since then, IBM has largely had the Industrial Internet, as the concept has come to be called, to itself. The company’s Smarter Planet division has played a key role in making IBM the biggest Big Data company on the planet and was a lone bright spot in IBM’s otherwise disappointing Q1 2013 results.
Mobile devices play a dual role in the context of Big Data: traditional mobile phones, smartphones and tablets are both sources of Big Data and delivery mechanisms for it.
Mobile as Source of Big Data
When talking about Big Data, the conversation tends to focus on Data Science and analytics. That is, the stories about Big Data that hit the front pages of the mainstream press and the hallway conversations taking place at events like Strata are mostly about all the cool new ways to use data to greater effect.
But Big Data Analytics doesn’t take place in a vacuum. It takes place in the enterprise. And any time you mix data and the enterprise, you can’t afford to ignore data management best practices. It may not be as sexy as predictive analytics, but failure to apply fundamental data management best practices to Big Data projects can lead not just to failed projects, but to potential legal consequences as well.
Tomorrow marks the kickoff of Strata Conference 2013. This year, SiliconANGLE Wikibon is expanding its coverage from two days to three full days of live broadcast from the show floor. Tune into theCUBE at SiliconANGLE.tv all week to catch it all, and log on to strataconf.com/live between 8:45 am and 10:00 am PST Wednesday and Thursday to watch the live keynotes.
We start things off Tuesday morning when we welcome Edd Dumbill, Co-Chair of the Strata Conference, to theCUBE. Edd and hosts Dave Vellante and John Furrier will preview the upcoming action and lay out the themes we’ll be covering.
A Massachusetts company called Prelert released a new application yesterday that combines machine learning and predictive analytics to detect and report anomalous behavior emanating from IT infrastructure. If that sounds a lot like what Splunk does, you’re right.
As data is continuously collected and created, companies have difficulty just storing it, missing any opportunity to leverage the information. The wave of big data has the potential to flip the burden of data management into the opportunity of new value creation. Yesterday’s solutions don’t accomplish this today and will be even less effective tomorrow.
While the volume of data has grown exponentially over the last few decades, the fundamental and underlying technology on which we store data hasn’t. Sure, we’ve had improvements in densities (to store more data) and connectivity (to provide better access to data), but the pace of data growth has overwhelmed the benefits of these technological advancements.
We all know there’s lots of excitement and buzz surrounding Hadoop, but talk to some CIOs in “non-web” industries about moving mission critical apps to the open source Big Data framework and you’re bound to hear a little fear in their voices.
They’re worried that Hadoop is not ready for primetime because it has a single point of failure. That is, if the NameNode in a cluster goes down, the entire cluster goes down. Spinning clusters back up into working order following a NameNode failure takes time and, by definition, mission critical applications can’t go down … ever. Until the SPOF is solved, more than a handful of Fortune 500 companies will continue paying Oracle through the nose rather than risk a disruption to critical apps.
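For readers unfamiliar with how the fix works, the high-availability setup introduced in Hadoop 2.x removes this SPOF by pairing the active NameNode with a hot standby that shares its edit log through a JournalNode quorum, with ZooKeeper driving automatic failover. A minimal hdfs-site.xml sketch follows; the cluster name `mycluster` and all host names are hypothetical placeholders, not values from any deployment discussed above:

```xml
<!-- hdfs-site.xml sketch: HDFS NameNode HA (Hadoop 2.x); names are illustrative -->
<configuration>
  <!-- Logical name covering the active/standby NameNode pair -->
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <!-- The two NameNodes participating in the nameservice -->
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>namenode1.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>namenode2.example.com:8020</value>
  </property>
  <!-- Shared edit log: a quorum of JournalNodes the standby replays from -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://jn1.example.com:8485;jn2.example.com:8485;jn3.example.com:8485/mycluster</value>
  </property>
  <!-- Let a ZooKeeper failover controller promote the standby automatically -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
</configuration>
```

The key design point is that the standby continuously replays the active NameNode’s edits from the JournalNode quorum, so on failover it already holds current filesystem metadata and clients reconnect without the lengthy cluster spin-up described above.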
EMC has been touting its “Cloud Meets Big Data” messaging for nearly two years now, and today it took a major step in transforming that message into reality.
EMC announced that it is forming a new “virtual organization” focused on Big Data and application development in the cloud. The new organization, called the Pivotal Initiative, will include 800 employees from EMC’s Greenplum and Pivotal Labs divisions and 600 employees from VMware’s vFabric, Cloud Foundry, GemFire, SpringSource and Cetas organizations. EMC owns over 80% of VMware, which former EMC COO Pat Gelsinger joined as CEO earlier this fall.
Former Republican congressman-turned-TV pundit Joe Scarborough doesn’t buy Nate Silver’s numbers. For Scarborough, they just don’t add up.
Speaking on Morning Joe on Oct. 29, when Silver’s FiveThirtyEight blog put Obama’s chances at reelection somewhere around 75%, Scarborough declared: “Both sides understand [the presidential election] is close, it could go either way, and anybody that thinks this race is anything but a tossup right now is such an ideologue they should be kept away from typewriters, computers, laptops and microphones for the next ten days because they’re jokes.”