Open Source is the future of the software industry for one overwhelming reason that few people really understand: its vastly more efficient marketing strategy, MIT Prof. Mike Stonebraker told Wikibon.org co-founder and CEO David Vellante in an interview on SiliconAngle.TV from the Mass Technology Leadership Council event Feb. 17. "Standard enterprise software is sold by a four-legged sales team – a sales guy who's only smart enough to take the customer to lunch, paired with a sales consultant who answers the technical questions." Add to that a sales cycle that typically runs a year, and the combination becomes a very expensive way to sell software.
Open source vendors, on the other hand, have no sales teams; they have websites. Prospects, the IT people who need their software, come to them, read about the software on the website, and if they are interested simply download and install it. It is self-service marketing and sales. "That is wildly cheaper. And any company that installs system software is going to buy the support, so you end up selling them the enterprise version anyway."
This is what most people don't understand about Open Source software, he says. The efficiency of that self-service sales strategy cuts a huge amount of cost out of the equation, because salespeople are expensive, and it makes it possible for the vendor to give the software away. This is the strategy that has made Red Hat and Cloudera, among others, so successful, and it will work for many other companies.
What Prof. Stonebraker has doubts about is whether data storage on the public cloud will catch on. He cites an experience at a conference where the presenter asked a 250-person audience whether they would trust their company's vital data to the cloud. Only a few said they would. The rest gave several reasons why not, including regulatory requirements, general security concerns, and in some cases requirements that their data not leave the country.
On the other hand, he says, "Everybody I know of is building private clouds." So the vital corporate data is going into the private cloud, behind the corporate firewall, where the company has more control. That, he says, will be the next-generation IT architecture.
And one reason for that is the need to support big data. "This is a real sea change," he said. The issue isn't so much the amount of data as the need to manage it, interconnect it, and support complex queries against huge data sets. "If all you want is to store ... 10 gazillion pictures and get them back by ID, it isn't difficult," he says. "But suppose you are the U.S. military, and you want to put a video cam on every light pole in Iraq. Then you want to spot specific cars … as they drive through successive intersections, so that you can know where they might have stopped to plant a roadside bomb." That is where the complexity comes in.
And many companies, across all kinds of verticals, are facing this data tsunami from two sources: Web 2.0 and scientific and research applications. For instance, as the cost of sequencing an individual genome falls, it will become practical and desirable to sequence the genomes of virtually every living human on the planet. One thing the big pharmaceutical companies will want to do with that huge database is identify populations of people with a specific disease and then search their genomes for any special sequences that might have made them more susceptible to it. That will have huge public health benefits, but it will require managing truly large amounts of data.
“So I think this is a sea change in the kind of apps that people are focused on,” he says. “And it's not coming from traditional enterprise data warehouses. This is a green field that presents a great opportunity.”