In a tech startup industry that loves its shiny new objects, the term “Big Data” is in the unenviable position of sounding increasingly “3 years ago”. While Hadoop was created in 2006, interest in the concept of “Big Data” reached fever pitch sometime between 2011 and 2014. This was the period when, at least in the press and on industry panels, Big Data was the new “black”, “gold” or “oil”. However, at least in my conversations with people in the industry, there’s an increasing sense of having reached some kind of plateau. 2015 was probably the year when the cool kids in the data world (to the extent there is such a thing) moved on to obsessing over AI and its many related concepts and flavors: machine intelligence, deep learning, etc.
Beyond semantics and the inevitable hype cycle, our fourth annual “Big Data Landscape” (scroll down) is a great opportunity to take a step back, reflect on what’s happened over the last year or so and ponder the future of this industry.
In 2016, is Big Data still a “thing”? Let’s dig in.
Enterprise Technology = Hard Work
The funny thing about Big Data is that it wasn’t a very likely candidate for the type of hype it experienced in the first place.
Products and services that receive widespread interest beyond technology circles tend to be those that people can touch and feel, or relate to: mobile apps, social networks, wearables, virtual reality, etc.
But Big Data, fundamentally, is… plumbing. Certainly, Big Data powers many consumer and business user experiences, but at its core it is enterprise technology (databases, analytics, etc.): stuff that runs in the back end that only a few people ever get to see.
And, as anyone who works in that world knows, adoption of new technologies in the enterprise doesn’t exactly happen overnight.
The early years of the Big Data phenomenon were propelled by a very symbiotic relationship among a core set of large Internet companies (in particular Google, Yahoo, Facebook, Twitter and LinkedIn), which were both heavy users and creators of a core set of Big Data technologies. Those companies were suddenly faced with unprecedented volumes of data, had no legacy infrastructure and were able to recruit some of the best engineers around, so they essentially started building the technologies they needed. The open source ethos was rapidly gaining traction, and many of those new technologies were shared with the broader world. Over time, some of those engineers left the large Internet companies and started their own Big Data startups. Other “digital native” companies, including many of the budding unicorns, started facing needs similar to those of the large Internet companies, and had no legacy infrastructure either, so they became early adopters of those Big Data technologies. Early successes led to more entrepreneurial activity and more VC funding, and the whole ecosystem took off.
Fast forward a few years, and we’re now in the thick of the much bigger, but also trickier, opportunity: adoption of Big Data technologies by a broader set of companies, ranging from medium-sized businesses to the very largest multinationals. Unlike the “digital native” companies, those companies do not have the luxury of starting from scratch. They also have a lot more to lose: in the vast majority of those companies, the existing technology infrastructure “does the trick”. It may not have all the bells and whistles, and many within the organization understand that it will need to be modernized sooner rather than later, but they’re not going to rip out and replace their mission-critical systems overnight. Any evolution will require processes, budgets, project management, pilots, departmental deployments, full security audits, etc. Large corporations are understandably cautious about having young startups handle critical parts of their infrastructure. And, to the despair of some entrepreneurs, many (most?) still stubbornly refuse to move their data to the cloud, at least the public one.