What is the number one mistake technical founders make? Why is pricing so important? Should entrepreneurs avoid having a service component in their business at all costs? What is fundamentally new and different in go-to-market strategies for modern enterprise software startups?
A self-avowed “failed physicist”, Martin Casado is a General Partner at Andreessen Horowitz, and previously was the co-founder and CTO of Nicira, a pioneer in software-defined networking and network virtualization that was acquired by VMware for $1.26 billion.
I have had the pleasure of getting to know Martin through the board of ActionIQ, a great NYC startup in which we are both investors.
Martin joined us for a fireside chat at the most recent edition of Data Driven NYC. The conversation centered largely around one of Martin’s favorite topics, go-to-market strategies for enterprise startups. There’s plenty of interesting thinking and directly applicable advice for entrepreneurs in there, as Martin spoke as much from his previous founder experience as he did as a VC.
Here’s the video, and my notes from the chat are below the fold.
Who would be crazy enough to compete head-on with AWS?
The question was almost as obvious seven years ago as it is today. Yet in just a few years since its founding, Digital Ocean, a cloud infrastructure startup based in New York with data centers around the world, has managed to build a very impressive and fast-growing business, successfully competing with the giants of cloud computing.
Ben Uretsky, co-founder of the company (with his brother Moisey and 3 others) and its CEO from 2011 to 2018, stopped by for a chat at Data Driven NYC to tell the story of the company and share some lessons learned.
Here’s the video, and below are my notes from our great chat.
The hedge fund world has been evolving dramatically over the last few years.
Just like in other industries, software, data and AI/ML have been playing an increasingly important, and disruptive, role. Many hedge funds have been scrambling to embrace this evolution – not just to gain an edge, but also to avoid becoming extinct.
Certainly, quantitative hedge funds have been making heavy use of software and data for a while now. The “quant” funds rely upon algorithmic or systematic strategies for their trades – meaning that they generally employ automated trading rules rather than discretionary (human) ones, and they will trade tens or hundreds of assets simultaneously.
But another big part of the industry, the “fundamental” hedge funds, had been operating very differently. Those funds will perform a bottom-up analysis on individual securities to value them in the marketplace and assess whether they are “undervalued” or “overvalued” assets. They’ll often have a much more concentrated portfolio.
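To make the quant/fundamental contrast concrete, here is a minimal sketch of the kind of automated trading rule a systematic fund might apply across many assets at once: a moving-average crossover. The window sizes and price series are hypothetical illustrations, not a real strategy.

```python
def moving_average(prices, window):
    """Simple trailing moving average over the last `window` prices."""
    return sum(prices[-window:]) / window

def crossover_signal(prices, short_window=3, long_window=5):
    """Emit 'buy' when the short-term average rises above the long-term
    average, 'sell' when it falls below, and 'hold' otherwise."""
    if len(prices) < long_window:
        return "hold"
    short_ma = moving_average(prices, short_window)
    long_ma = moving_average(prices, long_window)
    if short_ma > long_ma:
        return "buy"
    if short_ma < long_ma:
        return "sell"
    return "hold"

# An uptrending price series: the short average leads the long one upward.
uptrend = [100, 101, 102, 104, 107, 111]
print(crossover_signal(uptrend))  # "buy"
```

The point is not the rule itself (crossovers are a textbook example) but the shape of the approach: a fixed, automated decision procedure evaluated mechanically across tens or hundreds of instruments, with no analyst judgment in the loop — the opposite of the fundamental funds’ concentrated, security-by-security analysis.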
In part because the entire hedge fund industry has been performing generally poorly recently (years of performance trailing the stock market), there’s been mounting pressure on hedge funds to evolve rapidly, particularly fundamental ones.
Not so long ago, AI startups were the new shiny object that everyone was getting excited about. It was a time of seemingly infinite promise: AI was going to not just redefine everything in business, but also offer entrepreneurs opportunities to build category-defining companies.
A few years (and billions of dollars of venture capital) later, AI startups have re-entered reality. Time has come to make good on the original promise, and prove that AI-first startups can become formidable companies, with long term differentiation and defensibility.
In other words, it is time to go from “starting” mode to “scaling” mode.
To be clear: I am as bullish on the AI space as ever. I believe AI is a different and powerful enough technology that entire new industry leaders can be built by leveraging it, as long as it is applied to the right business problems.
At the same time, I have learned plenty of lessons in the last three or four years by being on the board of AI startups, and talking to many AI entrepreneurs in the context of Data Driven NYC. I’ll be sharing some notes here.
This post is a sequel to a presentation I made almost three years ago at the O’Reilly Artificial Intelligence conference, entitled “Building an AI Startup: Realities & Tactics“, which covered a lot of core ideas about starting an AI company: building a team, acquiring data, finding the right market positioning. A lot of those concepts still hold, and this post will focus more on specific lessons around scaling.
At the kind invitation of Rob May and the Botchain team, I had the opportunity recently to keynote Brains and Chains, an interesting conference in New York exploring the intersection of artificial intelligence and blockchain.
This is both an exciting and challenging topic, and the goal of my talk was to provide a broad introduction to kick things off, and frame the discussion for the rest of the day: discuss why the topic matters in the first place, and highlight the work of some interesting companies in the space.
Below is the presentation, with some added commentary when relevant. Scroll to the very bottom for a SlideShare widget, if you’d like to flip through the slides.
It’s been an exciting, but complex year in the data world.
Just like last year, the data tech ecosystem has continued to “fire on all cylinders”. If nothing else, data is probably even more front and center in 2018, in both business and personal conversations. Some of the reasons, however, have changed.
On the one hand, data technologies (Big Data, data science, machine learning, AI) continue their march forward, becoming ever more efficient, and also more widely adopted in businesses around the world. It is no accident that one of the key themes in the corporate world in 2018 so far has been “digital transformation”. The term may feel quaint to some (“isn’t that what’s been happening for the last 25 years?”), but it reflects that many of the more traditional industries and companies are now fully engaged in their journey to become truly data-driven.
On the other hand, a much broader cross-section of the public has become aware of the pitfalls of data. Whether it is through the very public debate over the risks of AI, the Cambridge Analytica scandal, the massive Equifax data breach, GDPR-related privacy discussions or reports of growing government surveillance in China, the data world has started revealing some darker, scarier undertones.
As I wrote recently, the Internet of Things (IoT) has been experiencing, at a minimum, some serious growing pains. This is particularly true for consumer IoT where a lot of old issues (interoperability) remain, while others (security) are becoming more concerning. With a few bright exceptions, many consumer IoT products solve first-world problems, often representing a marginal improvement over existing solutions.
But the IoT was always meant to be more ambitious and exciting than just the smart home, the factory or other discrete “single-player mode” use cases. The internet of things was always about networks, where connected objects could be tracked and activated across wide geographic areas, supply chains, health systems and other contexts representing trillions of dollars of economic value.
Rather than IoT, perhaps we should start using the expression “intelligent infrastructure” more frequently to describe those networks. With the parallel progress of machine learning at the edge, intelligent infrastructure will enable software-based intelligence to permeate the physical world, enabling real-time optimization and orchestration of connected “things” (objects, vehicles, machines, buildings), at a system level. Uber, Lyft and others give us perhaps the closest approximation of what such networks could look like at scale, except that, in an intelligent infrastructure paradigm, such communications would be machine-to-machine, with no human in the loop.
Some call it “strong” AI, others “real” AI, “true” AI or artificial “general” intelligence (AGI)… whatever the term (and important nuances), there are few questions of greater importance than whether we are collectively in the process of developing generalized AI that can truly think like a human — possibly even at a superhuman intelligence level, with unpredictable, uncontrollable consequences.
I spend a lot of time thinking about hype cycles, across industries (Big Data/AI, IoT) and ecosystems (New York).
Whether you use the Carlota Perez surge cycle (see this great Fred Wilson post) or the Gartner version, hype cycles convey the fundamental idea that technology markets don’t develop linearly, but instead go through phases of boom and bust before they reach wide adoption.
Hype cycles are a great framework for investors (and founders), because entering the market at the right time is both crucial and very hard.
2017 was an extraordinary and crazy year in the world of cryptocurrencies. Prices skyrocketed (Bitcoin: +1,400%; Litecoin: +5,400%; Ethereum: +8,700%; Ripple: +35,000%). ICOs raised over $3 billion. Crypto hedge funds emerged all over the map and a handful of blockchain startups reached unicorn-level valuations.
Almost inevitably, the price of individual cryptocurrencies will experience substantial volatility in 2018, and the first few days of January already look like a rollercoaster. Prices may very well crash altogether. In more ways than one, the space feels reminiscent of the dot-com days of the late 1990s, whether it is stories of newly minted bitcoin millionaires, the undeniable speculation rampant throughout the market, or the emergence of many weird things. While growing and expanding, the actual use cases of the blockchain still trail behind.
Taking a step back from the immediate frothiness, however, it seems that the crypto world has hit the point of no return, vaulting from a fringe movement into the mainstream collective consciousness, with strong interest both from the public and Wall Street. The blockchain has cemented its position as a new paradigm, which will only grow in importance, offering new solutions to the world, and new opportunities to entrepreneurs.
For proponents of the Internet of Things, the last 12-18 months have been often frustrating. The Internet of Things (IoT) was supposed to be huge by now. Instead, the industry news has been dominated by a string of startup failures, as well as alarming security issues. Cisco estimated in a (controversial) study that almost 75% of IoT projects fail. And the Internet of Things certainly lost a part of its luster as a buzzword, easily supplanted in 2017 by AI and bitcoin.
Interestingly, however, the Internet of Things continues its inexorable march towards massive scale. 2017 was most likely the year when the total number of IoT devices (wearables, connected cars, machines, etc.) surpassed mobile phones. Global spending in the space continues to accelerate – IDC forecast it to hit $800 billion in 2017, a 16.7% increase over the previous year’s number.
A few days ago, I sat down with Sam DeBrule of Machine Learnings for a broad conversation about AI and startups. We got into a number of topics including creative data acquisition tactics, data network effects, and what makes AI startups different.
Last year, we asked “Is Big Data Still a Thing?”, observing that since Big Data is largely “plumbing”, it has been subject to enterprise adoption cycles that are much slower than the hype cycle. As a result, it took several years for Big Data to evolve from cool new technologies to core enterprise systems actually deployed in production.
In 2017, we’re now well into this deployment phase. The term “Big Data” continues to gradually fade away, but the Big Data space itself is booming. Everywhere, we’re seeing anecdotal evidence pointing to more mature products, more substantial adoption in Fortune 1000 companies, and rapid revenue growth for many startups.
Meanwhile, the froth has indisputably moved to the machine learning and artificial intelligence side of the ecosystem. AI experienced in the last few months a “Big Bang” in collective consciousness not entirely dissimilar to the excitement around Big Data a few years ago, except with even more velocity.
2017 is also shaping up to be an exciting year from another perspective: long-awaited IPOs. The first few months of this year have seen a burst of activity for Big Data startups on that front, with warm reception from the public markets.
All in all, in 2017 the data ecosystem is firing on all cylinders. As every year, we’ll use the annual revision of our Big Data Landscape to do a long-form, “State of the Union” roundup of the key trends we’re seeing in the industry.
What goes up must go down, and the hype around AI will inevitably deflate sooner or later.
One unfortunate consequence of the hype is that it created the widely shared perception that AI had, seemingly overnight, reached a stage where it can be fully automated, leading both to endless possibilities, as well as concerns about its impact on jobs and society.
However, this is not the reality just yet and, both in private conversations and on social media, I’m starting to increasingly sense a backlash – the general theme being that “so many humans are involved behind the scenes” in various AI products or companies. This is sometimes delivered in Theranos-like tones, as if the horrible underbelly of the beast were about to be exposed.
So let’s make it clear: today, scores of humans are involved just about everywhere in AI, whether in tiny startups or massive tech companies. In fact, most AI products are very much NOT fully automated, at least not in an end-to-end, 100% bulletproof way. It is probably ok for the general press to get a bit carried away with AI. However, we in the tech industry should probably better understand this reality, and acknowledge it as a necessary step in the process of building a major new wave of technology products.
A few months ago, Foursquare achieved an impressive feat by predicting, ahead of official company results, that Chipotle’s Q1 2016 sales would be down nearly 30%. Because it captures geo-location data from both check-ins and visits through its apps, Foursquare was able to extrapolate foot-traffic stats that turned out to be very accurate predictors of financial performance.
That a social media company could be building a data asset of immense value to Wall Street is part of an accelerating trend known as “alternative data”. As just about everything in our lives is getting sensed and captured by technology, financial services firms have been turning their attention to startups, with the hope of mining their data to extract the type of gold nuggets that will enable them to beat the market.
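The Foursquare example boils down to a simple idea: if you observe a representative slice of a retailer’s foot traffic, its year-over-year change can serve as a proxy for the sales change the company will later report. Here is a toy illustration of that arithmetic; the visit counts are made up, and a real pipeline would add panel-bias corrections, store-coverage weighting and much more.

```python
def yoy_change(current, prior):
    """Year-over-year fractional change, e.g. -0.30 for a 30% drop."""
    return (current - prior) / prior

# Hypothetical quarterly foot-traffic counts captured via an app's
# check-ins and background visits.
visits_q1_prior = 1_000_000
visits_q1_current = 710_000

traffic_change = yoy_change(visits_q1_current, visits_q1_prior)
print(f"Implied sales change: {traffic_change:.0%}")  # Implied sales change: -29%
```

The value to a fund lies in timing: the foot-traffic signal is available continuously, weeks before the company discloses official results.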
Could working with Wall Street be a business model for you?
The opportunity is open to a wide range of startups. Many tech companies these days generate an interesting “data exhaust” as a by-product of their core activity. If your company offers a payment solution, you may have interesting data on what people buy. A mobile app may accumulate geo-location data on where people shop or how often they go to the movies. A connected health device may know who gets sick when and where. A commerce company may have data on trends and consumer preferences. A SaaS provider may know what corporations purchase, or how many employees they hire, in which region. And so on and so forth.
At the same time, this is a tricky topic, with a lot of misunderstandings. The hedge fund world is very different from the startup world, and a lot gets lost in translation. Rumors about hedge funds paying “millions” for data sets abound, which has created a distorted perception of the size of the financial opportunity. A fair number of startups I speak with do incorporate the idea of selling data to Wall Street into their business plan and VC pitches, but how that would work exactly remains generally very fuzzy.
If you’re one of the many startups sitting on a growing data asset and trying to figure out whether you can make money selling it to Wall Street, this post is for you: a deep dive to provide context, clarify concepts and offer some practical tips.