“We are, by far, the earliest company here”. This is how Zach Perret, CEO of Plaid, started his talk at his first appearance at Data Driven NYC, back in February 2013. “We are basically three guys, coding 24 hours a day, and building developer tools…”.
Fast forward to today: the company was valued at $2.7B (“allegedly”, says Zach) in its most recent $250M round; Plaid has integrated with 15,000 banks in the U.S. and Canada and 4,000 fintech applications. One in four people in the U.S. have linked an account using Plaid. And they have just acquired New York-based competitor Quovo for $200M (“reportedly” as well).
Not bad for a self-described “data plumbing” company. As today’s consumers expect to live fully digital financial lives, with their phone at the core, Plaid provides the financial infrastructure that enables developers in fintech companies to build great applications, and have consumers connect those to their bank accounts – basically Plaid is the connective tissue between the app and the bank, and takes care of moving all the data back and forth in the background.
It was a lot of fun having Zach back at the event 6 years later. Here’s the video of our fireside chat, and my notes are below the fold.
Perhaps this is slightly strange for an early stage VC, but I’m fascinated by entrepreneurs who bootstrap their tech startups and build them into very large, industry-leading companies.
The odds of building a massive company are low enough for the lucky few who manage to raise tens (or hundreds) of millions in venture capital money, but doing it with no outside investment? That is a really hard way to do it.
It can be a really long journey, as well. In fact, for all the obvious advantages of bootstrapping (less/no dilution, more control, etc.), the main trade-off involved in bootstrapping seems to be… time. It simply takes longer to build a product and get to early scale when growth is funded by cash flow (or a small amount of debt or founder money).
I tweeted this a couple of days ago, and it led to an interesting thread:
Not so long ago, AI startups were the new shiny object that everyone was getting excited about. It was a time of seemingly infinite promise: AI was going to not just redefine everything in business, but also offer entrepreneurs opportunities to build category-defining companies.
A few years (and billions of dollars of venture capital) later, AI startups have re-entered reality. Time has come to make good on the original promise, and prove that AI-first startups can become formidable companies, with long term differentiation and defensibility.
In other words, it is time to go from “starting” mode to “scaling” mode.
To be clear: I am as bullish on the AI space as ever. I believe AI is a different and powerful enough technology that entire new industry leaders can be built by leveraging it, as long as it is applied to the right business problems.
At the same time, I have learned plenty of lessons in the last three or four years by being on the board of AI startups, and talking to many AI entrepreneurs in the context of Data Driven NYC. I’ll be sharing some notes here.
This post is a sequel to a presentation I made almost three years ago at the O’Reilly Artificial Intelligence conference, entitled “Building an AI Startup: Realities & Tactics“, which covered a lot of core ideas about starting an AI company: building a team, acquiring data, finding the right market positioning. A lot of those concepts still hold, and this post will focus more on specific lessons around scaling.
I spend a lot of time thinking about hype cycles, across industries (Big Data/AI, IoT) and ecosystems (New York).
Whether you use the Carlota Perez surge cycle (see this great Fred Wilson post) or the Gartner version, hype cycles convey the fundamental idea that technology markets don’t develop linearly, but instead go through phases of boom and bust before they reach wide adoption.
Hype cycles are a great framework for investors (and founders), because entering the market at the right time is both crucial and very hard.
For proponents of the Internet of Things, the last 12-18 months have often been frustrating. The Internet of Things (IoT) was supposed to be huge by now. Instead, the industry news has been dominated by a string of startup failures, as well as alarming security issues. Cisco estimated in a (controversial) study that almost 75% of IoT projects fail. And the Internet of Things certainly lost a part of its luster as a buzzword, easily supplanted in 2017 by AI and bitcoin.
Interestingly, however, the Internet of Things continues its inexorable march towards massive scale. 2017 was most likely the year when the total number of IoT devices (wearables, connected cars, machines, etc.) surpassed mobile phones. Global spending in the space continues to accelerate – IDC was forecasting it to hit $800 billion in 2017, a 16.7% increase over the previous year’s number.
A few days ago, I sat down with Sam DeBrule of Machine Learnings for a broad conversation about AI and startups. We got into a number of topics including creative data acquisition tactics, data network effects, and what makes AI startups different.
A few months ago, Foursquare achieved an impressive feat by predicting, ahead of official company results, that Chipotle’s Q1 2016 sales would be down nearly 30%. Because it captures geo-location data from both check-ins and visits through its apps, Foursquare was able to extrapolate foot-traffic stats that turned out to be very accurate predictors of financial performance.
That a social media company could be building a data asset of immense value to Wall Street is part of an accelerating trend known as “alternative data”. As just about everything in our lives is getting sensed and captured by technology, financial services firms have been turning their attention to startups, with the hope of mining their data to extract the type of gold nuggets that will enable them to beat the market.
Could working with Wall Street be a business model for you?
The opportunity is open to a wide range of startups. Many tech companies these days generate an interesting “data exhaust” as a by-product of their core activity. If your company offers a payment solution, you may have interesting data on what people buy. A mobile app may accumulate geo-location data on where people shop or how often they go to the movies. A connected health device may know who gets sick when and where. A commerce company may have data on trends and consumer preferences. A SaaS provider may know what corporations purchase, or how many employees they hire, in which region. And so on and so forth.
At the same time, this is a tricky topic, with a lot of misunderstandings. The hedge fund world is very different from the startup world, and a lot gets lost in translation. Rumors about hedge funds paying “millions” for data sets abound, which has created a distorted perception of the size of the financial opportunity. A fair number of startups I speak with do incorporate the idea of selling data to Wall Street into their business plans and VC pitches, but how that would work exactly remains generally very fuzzy.
If you’re one of the many startups sitting on a growing data asset and trying to figure out whether you can make money selling it to Wall Street, this post is for you: a deep dive to provide context, clarify concepts and offer some practical tips.
Over the last few months, the usual debate around unicorns and bubbles seems to have been put on hold a bit, as fears of a major crash have thankfully not materialized, at least for now.
Instead another discussion has emerged, one that’s actually probably more fundamental. What’s next in tech? Which areas will produce the Googles and Facebooks of the next decade?
What’s prompting the discussion is a general feeling that we’re on the tail end of the most recent big wave of innovation, one that was propelled by social, mobile and cloud. A lot of great companies emerged from that wave, and the concern is whether there’s room for a lot more “category-defining” startups to appear. Does the world need another Snapchat? (see Josh Elman’s great thoughts here). Or another marketplace, on-demand company, food startup, peer to peer lending platform? Isn’t there a SaaS company in just about every segment now? And so on and so forth.
One alternative seems to be “frontier tech”: a seemingly heterogeneous group that includes artificial intelligence, the Internet of Things, augmented reality, virtual reality, drones, robotics, autonomous vehicles, space, genomics, neuroscience, and perhaps the blockchain, depending on who you ask.
As we are perhaps reaching the end of a cycle of innovation in tech – the one that resulted from the simultaneous emergence of social, mobile and cloud – and collectively pondering what’s next, one of the areas I’ve found particularly exciting recently is the intersection of Big Data and life sciences.
A little over two years ago, in connection with my investment in Recombine, a genomics startup, I wrote (here) about another powerful combination of trends: the sharp drop in the cost of sequencing the human genome, the maturation of Big Data technologies, and the increasing commoditization of wet lab work.
The fundamental premise was, and still very much is, as follows:
In the furiously competitive world of tech startups, where good entrepreneurs tend to think of comparable ideas around the same time and “hot spaces” get crowded quickly with well-funded hopefuls, competitive moats matter more than ever. Ideally, as your startup scales, you want to not only be able to defend yourself against competitors, but actually find it increasingly easier to break away from them, making your business more and more unassailable and leading to a “winner take all” dynamic. This sounds simple enough, but in reality many growing startups, including some well-known ones, experience exactly the reverse (higher customer acquisition costs resulting from increased competition, core technology that gets replicated and improved upon by competitors that started later and learned from your early mistakes, etc.).
While there are various types of competitive moats, such as a powerful brand (Apple) or economies of scale (Oracle), network effects are particularly effective at creating this winner takes all dynamic, and have been associated with some of the biggest success stories in the history of the Internet industry.
Network effects come in different flavors, and today I want to talk about a specific type that has been very much at the core of my personal investment thesis as a VC, resulting from my profound interest in the world of data and machine learning: data network effects.
We’re about to see a lot more 3D content in our digital lives. Various trends, some years in the making, are now intersecting to make this a near-term reality.
On the production side, 3D has of course existed for many years – this has been, in particular, the world of Computer Aided Design (CAD), which originated in part from MIT’s Sketchpad project in the early sixties. In one form or another, 3D has been used as a professional format across many industries, such as architecture, engineering, construction, and entertainment. Creation of 3D content (even for consumer-facing products like gaming) has remained largely the province of a comparatively small group of specialized professionals.
Among all the excitement for the Internet of Things and the resurgence of hardware as an investable category, venture capitalists, many of whom are new to the space, have been re-discovering the opportunities and challenges of working alongside entrepreneurs to build hardware companies. Below are the slides that David Rogg and I prepared for the recent Connected Conference, a great global event held in Paris. They’re a good snapshot of how someone like me thinks about the hardware space, mid-2015.
The venture financing path has evolved incredibly fast over the last 18 months. In this very busy financing market, what used to be a reasonably well understood progression from a seed round to a Series A to a Series B, etc. has now morphed into a more complex nomenclature of pre-seeds ($500k or less), crowdfunding rounds (especially for hardware), seeds ($1M-$2M, 6-9 months after the pre-seed), seed primes (an extra $1M or so, 12-18 months after the seed), Series A (now routinely $10-$12M in size, occasionally up to $15M), Series A-1, Series B, C, D, E, F etc. (as companies remain private longer).
The latest entrant in this rapidly evolving nomenclature seems to be what I’d call the “Straight to A” round, where the founders skip the seed stage altogether and raise directly a $5M-$10M Series A, often before building anything, sometimes even before incorporating a company. I had seen it here and there in the past, but it now seems to have become an accelerating trend.
A few days ago, I was invited to speak at a Yale Entrepreneurship Breakfast about one of my favorite areas of interest, Artificial Intelligence. Here are the slides from the talk — a primer on how AI rose from the ashes to become a fascinating category for startup founders and venture capitalists. Very much a companion to my earlier post about our investment in x.ai. Many thanks to my colleague Jim Hao, who worked with me on this presentation.
AI is experiencing an astounding resurrection. After so many broken promises, the term “artificial intelligence” had become almost a dirty word in technology circles. The field is now rising from the ashes. Researchers who had been toiling away in semi-obscurity over the last few decades have suddenly become superstars and have been aggressively recruited by the largest Internet companies: Yann LeCun (see his recent talk at our Data Driven NYC event here) by Facebook; Geoff Hinton by Google; Andrew Ng by Baidu. Google spent over $400 million to acquire DeepMind, a two-year-old secretive UK AI startup. The press and social media are awash with thoughts on AI. Elon Musk cautions us against its perils.
What’s different this time? As Irving Wladawsky-Berger pointed out in a Wall Street Journal article, “a different AI paradigm emerged. Instead of trying to program computers to act intelligently–an approach that hadn’t worked because we don’t really know what intelligence is– AI now embraced a statistical, brute force approach based on analyzing vast amounts of information with powerful computers and sophisticated algorithms.” In other words, the resurgence of AI is partly a child of Big Data, as better algorithms (in particular, what’s known as “deep learning”, pioneered by LeCun and others) have been enabled by larger than ever datasets and the ability to process those datasets at scale at reasonable cost.