The Potential of Artificial Intelligence

by Nathan Benaich
Crunch Network
December 24, 2015

Artificial intelligence is one of the most exciting and transformative opportunities of our time. First, with 40 percent of the world’s population now online and more than 2 billion smartphones in ever more habitual daily use (KPCB), we’re creating data assets, the raw material for AI, that describe our behaviors, interests, knowledge, connections and activities at a level of granularity that has never existed.

Second, the costs of compute and storage are both plummeting by orders of magnitude, while the computational capacity of today’s processors is growing, making AI applications possible and affordable.

Third, we’ve seen significant improvements recently in the design of learning systems, architectures and software infrastructure that, together, promise to further accelerate the speed of innovation. Indeed, we don’t fully appreciate what tomorrow will look and feel like.

We also must realize that AI-driven products are already out in the wild, improving the performance of search engines, recommender systems (e.g., e-commerce, music), ad serving and financial trading (amongst others).

Companies with the resources to invest in AI are already creating an impetus for others to follow suit, lest they lose their competitive seat at the table. The community, therefore, now has a better understanding of learning systems and is equipped with more capable tools for building them to tackle a wide range of increasingly complex tasks.

How Might You Apply AI Technologies?

With such a powerful and generally applicable technology, AI companies can enter the market in different ways. Here are six to consider, along with example businesses that have chosen these routes:

  1. There are vast amounts of enterprise and open data available in various data silos, whether on the web or on-premise. Making connections between these silos enables a holistic view of a complex problem, from which new insights can be identified and used to make predictions (e.g., DueDil*, Premise and Enigma).
     
  2. Leverage the domain expertise of your team and address a focused, high-value, recurring problem using a set of AI techniques that compensate for the shortfalls of humans (e.g., Sift Science or Ravelin* for online fraud detection).
     
  3. Productize existing or new AI frameworks for feature engineering, hyperparameter optimization, data processing, algorithms, model training and deployment (amongst others) for a wide variety of commercial problems (e.g., H2O.ai, Seldon* and SigOpt); a minimal sketch of what this looks like in code follows this list.
     
  4. Automate the repetitive, structured, error-prone and slow processes conducted by knowledge workers on a daily basis using contextual decision making (e.g., Gluru, x.ai and SwiftKey).
     
  5. Endow robots and autonomous agents with the ability to sense, learn and make decisions within a physical environment (e.g., Tesla, Matternet and SkyCatch).
     
  6. Take the long view and focus on research and development (R&D) to take on risks that would otherwise be relegated to academia, where strict budgets mean such work often isn’t pursued anymore (e.g., DNN Research, DeepMind and Vicarious). A key consideration, however, is that the open sourcing of technologies by large incumbents (Google, Microsoft, Intel, IBM) and the range of companies productizing technologies for cheap mean that technical barriers are eroding fast. What ends up moving the needle are proprietary data access/creation, experienced talent and addictive products.
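
To make item 3 concrete, here’s a minimal sketch of automated hyperparameter search, assuming scikit-learn and a toy dataset; the model, parameter range and data are illustrative assumptions, not any vendor’s actual pipeline.

```python
# Hedged sketch: random search over a regularization hyperparameter.
# The dataset, model and search space are illustrative assumptions.
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV

# Toy data standing in for a customer's labeled training set.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Sample 20 candidate values of C instead of hand-tuning it.
search = RandomizedSearchCV(
    LogisticRegression(max_iter=1000),
    param_distributions={"C": loguniform(1e-3, 1e2)},
    n_iter=20,
    cv=5,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

Products in this space wrap a loop like this behind an API and typically replace the random sampling with smarter search strategies, such as Bayesian optimization.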

Build With The User In The Loop

There are two big factors that make keeping the user in the loop of an AI-driven product paramount. One, machines don’t yet recapitulate human cognition, so to pick up where software falls short we need to call on the user for help. And two, buyers/users of software products have more choice today than ever. As such, they’re often fickle (the average 90-day retention for apps is 35 percent).

Delivering the expected value out of the box is key to building habits (hyperparameter optimization can help). Here are some great examples of products that prove that involving the user in the loop improves performance (see the sketch after this list):

  • Search: Google uses autocomplete as a way of understanding and disambiguating language/query intent.
  • Vision: Google Translate or Mapillary traffic sign detection enables the user to correct results.
  • Translation: Unbabel community translators perfect machine-generated translations.
  • Email Spam Filters: Google, again, to the rescue.
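
Mechanically, the loop behind examples like these can be as simple as folding every user correction back into an online learner. Here’s a minimal sketch for the spam-filter case, assuming scikit-learn; the messages, labels and the handle_message helper are hypothetical.

```python
# Hedged sketch: a spam filter that learns from user corrections.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**16)
model = SGDClassifier(loss="log_loss", random_state=0)

# Bootstrap on a tiny seed set (0 = ham, 1 = spam); both examples are made up.
seed_texts = ["win a free prize now", "meeting moved to 3pm"]
model.partial_fit(vectorizer.transform(seed_texts), [1, 0], classes=[0, 1])

def handle_message(text, user_correction=None):
    """Predict, then fold any user correction back into the model."""
    features = vectorizer.transform([text])
    prediction = model.predict(features)[0]
    if user_correction is not None and user_correction != prediction:
        model.partial_fit(features, [user_correction])  # the user in the loop
    return prediction

handle_message("claim your free prize")               # model's current guess
handle_message("lunch tomorrow?", user_correction=0)  # user teaches the model
```

The same pattern generalizes from spam to translation post-editing or sign detection: every correction is free labeled data.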

We can even go a step further, I think, by explaining how machine-generated results are obtained. For example, IBM Watson surfaces relevant literature when supporting a patient diagnosis in the oncology clinic. Doing so improves user satisfaction and helps build confidence in the system to encourage longer-term use and investment. Remember, it’s generally hard for us to trust something we don’t truly understand.
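
One lightweight way to do this, at least for linear models, is to decompose a prediction into per-feature contributions and show the user which inputs drove the result. The sketch below uses invented features and data purely to illustrate the principle; it is not Watson’s actual method.

```python
# Hedged sketch: explain a linear model's prediction via per-feature
# contributions to the log-odds. Features and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["tumor_size_mm", "marker_level", "patient_age"]
X = np.array([[12.0, 0.8, 54], [30.0, 2.1, 67], [8.0, 0.3, 45], [25.0, 1.9, 70]])
y = np.array([0, 1, 0, 1])  # hypothetical labels

model = LogisticRegression(max_iter=1000).fit(X, y)

def explain(x):
    # Each feature's contribution to the log-odds of the positive class.
    contributions = model.coef_[0] * x
    for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
        print(f"{name}: {c:+.3f}")

explain(X[1])
```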

Which Problems Remain To Be Solved for Healthcare?

I spent a number of summers in university and three years in grad school researching the genetic factors governing the spread of cancer around the body. A key takeaway I left with is the following: therapeutic development is very challenging, expensive, lengthy and regulated, and ultimately offers a transient solution to treating disease.

Instead, I truly believe that what we need to improve healthcare outcomes is granular and longitudinal monitoring of physiology and lifestyle. This should enable early detection of health conditions in near real time, driving down cost of care over a patient’s lifetime while consequently improving outcomes.

Consider the digitally connected lifestyles we lead today. The devices some of us interact with on a daily basis are able to track our movements, vital signs, exercise, sleep and even reproductive health. We’re disconnected for fewer hours of the day than we’re online, and I think we’re less apprehensive about storing various data types in the cloud (where they can be accessed, with consent, by third parties). Sure, the news might paint a different story, but the fact is that we’re still using the web and its wealth of products.

On a population level, therefore, we have the chance to interrogate data sets that have never before existed. From these, we could glean insights into how nature and nurture influence the genesis and development of disease. That’s huge.

Look at today’s clinical model. A patient presents at the hospital when they feel something is wrong. The doctor must conduct a battery of tests to derive a diagnosis. These tests address a single (often late-stage) time point, at which moment little can be done to reverse damage (e.g., in the case of cancer).

Now imagine the future. In a world of continuous, non-invasive monitoring of physiology and lifestyle, we could predict disease onset and outcome, understand which condition a patient likely suffers from and how they’ll respond to various therapeutic modalities. There are loads of applications for artificial intelligence here: intelligent sensors, signal processing, anomaly detection, multivariate classifiers, deep learning on molecular interactions…
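
To give a flavor of the anomaly-detection piece, here’s a minimal sketch that flags heart-rate readings drifting away from a rolling baseline; the simulated signal, window size and threshold are all assumptions, and real clinical monitoring would demand far more careful modeling.

```python
# Hedged sketch: rolling z-score anomaly detection on a simulated heart rate.
import numpy as np

rng = np.random.default_rng(0)
heart_rate = 70 + rng.normal(0, 2, 500)  # simulated resting heart rate (bpm)
heart_rate[400:] += 15                   # simulated sustained elevation

window = 60  # rolling baseline over the last 60 readings
for t in range(window, len(heart_rate)):
    baseline = heart_rate[t - window:t]
    z = (heart_rate[t] - baseline.mean()) / baseline.std()
    if abs(z) > 4:  # far outside this patient's recent normal range
        print(f"t={t}: {heart_rate[t]:.1f} bpm deviates from baseline (z={z:.1f})")
        break
```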

Some companies are already hacking away at this problem:

  • Sano: Continuously monitor biomarkers in blood using sensors and software.
  • Enlitic/MetaMind/Zebra Medical: Vision systems for decision support (MRI/CT).
  • Deep Genomics/Atomwise: Learn, model and predict how genetic variation influences health/disease and how drugs can be repurposed for new conditions.
  • Flatiron Health: Common technology infrastructure for clinics and hospitals to process oncology data generated from research.
  • Google: Filed a patent covering an invention for drawing blood without a needle. This is a small step toward wearable sampling devices.
A point worth noting is that the U.K. has a slight leg up on the data access front. Initiatives like the U.K. Biobank (500,000 patient records), Genomics England (100,000 genomes sequenced), HipSci (stem cells) and the NHS care.data program are leading the way in creating centralized data repositories for public health and therapeutic research.

Enterprise Automation

Could businesses ever conceivably run themselves? AI-enabled automation of knowledge work could cut employment costs by $9 trillion by 2020 (BAML). Coupled with efficiency gains worth $1.9 trillion driven by robots, I reckon there’s a chance for near-complete automation of core, repetitive business functions in the future.

Think of all the productized SaaS tools that are available off the shelf for CRM, marketing, billing/payments, logistics, web development, customer interactions, finance, hiring and BI. Then consider tools like Zapier or Tray.io, which help connect applications and program business logic. These could be further expanded by leveraging contextual data points that inform decision making, as sketched below.
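
For instance, a Zapier-style rule could combine an order’s status with the sentiment of a customer’s message to decide whether a human needs to step in. Everything in this sketch (the field names, the toy sentiment scorer, the thresholds) is a hypothetical illustration.

```python
# Hedged sketch: contextual routing of a support ticket.
def sentiment_score(text):
    """Toy stand-in for a real sentiment model: counts negative keywords."""
    negative = {"late", "broken", "refund", "angry", "never"}
    return -sum(word in negative for word in text.lower().split())

def route_ticket(ticket):
    delayed = ticket["order_status"] == "delayed"
    unhappy = sentiment_score(ticket["message"]) <= -2
    if delayed and unhappy:
        return "escalate_to_human"   # context plus sentiment says step in
    if delayed:
        return "send_delay_apology"  # context alone triggers a template
    return "auto_reply_faq"

ticket = {"order_status": "delayed",
          "message": "my order is late and broken and i want a refund"}
print(route_ticket(ticket))  # -> escalate_to_human
```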

Perhaps we could eventually re-imagine eBay, where you’ll have fully automated inventory procurement, pricing, listing generation, translation, recommendations, transaction processing, customer interaction, packaging, fulfillment and shipping. Of course, this is probably a ways off.

I’m bullish on the value to be created with artificial intelligence across our personal and professional lives. But VC risk tolerance for this sector is currently low, especially given shortening investment horizons for value to be created. More support is needed for companies driving long-term innovation, especially as far less of that work now occurs within universities. VC was born to fund moonshots.

We must remember that access to technology will, over time, become commoditized. It’s therefore key to understand your use case, your user, the value you bring and how it’s experienced and assessed. This gets to the point of finding a strategy to build a sustainable advantage such that others find it hard to replicate your offering.

Aspects of this strategy may in fact be non-AI and non-technical in nature (e.g., the user experience layer). As such, there’s renewed focus on core principles: build a solution to an unsolved/poorly served high-value, persistent problem for consumers or businesses.

Finally, you must have exposure to the U.S. market, where the lion’s share of value is created and realized. We have an opportunity to catalyze the growth of the AI sector in Europe, but not without keeping close tabs on what works/doesn’t work across the pond.


