The AI industry in the next 5 years

Jay Parthasarthy
6 min read · May 1, 2018

How the AI industry is going to change in the next 5 years.

I often see a divide between research and applied AI. They’re often separate teams within companies, and they get separate blog pages. Often, this makes sense, but when we’re thinking about the AI industry as a whole, it’s important to realize that only the combination of both frontiers moves the needle.

In Toronto, I’ve gotten the chance to see the industry from both the research and commercial perspective. I’ve gotten perspective from industry leaders, mentorship from engineers and research professors: I’ve even founded my own company. And across the board, I see some wide trends that will shape the way the industry moves in the next 5 years.

1. Greater Knowledge Development

As time goes on, the total level of knowledge within any group of people increases. We see this knowledge manifest in many different ways across sectors. The AI industry is very much still reaching its maturity, and this growth will manifest in a few crucial ways.

When tackling a new dataset or objective, a very common and important heuristic for machine learning engineers is to find existing models and adjust them for the task at hand. An engineer can determine whether a simple or a deep model is best for a given application, but it's hard to find the correct hyperparameters for a given task or dataset unless it's been thoroughly understood through past applications.

The canonical example is image recognition: building an image classifier is now a trivial task on most datasets. While transfer learning is a big part of this, even if we didn't use it, existing architectures would inform our hyperparameter decisions.

As the industry grows, we’ll get better at two things: we’ll have a larger knowledge base to draw from, and we will understand how to apply past knowledge better. This is especially impactful on deep models where it’s harder to choose hyperparameters and architectures.
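To make the transfer-learning idea above concrete, here's a toy sketch: a frozen "pretrained" layer (just a fixed random projection standing in for the convolutional base of a real image model) provides features, and only a small linear head is trained on the new task. The data, dimensions, and learning rate are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs in 20 dimensions.
X = np.vstack([rng.normal(-1, 1, (100, 20)), rng.normal(1, 1, (100, 20))])
y = np.array([0] * 100 + [1] * 100)

# "Pretrained" feature extractor: frozen weights, standing in for
# the reusable base of an existing model. It is never updated.
W_frozen = rng.normal(size=(20, 8))
features = np.tanh(X @ W_frozen)

# Train only a small logistic-regression head on the frozen features.
w, b = np.zeros(8), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(features @ w + b)))  # sigmoid predictions
    w -= 0.5 * features.T @ (p - y) / len(y)   # logistic-loss gradient
    b -= 0.5 * np.mean(p - y)

accuracy = np.mean((features @ w + b > 0) == y)
print(f"head-only accuracy: {accuracy:.2f}")
```

Because only the tiny head is trained, very little data and tuning is needed, which is exactly why starting from existing models is such a useful heuristic.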

We saw this happening recently in 3D object detection. Because there was little domain knowledge floating around, specialized 3D localization tasks (like in manufacturing) were hard to execute on. In fact, most companies in this space chose not to implement AI because it was too onerous, even though it would offer greater flexibility and accuracy.

However, domain knowledge exploded as the technology became more important for self-driving cars, and now we see the technology being implemented in tons of manufacturing applications.

It’s important to note that I’m not talking specifically about research, although integrating research into applications is something that we’re getting better at all the time.

2. Better AI-focused design

Companies are getting better at applying AI in consumer-facing applications.

A product is only powerful when it has to address a need. To quote Google Design: “If you aren’t aligned with a human need, you’re just going to build a very powerful system to address a very small — or perhaps nonexistent — problem.” And even when you’re tackling the right problem, if you don’t get the digital “form factor” of your system right, it may not be able to fill that need.

These design considerations rest at the AI-UX intersection (https://design.google/library/ux-ai/), and we're getting much better at navigating it. For example, many UX designers now understand core AI ideas and principles, meaning that interdisciplinary teams can become more integrated.

Companies have also developed guidelines and principles that better guide AI design, allowing for better application of the technology. My favorite read is this article. While some of these guidelines may seem simple, we’ve spent years without implementing them! The marriage of design and engineering in ML will greatly aid adoption.

3. Reinforcement learning

Okay, I may be calling this one a bit early. Will reinforcement learning change the AI landscape within 5 years? Probably not, but there will come a day when a large portion of what supervised and unsupervised learning does today will be done with reinforcement learning.

Reinforcement learning is getting better in two important ways:

  1. Our algorithms are getting better at learning in reward-based environments.
  2. We are getting better at learning how to accomplish goals with fuzzier success metrics.
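As a concrete picture of the first point, here's a minimal tabular Q-learning sketch on a made-up five-state corridor with a single reward at the far end. The environment, learning rate, and discount are illustrative choices, not anything from a specific paper.

```python
import numpy as np

# A 5-state corridor: start at state 0, reward 1 only on reaching state 4.
# Actions: 0 = left, 1 = right.
n_states, n_actions, goal = 5, 2, 4
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

for episode in range(300):
    s = 0
    while s != goal:
        # Uniform random exploration; Q-learning is off-policy,
        # so it still learns the greedy policy's values.
        a = int(rng.integers(n_actions))
        s_next = min(s + 1, goal) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == goal else 0.0
        # Standard Q-learning update with learning rate 0.5, discount 0.9.
        Q[s, a] += 0.5 * (r + 0.9 * np.max(Q[s_next]) - Q[s, a])
        s = s_next

greedy_policy = np.argmax(Q, axis=1)
print(greedy_policy)  # expect action 1 ("go right") in states 0..3
```

The agent is never told how to reach the goal; it discovers the "always go right" policy purely from the reward signal, which is the essence of learning in reward-based environments.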

For example, just reading OpenAI's publications from the last year shows how quickly reinforcement learning is advancing.

These research developments are critical for applied machine learning, and will absolutely change how we create AI systems in the future. Imagine building a recommender system today. Multiple objectives have to be balanced, and we have to learn how a dozen variables interact with each other. A whole team needs to be assembled to build this one application.

However, imagine we could use reinforcement learning in this case instead. If it can accurately estimate a value function over future outcomes, it can balance these goals implicitly. Your team can be replaced by an algorithm.

It's hard to overstate how impactful reinforcement learning will be on robotics. We have no idea how to solve many control system problems. Reinforcement learning can finally tackle these challenges when it hits its stride.

4. Increased explainability

Even if we can train a classifier to a high validation accuracy, if it's making high-stakes decisions, we often can't put it to use. Even if it beats human doctors on quantitative metrics, an AI doctor can't make a final diagnosis.

This is because a neural network transforms data in a way that we, as humans, can't really parse. When neural networks make mistakes, we just can't figure out why. However, when making diagnoses, human doctors follow very stringent criteria. For example, here are the criteria for deciding whether lung nodules are cancerous (https://www.aafp.org/afp/2015/1215/p1084.html). A radiologist can clearly justify a decision they make based on these guidelines, in a way that a neural network cannot.

This is why AI companies in the healthcare space don't just slap a classifier on a diagnosis problem and call it a product. There's a lot of nuance to AI product design. One of my favorite AI companies, Arterys, doesn't actually do any diagnosis; it just automates repetitive parts of the workflow (e.g. calculating blood flow to the heart) using thoughtfully implemented AI. Consequently, they don't have to justify any decisions their AI is making: it's just a smarter tool for a radiologist to use.

The uses of explainability go far beyond just healthcare. The question we're all tired of hearing, the self-driving car trolley problem, can be addressed with increases in explainability. Even when things don't go wrong, better explainability increases adoption by taking the fear out of the decisions an AI makes.

We're seeing progress in this area every day. My new favorite article is about visualizing classifications via CNNs, and it goes to show the strides we're making in the space. That article explores combining existing explanatory techniques, and the results can be breathtaking.
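To make the idea of explaining a model's decision concrete, here's a minimal gradient-based saliency sketch on a toy logistic-regression model, a simple stand-in for the CNN visualization techniques discussed above: the gradient of the predicted score with respect to the input shows which features drove the decision. The data and model are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy classifier where only the first two of ten features carry signal.
X = rng.normal(size=(500, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Train a logistic regression by full-batch gradient descent.
w, b = np.zeros(10), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

# Saliency: gradient of the predicted probability w.r.t. the input.
# For this model that gradient is w * p * (1 - p), so the per-feature
# magnitude says which inputs influenced this prediction most.
x = rng.normal(size=10)
p = 1 / (1 + np.exp(-(x @ w + b)))
saliency = np.abs(w * p * (1 - p))
print(np.argsort(saliency)[-2:])  # indices of the two most influential features
```

Even this crude explanation correctly singles out the two signal-carrying features; richer versions of the same gradient idea are what power the CNN visualizations.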

In my mind, there’s no doubt that the way we approach explainability will shape the way AI develops in the next 5 years.

Conclusions

There are many more things that I'd like to talk about in this article, but I think these 4 general trends will be the most high-impact for the industry. I don't think there will be any crazy developments that revolutionize the industry, just the kind of growth we might see with the maturing of any industry. AI is finally reaching maturity: its applications are becoming much more cohesive, they're becoming much easier to create, and governance is increasing.

My prediction: a slow build in knowledge and processes will shape how AI is created and perceived in the next 5 years.

I do think reinforcement learning is sick, though.
