AI Experts Predict 2021 Trends
2020 hasn’t quite gone to plan for many of us, but here’s to 2021! We asked some of our AI expert community what they think the prominent trends of 2021 could be. Included are contributions from MILA, Gartner, Facebook, Alexa AI, NASA, Mailchimp and more.
Andriy Burkov, Director of Data Science, Gartner
Fully autonomous self-driving cars will still be “ready next year.” Some companies, though, might start offering an autonomous “home-work-home” commute after some initial training from the user. Fully automated trucks will also start making coast-to-coast trips on the freeways (likely monitored remotely by a human operator), but the last mile will still be driven directly by humans.
Transformer-based pre-trained models, like GPT-3, will become even more uncomfortably good at faking intelligence, to the point that some people will argue that they are already intelligent.
Anirudh Koul, ML Lead, NASA, Incubation @ Pinterest & Author of O’Reilly’s Practical Deep Learning Book
Two things that keep me excited for 2021 — things that will go bigger, and things that will get smaller. And the best way to predict the future is to look at the past (and probably fit an LSTM on it).
In two years, we went from BERT (440 MB) to GPT-3 (350 GB). We can already rent supercomputers in the cloud with 285,000 CPU cores and 10,000 GPUs (built by Microsoft for OpenAI). We already carry 11.8 billion transistors in our pockets with the iPhone’s A14 chip. And training on ImageNet, which took months earlier in the decade, is now down to 90 seconds. So it’s foreseeable that the power of computing, models, and algorithms will keep growing exponentially, further unveiling new magical powers of AI.
On the other side, we can now achieve BERT accuracy with 233x faster inference on CPU (FastFormers). We can transmit video calls at 500x less bandwidth (NVIDIA Maxine). Time to train AutoML models went from 40,000 GPU-hours (MnasNet, 2018) to 3.75 hours (Single-Path NAS, 2019)! Putting models on a diet with pruning and quantization is no longer a research topic but effectively three lines of code that practitioners already write (TensorFlow Model Optimization Toolkit). The joy for little things will continue to grow in 2021, helping bring those magical experiences from powerful models to users on edge devices.
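For intuition, here is a minimal pure-Python sketch of the core idea behind post-training quantization, one of the "model diet" techniques mentioned above. The function names and rounding scheme are illustrative only, not the TensorFlow Model Optimization Toolkit API, which handles this per-layer with calibration:

```python
# Sketch of post-training 8-bit linear quantization: map float weights onto
# small integers plus one shared scale factor, shrinking storage roughly 4x
# versus float32 at the cost of a bounded rounding error.

def quantize(weights, num_bits=8):
    """Map float weights onto integers in [-(2^(b-1)), 2^(b-1) - 1]."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(codes, scale):
    """Recover approximate float weights from the integer codes."""
    return [v * scale for v in codes]

weights = [0.12, -0.5, 0.33, 0.07, -0.91]
codes, scale = quantize(weights)
restored = dequantize(codes, scale)
# Every restored weight lies within one quantization step of the original.
assert all(abs(w - r) <= scale for w, r in zip(weights, restored))
```

The same idea, applied tensor-by-tensor with a calibration pass to pick scales, is what production toolkits wrap into a few lines of configuration.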
Dr. Rana el Kaliouby, Co-Founder and CEO, Affectiva
We’ll see new use cases of Emotion AI to improve online collaboration and communication in light of the pandemic.
In the COVID-19 pandemic we are relying more than ever on video conferencing to connect us virtually — working remotely, learning from home, and in our social lives. But there’s a big problem: these technologies are emotion blind. When we communicate in person, we convey so much more than the words we say: we express ourselves through nonverbal cues from our faces, voices and body language. But technology is not designed to capture the nuances of how we interact with those around us. AI may be the answer to preserving our humanity in virtual environments. Specifically, Emotion AI — software that can understand nuanced human emotions and complex cognitive states based on facial and vocal expressions — can address some of technology’s shortcomings in light of the pandemic, and we’ll see companies using it for new use cases, such as:
- Video conferencing and virtual events — Emotion AI can provide insight on how people are emotionally engaging in a virtual event or meeting. This provides presenters with valuable audience feedback, gives participants a sense of shared experience, and can help companies take a pulse on collective engagement during this stressful time.
- Online learning — Emotion AI can give feedback on how students are engaging with online educational materials and lectures, flagging if they’re confused, stressed or bored. This becomes especially important during the pandemic as so many students are learning online and suffering from “Zoom fatigue.”
- Telehealth — Emotion AI can create more meaningful discussions and trust between patients and healthcare providers as telehealth appointments are replacing in-person visits. And, a data-driven analysis of a patient’s emotional wellbeing provides a quantitative measure of mental health that goes beyond self-reporting on a rating scale of 1–10.
Alexia Jolicoeur-Martineau, PhD Researcher, MILA
Variants of Denoising Score Matching with Annealed Langevin Sampling (DSM-ALS; https://arxiv.org/abs/1907.05600) and denoising diffusion (https://arxiv.org/abs/2006.11239) will start breaking records in generative modeling; they will beat SOTA GANs.
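For context, the annealed Langevin sampling step that these models build on iteratively refines a sample using a learned score function; a sketch of the update from the first linked paper, at noise level \(\sigma_i\):

\[
x_t = x_{t-1} + \frac{\alpha_i}{2}\, s_\theta(x_{t-1}, \sigma_i) + \sqrt{\alpha_i}\, z_t,
\qquad z_t \sim \mathcal{N}(0, I),\quad \alpha_i = \epsilon \cdot \sigma_i^2 / \sigma_L^2,
\]

where \(s_\theta\) is the trained score network and the noise levels \(\sigma_1 > \dots > \sigma_L\) are annealed from coarse to fine.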
We will see new metrics for generative modeling because the current metrics (inception score, FID) will have values close to perfection, yet photo-realism will still not be reached.
Chandra Khatri, Chief Scientist and Head of Conversational AI at Got-It.AI
The Surge of No-Code AI Platforms, Products, and Startups: The past few years were spent building powerful Deep Learning/AI toolkits such as PyTorch and TensorFlow. Engineers are now ready to build no-code AI platform and product layers on top of these toolkits, wherein users simply provide their data and select a model through a config or UI. AI models will not only be trained and served, but REST APIs will also be exposed to applications. Got-It AI’s no-code, self-discovering, self-training, and self-managing platform is an effort toward democratizing conversational AI. Microsoft’s recent “Lobe” app, which lets anyone train an AI model, is an effort in that direction as well.
AI for Sustainability: We are going through a phase where pandemics are becoming common (SARS, H1N1, COVID-19), and climate change is leading to massive forest fires, species extinction, and more frequent floods and droughts. We are going to see an increase in startups and initiatives (funded by large firms or entities) leveraging AI for sustainability. We have already seen several green-tech startups such as Fuergy and Facedrive; we might next see AI startups leveraging predictive modeling for climate change, pandemic prediction and mitigation, and urban mobility.
Abhishek Gupta, Founder and Principal Researcher, Montreal AI Ethics Institute and ML Engineer and CSE Responsible AI Board Member, Microsoft
My prediction (and sincere hope) is that ethics, safety, and inclusion become something that everybody practices in their AI work on a daily basis. Moreover, I expect concepts like differential privacy to become more mainstream and well-integrated into everyday practice. As newer organizations start to realize the value of utilizing AI for their work, I also foresee a lot of capacity-building work starting to take place in domains outside the traditional outlets where AI is applied. The concept of citizen data scientists will gain more traction as the tools become easier to use, and those tools will be applied in novel ways, by people who are part of a community, to solve problems that matter to that community. Finally, some problems like information pollution will intensify, and I believe this will usher in an era of knowledge-building and awareness efforts by organizations and activist groups to equip and empower everyday folks with the skills to navigate that environment better.
Shalini Ghosh, Principal Research Scientist, Alexa AI, Amazon
As people start interacting more with AI-enabled devices in their home and work environments, there will be a growing number of tasks where these devices help the user: assistants will help users complete tasks like renting a movie or ordering food online, and smart monitoring will keep the home secure through anomalous-event detection. Many of these tasks will be multimodal, involving processing and analysis of video, audio, speech and text data. So, in 2021 there will be continued interest in multimodal AI, as we saw in 2020. Along with that, there will be a paucity of labeled training data for many of the advanced tasks, which will motivate the community to further investigate techniques for learning in data-sparse environments, e.g., few-shot learning and self-supervised learning. Finally, since many of these AI-enabled tasks will be running on consumer devices, we will see more interest in studying on-device ML (and, in general, resource-constrained ML).
Muhammed Ahmed, Data Scientist, Mailchimp
Greater use of zero-shot annotation! In recent years, we’ve thoroughly explored how much we can tease out of large pre-trained NLP models. Many recent works have focused on using zero-shot (ZS) learners as out-of-the-box classifiers (Yin et al. (2019), Schick et al. (2020), Davidson (2020)).
In 2021, I expect to see ZS learning used more to annotate datasets than as out-of-the-box classifiers. The benefits of ZS annotation over ZS classification include:
- Free annotations: supervise datasets without the need for expensive annotation services like Amazon Mechanical Turk
- Annotation bootstrapping: convert an open-ended annotation task into a simple true/false annotation
- Privacy preserving annotations: helpful when annotating with sensitive data (e.g. health care, genetic data)
- Downstream training set example diversity by obtaining ZS labeled examples from several sources
- The ability to train new classifiers in the future
- Inference speed-up (in some cases, such as multiclass ZS NLI)
The rise of the full stack data scientist
For many machine learning tasks, feature engineering and modeling are no longer the hard parts, thanks to the large strides the machine learning community has made over the years. For natural language understanding and generation, we know to use transformers. For computer vision, we know to use a CNN. For tabular data, we know to use a tree method with bagging or boosting. This has freed up a lot of the time previously spent on solution design and experimentation, allowing us to train state-of-the-art models quickly. The new pain points for many data science teams are getting those models deployed and writing production-ready code, which requires software engineering and MLOps skills.
In 2021, I expect to see a greater need for machine learning engineers and full stack data scientists.
Vishakha Gupta, Founder & CEO, Aperture Data
As the field of ML and data science matures, the ML industry is evolving from improving performance and accuracy of a model on a particular dataset to addressing MLOps challenges. With current ML tools and platforms abstracting away details, going forward, I believe the focus will be more about reducing complexity, improving productivity, and demonstrating results on live company data (under a certain latency and footprint) than about proving feasibility.
Capturing business value from data via ML comprises multiple steps and is currently addressed by multiple siloed solutions that, when integrated, result in an inefficient system. Given that each of these steps interacts with data, offering a unified and efficient way to interact with the data, regardless of the stage, reduces the complexity of ML pipelines as they scale.
My research is focused on smart data management. For 2021, I predict increased emphasis on infrastructure that enables easier and more scalable ML deployments at the edge and in the cloud, with time to solution and the ability to securely run large training and validation tasks on real-world data as primary metrics, and energy efficiency as a secondary but increasingly important metric. Another area of emphasis in 2021 will be tools to validate how well models work on more representative datasets. Several research groups have identified how models suffer on real-world image captures, how texts can be confusing, and how training data itself is not representative. There will be more standardized metrics and neutral third-party validation tools or services to evaluate model accuracy. Such tools or services could eventually incorporate operational and performance metrics to give a solution an overall scoresheet.
Jekaterina Novikova, Director of Machine Learning, WinterLight Labs
This year was exceptional for many people, in a way that’s not necessarily positive. Our normal lives changed a lot due to the pandemic: increased risk to our health, unprecedented mobility and travel restrictions across the globe, and many other consequences associated with COVID-19. I think this situation has accelerated the adoption of artificial intelligence in several areas and will result in several major shifts in 2021. First, AI solutions will be adopted more widely in health care, especially in mental health care. The number of people suffering from depression, anxiety, stress and other issues rose dramatically, and AI-based tools and solutions can and will respond to this crisis. Second, the COVID situation drastically emphasized privacy concerns through the use of contact-tracing apps and similar AI-based solutions. These products are obviously extremely useful and help enforce necessary social controls, predict outbreaks and trace infections. However, the potential negative privacy implications of AI are going to be a serious issue in 2021.
Taniya Mishra, Founder and CEO, SureStart
We’ll see more of a focus on people and communities in AI. The reawakened questions around diversity, equity and inclusion in all arenas of business, including AI and technology, are an emergent trend that is not going away. Specifically, AI companies that ignore DEI are taking on business risks that will affect their bottom line. If AI does not work for all of its intended markets or users, businesses will leave money on the table. There’s been a lot of focus on addressing AI’s diversity and ethics problem through technical evaluations of data and algorithmic bias, which are certainly important. But to really address this issue — and in light of conversations around racial equity and justice — I believe we’ll see more of a focus on the people, communities and teams building AI.
We have all seen top-down DEI initiatives regularly fail, often because individual employees don’t understand how to instantiate them in their day-to-day decision-making, or because they don’t connect emotionally with them. So in 2021 we’ll start to see AI companies take a bottom-up DEI approach, broadening the lens of a company’s individual employees, especially their technical staff, to widen their view of “who” is an engineer, a scientist and a technologist. Furthermore, AI is no longer a tech-only discipline; it requires a multi-disciplinary approach to understand the human impact AI will have. So companies that are thoughtfully addressing this issue will begin to build teams comprised not only of tech talent but also of ethicists, sociologists and anthropologists, who have the training and perspective to think about the implications of technology beyond the product’s tech and specs.
The way the AI industry uses data will have to change to restore faith in science and tech. For years, we’ve seen a movement toward innovation-driven enterprises (IDE). But we’re moving toward a new focus on the DDE: the data-driven enterprise. This is especially true for AI companies, as AI systems require massive amounts of data to train, test and validate algorithms.
But with massive amounts of data comes massive responsibility for ethically collecting, storing, and using that data, and for building ethical AI with it. The market will be driven by these ethical challenges, as consumers increasingly want to be involved in decisions about how their data will be used, and by whom. Access control over personal data is going to lie with users, not with the developers or owners of tech corporations.
AI companies are going to need to pivot with this in mind, finding ways to be more transparent about data collection, storage and usage, and being mindful of how they use data to ensure AI systems are fair and equitable. This requires building in constraints during testing and validation of algorithms, and examining not just overall accuracy but specifically how well a system generalizes across different demographic groups — for example, for white men vs. black women. Only then can an organization identify specific areas of bias and address issues before AI is deployed in production.
Dr. Eli David, Co-Founder and CTO, DeepCube
We see a clear trend of state-of-the-art deep learning models becoming larger and larger. In 2019, the largest deep learning model had about one billion parameters (weights). In 2020, the largest deep learning model has over one hundred billion parameters, a growth of over 100x in a single year! These larger models improve accuracy, so they have a clear advantage. However, their computational and memory requirements are growing at the same pace, so solutions that can dramatically reduce the size of these models and improve their speed will become more and more critical for deployment.
Further reading — Experts Blogs & More
AI Experts Discuss The Possibility of Another AI Winter
Experts AI Advice for Starters
Female Pioneers in Computer Science You May Not Know — Part 2
AI Experts Talk Roadblocks in AI
Experts Predict The Next AI Hub
The AI Overview — 5 Influential Presentations Q3 2020
13 ‘Must-Read’ Papers from AI Experts
‘Must-Read’ AI Papers Suggested by Experts — Pt 2
10 Must-Read AI Books in 2020
10 Must-Read AI Books in 2020 — Part 2
Change Detection and ATR using Similarity Search in Satellites
Top AI Resources — Directory for Remote Learning
Top AI & Data Science Podcasts
30 Influential Women Advancing AI in 2019
30 Influential AI Presentations from 2019
AI Across the World: Top 10 Cities in AI 2020
Female Pioneers in Computer Science You May Not Know
Top Women in AI 2020 — Texas Edition
2020 University/College Rankings — Computer Science, Engineering & Technology
How Netflix uses AI to Predict Your Next Series Binge — 2020
Top 5 Technical AI Presentation Videos from January 2020
20 Free AI Courses & eBooks
5 Applications of GANs — Video Presentations You Need To See
250+ Directory of Influential Women Advancing AI in 2020
The Isolation Insight — Top 50 AI Articles, Papers & Videos from Q1
Reinforcement Learning 101 — Experts Explain
The 5 Most in Demand Programming Languages in 2020
Generative Models — Top Videos & New Papers
Applying AI in Clinical Development of Drugs
What is AI Assurance?
Experts Pick Their Dream AI Panel
Female Pioneers You Might Not Know