
This article was published on December 28, 2020

Neural’s AI predictions for 2021


It’s that time of year again! We’re continuing our long-running tradition of publishing a list of predictions from AI experts who know what’s happening on the ground, in the research labs, and at the boardroom tables.

Without further ado, let’s dive in and see what the pros think will happen in the wake of 2020.

Dr. Arash Rahnama, Head of Applied AI Research at Modzy:

Just as advances in AI systems are racing forward, so too are adversaries’ opportunities to trick AI models into making wrong predictions. Deep neural networks are vulnerable to subtle adversarial perturbations applied to their inputs – adversarial AI – which are imperceptible to the human eye. These attacks pose a great risk to the successful deployment of AI models in mission-critical environments. At the rate we’re going, there will be a major AI security incident in 2021 – unless organizations begin to build proactive adversarial defenses into their AI security posture.
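To make the threat concrete, here is a minimal, illustrative sketch (not from Modzy or this article) of the kind of input perturbation Rahnama describes, using the fast gradient sign method in PyTorch; the model and data below are hypothetical placeholders.

import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.01):
    # Craft a small perturbation that nudges the model toward a wrong prediction.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss; keep pixel values in [0, 1].
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

# Hypothetical toy classifier and input batch, purely for illustration.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(2, 1, 28, 28)
y = torch.tensor([3, 7])
x_adv = fgsm_perturb(model, x, y)
print((x_adv - x).abs().max())  # the change is bounded by epsilon, yet can flip predictions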

2021 will be the year of explainability. As organizations integrate AI, explainability will become a major part of ML pipelines, establishing trust with users. Understanding how machine learning reasons over real-world data helps build trust between people and models. Without understanding outputs and decision processes, there will never be true confidence in AI-enabled decision-making. Explainability will be critical in moving into the next phase of AI adoption.

The combination of explainability and new training approaches initially designed to deal with adversarial attacks will lead to a revolution in the field. Explainability can help us understand what data influenced a model’s prediction and where bias enters, information that can then be used to train robust models that are more trusted, reliable, and hardened against attacks. This tactical knowledge of how a model operates will help create better model quality and security as a whole. AI scientists will redefine model performance to encompass not only prediction accuracy but also lack of bias, robustness, and strong generalizability to unexpected environmental changes.
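As a hedged illustration of the kind of explainability Rahnama points to, here is a tiny gradient-saliency sketch in PyTorch that estimates which input features most influenced a prediction; the model and input are hypothetical stand-ins, not anything described in the article.

import torch
import torch.nn as nn

# Hypothetical model and input, purely for illustration.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
x = torch.rand(1, 4, requires_grad=True)

score = model(x).max()   # score of the top class for this input
score.backward()         # gradients flow back to the input features
saliency = x.grad.abs()  # larger magnitude = more influence on the prediction
print(saliency)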

Dr. Kim Duffy, Life Science Product Manager at Vicon:

Forming predictions for artificial intelligence (AI) and machine learning (ML) is particularly difficult when looking only one year into the future. For example, in clinical gait analysis, which examines a patient’s lower-limb movement to identify underlying problems that cause difficulties walking and running, methodologies like AI and ML are very much in their infancy. This is something Vicon highlights in our recent life sciences report, “A deeper understanding of human movement”. Utilizing these methodologies and seeing true benefits and advancements for clinical gait will take several years. Effective AI and ML require massive amounts of data to train algorithms that reliably identify trends and patterns.

For 2021, however, we may see more clinicians, biomechanists, and researchers adopting these approaches during data analysis. Over the last few years, we have seen more literature presenting AI and ML work in gait. I believe this will continue into 2021, with more collaborations occurring between clinical and research groups to develop machine learning algorithms that facilitate automatic interpretations of gait data. Ultimately, these algorithms may help propose interventions in the clinical space sooner.

It is unlikely we will see the true benefits and effects of machine learning in 2021. Instead, we’ll see more adoption and consideration of this approach when processing gait data. For example, the presidents of Gait and Posture’s affiliate society provided a perspective on the clinical impact of instrumented motion analysis in their latest issue, emphasizing the need to apply methods like ML to big data in order to build better evidence of the efficiency of instrumented gait analysis. This would also provide better understanding and less subjectivity in clinical decision-making based on instrumented gait analysis. We’re also seeing credible endorsements of AI/ML – such as from the Gait and Clinical Movement Analysis Society – which will encourage further adoption by the clinical community moving forward.

Joe Petro, CTO of Nuance Communications:

In 2021, we will continue to see AI come down from the hype cycle, and the promise, claims, and aspirations of AI solutions will increasingly need to be backed up by demonstrable progress and measurable outcomes. As a result, we will see organizations shift to focus more on specific problem solving and creating solutions that deliver real outcomes that translate into tangible ROI — not gimmicks or building technology for technology’s sake. Those companies that have a deep understanding of the complexities and challenges their customers are looking to solve will maintain the advantage in the field, and this will affect not only how technology companies invest their R&D dollars, but also how technologists approach their career paths and educational pursuits.

With AI permeating nearly every aspect of technology, there will be an increased focus on ethics and on deeply understanding the ways AI can produce unintended, consequential bias. Consumers will become more aware of their digital footprint and how their personal data is being leveraged across systems, industries, and the brands they interact with, which means companies partnering with AI vendors will increase the rigor and scrutiny around how their customers’ data is being used, and whether or not it is being monetized by third parties.

Dr. Max Versace, CEO and Co-Founder, Neurala:

We’ll see AI deployed in the form of inexpensive and lightweight hardware. It’s no secret that 2020 was a tumultuous year, and the economic outlook is such that capital-intensive, complex solutions will be sidestepped in favor of lighter-weight, perhaps software-only, less expensive solutions. This will allow manufacturers to realize ROI in the short term without massive up-front investments. It will also give them the flexibility needed to respond to fluctuations in the supply chain and customer demand – something that we’ve seen play out on a larger scale throughout the pandemic.

Humans will turn their attention to “why” AI makes the decisions it makes. The explainability of AI has often been discussed in the context of bias and other ethical challenges. But as AI comes of age, becomes more precise and reliable, and finds more applications in real-world scenarios, we’ll see people start to question the “why?” The reason? Trust: humans are reluctant to hand power to automatic systems they do not fully understand. In manufacturing settings, for instance, AI will need not only to be accurate, but also to “explain” why a product was classified as “normal” or “defective,” so that human operators can develop confidence and trust in the system and let it do its job.

Another year, another set of predictions. You can see how our experts did last year by clicking here. You can see how our experts did this year by building a time machine and traveling to the future. Happy Holidays!
