
This article was published on July 5, 2019

Humans and AI will work better when they start learning from each other

In the age of big data and breathtaking advances in artificial intelligence, social infrastructure encourages digital engagement and an active online presence. Digital democracy invites a growing number of users to interact with institutions and services, with the aim of ensuring that decisions made by AI-powered digital tools reflect human values.

Immersed in automation, we make many choices that involve some form of computationally modeled process. This shift from manual to programmed behavior began with the introduction of recommendation systems, which find products similar to those a user already prefers.
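
To make this concrete, here is a minimal sketch (in Python, with an invented ratings matrix) of the kind of item-to-item recommender described above: products are scored by the cosine similarity of their user-rating columns.

import numpy as np

# Hypothetical ratings: rows = users, columns = products (0 = unrated).
ratings = np.array([
    [5.0, 4.0, 0.0, 1.0],
    [4.0, 5.0, 1.0, 0.0],
    [1.0, 0.0, 5.0, 4.0],
])

def cosine(a, b):
    """Cosine similarity between two products' rating columns."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def similar_products(item):
    """Rank all other products by similarity to the given one."""
    scores = [(j, cosine(ratings[:, item], ratings[:, j]))
              for j in range(ratings.shape[1]) if j != item]
    return sorted(scores, key=lambda s: s[1], reverse=True)

print(similar_products(0))  # products most like product 0, best first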

However, today’s AI systems go beyond offering suggestions: they know rather well what we do and what we want. Using “persuasive computing” and “big nudging,” artificial intelligence and automation steer our actions toward more acceptable behaviors, which undermines confidence in a modern vision of digital cooperation.

Reactions to this phenomenon vary from going “unplugged,” simply disconnecting from automated systems, to trying to coexist with AI. Given how dependent we are on applications in our daily lives, it is obvious that we have already chosen the path of symbiosis with automation.

However, exposure to digital markets and a vast array of solutions naturally breeds confusion and skepticism in users’ online experiences. Both misuse and disuse of AI bring an additional set of technical challenges for establishing trustworthy human-machine interaction.

Furthermore, suspicion is growing around the recording, transformation, and distribution of data, which forms extensive, easy-to-access clouds ready for further use and manipulation. To improve the quality of human-machine symbiosis and adhere to the fundamental principles of a digital revolution agenda, users must be able to rely on the integrity of automated decisions and trust them.

Trust plays a significant role in decreasing the cognitive complexity users face in interacting with sophisticated technology. Consequently, its absence leads to an AI model’s underutilization or abandonment.

Calibrating trust begins with grasping the learning process, using interpretability as its measure. Introducing feedback from both humans and machines, however, adds to the complexity of these challenges, and the process becomes more complex still once we consider user types who could manipulate the machine. Because users come from different domains and skill sets, with different ideas of what an AI model should produce, the bidirectional user-machine process manifests differently for each of them.

Technology will only be as good as its users’ grasp of it: each group must understand the evidence behind it and prepare to use it effectively. Domain experts, the first group, use AI for scientific purposes, with each interplay serving as a knowledge-discovery process.

End users, the second group, are interested in pure outputs: a quick, easy-to-use product that reliably delivers results. Finally, to produce good-quality AI models and increase the use of automation, the third group, architects or system engineers, needs a notion of automation’s inner processes.

Given all of the above, what mediators in human-machine interaction are capable of conveying interpretability as a measure? Users must be able to easily understand an AI’s performance in order to assess its ability; when human-machine interaction is unreliable, conflicting situations are poorly resolved.

Visible effort by the machine can indicate that it is acting in the user’s interest, and such positive behavior of an automated system is most easily conveyed through visualization, which in turn can increase trust. By enhancing comprehension, visualization may improve the perceived functionality and reliability of complex systems: it reduces cognitive overload and provides better insight into complex behavior. Moreover, communicating risks builds credibility and the perception of trustworthiness.


Visual language can be considered a bridge between the psychological, “interpersonal” mechanisms and the empirical factors in each interaction. Design can thus be used to directly affect the level of trust and to correct human operators’ tendencies to misuse or disuse the AI system.

Appropriate trust can make the joint human-automation system perform better than either the human or the AI system alone. And since transparent communication is essential for building trust, the use of visualization directly improves human-machine automation.

Bidirectional learning can reach its full potential thanks to these visual aspects, with direct, visual human-in-the-loop input stepping in whenever a model fails to provide a desirable result. Input and output then live in the same (visual) space, and effective measurement takes place on both sides. Interpreting the internals of the AI model enables efficient control and promotes fairness toward end users, whose interests are focused solely on human-understandable explanations.
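
As a rough illustration of this human-in-the-loop pattern, the sketch below has a (simulated) person review each prediction and feed corrected examples back into training. The classifier, the data, and the review step are hypothetical stand-ins, not any particular system’s API.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Tiny invented training set: one feature, binary label.
X_train = np.array([[0.0], [1.0], [2.0], [3.0]])
y_train = np.array([0, 0, 1, 1])
model = LogisticRegression().fit(X_train, y_train)

def review(x, predicted):
    """Stand-in for a human reviewing a visualized prediction;
    here a simple rule plays the role of the reviewer."""
    return int(x[0] > 1.5)

for x in np.array([[1.2], [2.8]]):
    pred = int(model.predict(x.reshape(1, -1))[0])
    corrected = review(x, pred)
    if corrected != pred:
        # Disagreement: fold the corrected example back in and retrain,
        # so the human's input directly shapes the model.
        X_train = np.vstack([X_train, x.reshape(1, -1)])
        y_train = np.append(y_train, corrected)
        model = model.fit(X_train, y_train)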

However, visualizing the stages of a machine’s inner processes is not sufficient for full understanding. The ability to directly set parameters or influence the training process of the AI model provides a greater level of communication, increases bidirectional learning, and promotes trust. Interactive visualizations in machine learning enable direct and immediate output, generating effective visual feedback during the learning process. This way, all user types can understand an AI model’s actions and performance, opening the space for artificial intelligence to be applied across different media (mobile, desktop, VR/AR).
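
A minimal sketch of such immediate visual feedback, assuming a toy gradient-descent problem: the loss curve is redrawn after every training step, so an observer can see at once whether the directly adjustable learning-rate parameter is helping.

import matplotlib.pyplot as plt

learning_rate = 0.1        # the parameter a user might tune interactively
w, losses = 5.0, []

plt.ion()                  # interactive mode: the figure updates live
fig, ax = plt.subplots()
for step in range(50):
    loss = (w - 2.0) ** 2               # toy objective, minimum at w = 2
    w -= learning_rate * 2 * (w - 2.0)  # one gradient step
    losses.append(loss)
    ax.clear()
    ax.plot(losses)
    ax.set(xlabel="step", ylabel="loss")
    plt.pause(0.01)                     # redraw so feedback is immediate
plt.ioff()
plt.show()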

The idea behind promoting human-machine symbiosis is not merely to train automation to replace some of our activities. “Mutual” understanding needs to enable good input and trust, so that a user can benefit from artificial intelligence and its help in knowledge discovery.

Steps have already been taken in that direction: platforms such as Archspike are working on providing qualitative human-machine feedback. The platform “understands” users’ intentions and how that “knowledge” changes with consecutive human input over time. The user reacts to results (not suggestions) of interest applied at a large (city) scale, results that could not otherwise have been obtained.

Another practical example is a platform called Macaque, which provides multiple synchronized bidirectional loops between users and AI systems. The platform’s major contribution is increased trust: it gives operators the opportunity to easily understand and individually manage complex modules. Macaque introduces self-improving performance by employing both human and AI capacities: the operator chooses a method, assessment is done automatically, and the machine follows end users’ reactions based on their interactions with the system. Over time, operators get more regulated, less biased results built on multiple synchronized end-user inputs.
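
Macaque’s internals are not public, so the following is only a conceptual sketch of one ingredient that paragraph describes: aggregating feedback from several end users so that the signal an operator receives is less biased than any single user’s input. All names and values are invented.

import statistics

# Hypothetical per-user ratings of the same model output (1-5 scale).
user_feedback = {"user_a": 4, "user_b": 5, "user_c": 2, "user_d": 4}

# The median resists outliers, so one idiosyncratic or adversarial
# user cannot drag the aggregated signal far on their own.
aggregated = statistics.median(user_feedback.values())
print(f"aggregated feedback signal: {aggregated}")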

The future environment and its vitality will depend on the ability to use intelligent applications and “systems thinking.” AI architects have to understand the workings of automated systems in order to develop effective feedback and increase model performance. Multiple synchronized or unsynchronized flows of information need to be integrated into efficient bidirectional loops.

The central aspect of every process is human cognitive function and its further development through automation. AI systems should support objective, rational thinking and should engage and motivate users instead of imposing recommendations. By using feedback loops, we can measure the positive and negative side effects of our interactions and achieve results by means of self-organization. Visualization is crucial for insight into how changes affect the AI model, and it should be used at every stage of the learning process. To apply feedback loops effectively in human-machine interaction, we need to decompose the problem and understand the influence AI has on people separately from the influence people have on automation processes.

This story is republished from TechTalks, the blog that explores how technology is solving problems… and creating new ones.
