In an age of increasingly powerful autonomous and intelligent systems (A/IS), how does industry retain the trust of its customers?
Organizations have had few credible means of communicating the trustworthiness of their operations and use of A/IS to their customers, or even to their own employees. In the absence of such a method, often-uninformed fear, uncertainty, and doubt about A/IS have festered. Moving forward, concerns about the unintended consequences of artificial intelligence and A/IS, along with this lack of trust, could hold back the advancement of revolutionary technologies with tremendous, far-reaching potential to benefit humanity across application areas such as improving disease prevention and diagnosis, boosting agricultural and manufacturing efficiency, addressing climate change, enhancing security, and even helping resolve the global COVID-19 crisis.
Certification to consensus criteria by an independent, globally recognized body of experts would serve as a crucial sense-making tool. Industry has voiced an urgent need to communicate easily and visually whether its A/IS are deemed “safe” or “trusted” via a publicly available and transparent series of marks, and this is why IEEE created The Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS) in October 2018.
One of the world’s first programs dedicated to the creation of A/IS certification criteria and a marking program supported by a global standards-development organization, ECPAIS has created certification criteria around transparency, accountability, and the reduction of algorithmic bias in the development of A/IS. The criteria are intended to enable cities and public and private organizations in diverse vertical industries (healthcare and medical devices, financial services, automotive, manufacturing, elder services, and more) to identify themselves as trustworthy and beneficial in their development and operation of A/IS products, services, and systems.
Now, ECPAIS is inviting additional movers and innovators globally to apply the ECPAIS criteria in their specific contexts, providing a basis for overall A/IS design frameworks and leading toward trustworthy deployed systems in business-to-consumer, business-to-business, and business-to-government environments.
Gathering global focus
Attention has intensified globally on questions around how algorithms are shaping society, influencing people, and changing the ways that power is exercised. In turn, governance of A/IS has become an area of growing focus.
In March 2019, for example, The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (A/IS) launched the publication Ethically Aligned Design, First Edition (EAD), a comprehensive, in-depth document created by more than 600 global experts that outlines more than 100 specific recommendations to guide the design, development, deployment, and use of A/IS. EAD also inspired the creation of education programs, multiple standards working groups within IEEE (the world’s largest technical association dedicated to advancing technology for the benefit of humanity), and ECPAIS itself.
In September 2019, the World Economic Forum published guidelines for public procurement of AI, to help governments safeguard public benefit and well-being. “AI holds the potential to vastly improve government operations and meet the needs of citizens in new ways, ranging from traffic management to healthcare delivery to processing tax forms,” reads Guidelines for AI Procurement. “However, governments often lack experience in acquiring modern AI solutions and many public institutions are cautious about harnessing this powerful technology. Overall, the guidelines aim to guide all parties involved in the procurement life cycle—policy officials, procurement officers, data scientists, technology providers, and their leaders—towards the overarching goal of safeguarding public benefit and well-being.”
Similarly, the European Commission on 19 February 2020 launched a Consultation on Artificial Intelligence with the release of a white paper, On Artificial Intelligence – A European approach to excellence and trust. The white paper reads, “Given the major impact that AI can have on our society and the need to build trust, it is vital that European AI is grounded in our values and fundamental rights such as human dignity and privacy protection.”
The white paper also notes the necessity of “an objective, prior conformity assessment… to verify and ensure” conformance to requirements around training data, data and record-keeping, transparency, robustness and accuracy, and human oversight in high-risk AI applications. “The prior conformity assessment could include procedures for testing, inspection or certification. It could include checks of the algorithms and of the data sets used in the development phase.”
It was out of such globally shared need that IEEE launched the industry-driven ECPAIS program. The first year of ECPAIS work concentrated on defining certification criteria:
- accountability criteria, addressing issues such as an organization’s stated specifics of the A/IS in use, including the tracking of system or product actions, risk parameters, and how human agents are able to oversee and control the A/IS;
- transparency criteria, addressing issues such as public awareness of A/IS interactions across an organization and of the A/IS in use, confidence in A/IS behavior, upholding of ethical integrity, and clarity of the concept of operation; and
- criteria associated with the reduction of algorithmic bias, assessing the acceptability of algorithmic bias in a product or system and providing insights for reducing or avoiding unacceptable levels.
Today, the ECPAIS community is expanding to encourage interested entities to take the next key steps toward practical applications of the criteria. Companies, governments, and other stakeholders globally are invited to contact IEEE and engage by championing an ECPAIS industry-vertical initiative, initiating a proof of concept, or helping shape how the ECPAIS criteria honor human-driven values in the context of A/IS use with end-users and stakeholders around the world.
Ultimately, companies and public organizations would be able to leverage ECPAIS to demonstrate to their customers, employees, and the general public their commitment toward building trust in their A/IS—as validated by an independent third party. Governments could use the certification criteria to inform policy and provide a form of public-facing explainability. And educators could use ECPAIS to nurture a more ethics-oriented next generation of engineers, domain experts, and decision-makers.
Fostering trust, applying pressure
“We see great value in what ECPAIS is developing in the important field of ethics for autonomous intelligent systems. We think the results of the first phase of the program are very promising,” Dr. Dietmar Schabus, data scientist with Wiener Stadtwerke, Austria’s largest communal infrastructure provider, which is owned by the City of Vienna, said in a 26 February 2020 IEEE Standards Association press release. “… As a city that’s very human-centric and digitally enabled, we see the work on ethical aspects of A/IS such as ECPAIS as fundamental to this strategy.”
For organizations already doing their due diligence in advancing responsible A/IS, certification will deliver a much-needed tool for building trust with their stakeholders. For those that are skirting the hard work necessary to secure citizen data, protect privacy and dignity, and uphold human rights, certification will ramp up market pressure to take the most pressing challenges of the algorithmic age seriously.
This article was originally published by Meeri Haataja on TechTalks, a publication that examines trends in technology and how they affect the way we live and do business. You can read the original article here.