Who should get to decide what’s ethical as technology advances?

Technology is rife with ethical dilemmas. So who should be in charge of making ethical decisions about our technology?


Technology is rife with ethical dilemmas. New tech usually comes with more power and more advanced capabilities; we might be able to reshape the world in new, innovative ways, or we might expose the human mind to conditions it has never experienced before.

Obviously, this opens the door to ethical challenges, such as deciding whether it's right to edit the human genome, or how to program self-driving cars to behave in ways aligned with our morals.

I could write an article with thousands of ethical questions we still have to answer, covering artificial intelligence (AI), virtual reality (VR), medical breakthroughs, and the internet of things (IoT). But there's one bigger question that affects all the others, and we aren't spending enough time addressing it: who gets to decide the answers to these questions?

The high-level challenges

There are some high-level challenges we have to consider here:

1. Balancing ethics and innovation. Our legislative process is intentionally slow, designed to ensure that each new law is considered carefully before it’s passed. Similarly, it often takes years—if not decades—of scientific research to fully understand a topic. If every tech company waited for scientists and regulators to make an ethical decision, innovation would come to a halt, so we have to find a way to balance speed and thoroughness.

2. Keeping power balanced. We also need to be careful not to tip the scales of power. If one class of people, or one country, gets access to an extremely powerful or advanced technology, it could result in inhumane levels of inequality, or war. If one authority is allowed to make all ethical decisions about tech, those decisions could unfairly work in its favor, at the expense of everyone else involved.

3. Making educated decisions. Ethics are subjective, but ethical decisions shouldn't be based on a gut reaction or our feelings about a given subject. The issues should be exhaustively researched and understood before a decision is made; in other words, these decisions shouldn't be left to non-experts or to people uneducated in the matter.

4. Considering multiple areas. We also have to weigh consequences across multiple areas. This isn't just about safeguarding human life, but also human health, human psychology, and the wellbeing of our planet.

The options

So who could we trust to make ethical decisions about our technology?

Scientists. We could trust scientists, who by nature are objective truth-seekers. The problem is that research takes years, if not decades, to complete, and even then, infighting could bring the process to a halt.

Inventors and entrepreneurs. We could trust the inventors and distributors of technology to protect us. There are plenty of examples of companies doing their best to protect consumers and “do no evil,” but there are also profiteers and glory-seekers in the industry whose motives keep them from behaving ethically.

Regulators. Historically, we’ve trusted politicians and lawmakers to protect the public and make large-scale ethical decisions. However, lawmakers are rarely experts on the subject, and may have trouble passing legislation as quickly as we need it, or in a way that satisfies all parties.

The general public. We could trust major ethical decisions to the general public, through a democratic system or through basic consumer decision-making. However, the average member of the public is not an expert in the area of tech ethics, and can’t be expected to make the most logical decision.

External organizations. Finally, we could delegate power to an external body that specializes in tech ethics, appointing or training experts to oversee tech company operations and make ethical decisions for us. This is more balanced than the other options on this list, but it raises the question: who decides who’s in charge of these organizations?

The knowledge factor

I contend that none of the above options works as an ultimate authority for making ethical decisions for tech; each has some glaring weakness that prevents it from being a suitable candidate, though the neutral, external organization is certainly promising.

So rather than designating an authority to make these decisions, we should instead work to make the decisions themselves easier, regardless of who’s making them. The only way to do that is to uncover more information about the technologies we’re creating and using, and to make that knowledge publicly available. Here are three ways to do that:

1. Understanding the consequences. First, we need to work harder to understand the consequences of the technologies we’re already using (and those yet to be released). Cigarettes were smoked for decades before we understood their true health ramifications; we don’t want a similar ignorance to blind us to the repercussions of technologies that are far more widespread and carry far graver potential consequences for all of mankind. The answer here is almost always more unbiased research. For example, some scientific studies have explored the possibility of unintended mutations when using CRISPR-Cas9 to edit genes in vivo.

2. Informing the public. It isn’t enough to gain new information; we have to distribute it to the public. This establishes the public as a powerful check on any organizations that might otherwise control the narrative, and ensures that people can make educated decisions about the technology they use, even before regulators act to protect them. Public information is also necessary to encourage collective action when it’s needed, such as petitioning for legal changes. For example, the Future of Life Institute was founded in part to promote public education about AI safety and the potential impact of technologically advanced weaponry.

3. Dedicating resources. We also need companies to dedicate internal and external resources to understanding their own technology. For example, Google’s AI company DeepMind has its own dedicated ethics board, working to keep its systems operating as ethically as possible. Ethics boards should be built into the majority of tech companies, and where they aren’t feasible, companies should help form neutral, third-party organizations meant to discover more about how their products are used and to keep things in careful balance.

No single person or group can overcome the challenge of deciding what’s ethical in tech, but with enough transparent knowledge, we can all decide for ourselves. To keep innovation moving forward without compromising our safety, health, or balance in society, we have to hold our companies and each other accountable to these basic principles, and keep pushing for more information and education.
