This article was published on December 17, 2019

A tech apocalypse is inevitable without the humanities


Image by: Axel Heimken / DPA

If recent television shows are anything to go by, we’re a little concerned about the consequences of technological development. Dystopian narratives abound.

Black Mirror projects the negative consequences of social media, while artificial intelligence turns rogue in The 100 and Better Than Us. The survival of the human race is at stake in Travellers, Altered Carbon frets over the separation of human consciousness from the body, and Humans and Westworld see trouble ahead for human-android relations.

Narratives like these have a long lineage. Science fiction has been articulating our hopes and fears about technological disruption at least since Mary Shelley’s Frankenstein (1818).

However, as the likes of driverless cars and robot therapists emerge, some previously fictional concerns are no longer imaginative speculation. Instead, they represent real and urgent problems.

What kind of future do we want?

Last year, Australia’s Chief Scientist Alan Finkel suggested that we in Australia should become “human custodians.” This would mean being leaders in technological development, ethics, and human rights.

Finkel isn’t alone in his concern. But it won’t be simple to address these issues in the development of new technology.

Many people in government, industry, and universities now argue that including perspectives from the humanities and social sciences will be a key factor.

A recent report from the Australian Council of Learned Academies (ACOLA) brought together experts from scientific and technical fields as well as the humanities, arts and social sciences to examine key issues arising from artificial intelligence.

According to the chair of the ACOLA board, Hugh Bradlow, the report aims to ensure that “the well-being of society” is placed “at the center of any development.”

Human-centered AI

A similar vision drives Stanford University’s Institute for Human-Centered Artificial Intelligence. The institute brings together researchers from the humanities, education, law, medicine, business, and STEM to study and develop “human-centered” AI technologies. The idea underpinning their work is that “AI should be collaborative, augmentative and enhancing to human productivity and quality of life.”

Meanwhile, across the Atlantic, the Future of Humanity Institute at the University of Oxford similarly investigates “big-picture questions” to ensure “a long and flourishing future for humanity.”

The center is set to double in size in the next year thanks to a £13.3 million (A$25 million) contribution from the Open Philanthropy Project. The founder of the institute, philosopher Nick Bostrom, said:

There is a long-distance race on between humanity’s technological capability, which is like a stallion galloping across the fields, and humanity’s wisdom, which is more like a foal on unsteady legs.

What to build and why

The IT sector is also wrestling with the ethical issues raised by rapid technological advancement. Microsoft’s Brad Smith and Harry Shum wrote in their 2018 book The Future Computed that one of their “most important conclusions” was that the humanities and social sciences have a crucial role to play in confronting the challenges raised by AI:

Languages, art, history, economics, ethics, philosophy, psychology and human development courses can teach critical, philosophical and ethics-based skills that will be instrumental in the development and management of AI solutions.

Hiring practices in tech companies are already shifting. In a TED talk on “Why tech needs the humanities,” Eric Berridge – chief executive of the IBM-owned tech consulting firm Bluewolf – explains why his company increasingly hires humanities graduates.

While the sciences teach us how to build things, it’s the humanities that teach us what to build and why to build them.

Only 100 of Bluewolf’s 1,000 employees have degrees in computer science and engineering. Even the Chief Technology Officer is an English major.

Education for a brighter future

Similarly, Matt Reaney, the chief executive and founder of Big Cloud – a recruitment company that specializes in data science, machine learning, and AI employment – has argued that technology needs more people with humanities training.

[The humanities] give context to the world we operate in day to day. Critical thinking skills, deeper understanding of the world around us, philosophy, ethics, communication, and creativity offer different approaches to problems posed by technology.

Reaney proposes a “more blended approach” to higher education, offering degrees that combine the arts and STEM.

Another advocate of the interdisciplinary approach is Joseph Aoun, President of Northeastern University in Boston. He has argued that in the age of AI, higher education should be focusing on what he calls “humanics,” equipping graduates with three key literacies: technological literacy, data literacy, and human literacy.

The time has come to answer the call for humanities graduates capable of crossing over into the world of technology so that our human future can be as bright as possible.

Without training in ethics, human rights, and social justice, the people who develop the technologies that will shape our future could make poor decisions. And that future might turn out to be one of the calamities we have already seen on screen.

This article is republished from The Conversation by Sara James, Senior Lecturer, Sociology, La Trobe University and Sarah Midford, Senior Lecturer, Classics and Ancient History and Director of Teaching and Learning (ugrad), School of Humanities and Social Sciences, La Trobe University under a Creative Commons license. Read the original article.

 
