
A beginner’s guide to natural language processing and generation



This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.

Twenty years ago, if you had a database table containing your sales information and wanted to pull up the ten best-selling items of the past year, you would have had to run a command that looked like this:

SELECT TOP 10 item_id, SUM(sale_total) AS total_sales FROM sales
WHERE sale_date > DATEADD(day, -365, GETDATE())
GROUP BY item_id ORDER BY total_sales DESC

Today, performing the same task can be as easy as writing the following query in a platform such as IBM Watson:

Which 10 items sold the most in the past year?


From punch cards to keyboards, mice, and touch screens, human-computer interfacing technologies have undergone major changes, and each change has made it easier to make use of computing resources and power.

But never have those changes been more dramatic than in the past decade, the period in which artificial intelligence turned from sci-fi myth into everyday reality. Thanks to the advent of machine learning, the branch of AI in which algorithms learn from examples, we can talk to Alexa, Siri, Cortana, and Google Assistant, and they can talk back to us.

Behind the revolution in digital assistants and other conversational interfaces are natural language processing and generation (NLP/NLG), two branches of machine learning that involve converting human language to computer commands and vice versa.

NLP and NLG have removed many of the barriers between humans and computers, not only enabling them to understand and interact with each other, but also creating new opportunities to augment human intelligence and accomplish tasks that were impossible before.


The challenges of parsing human language


For decades, scientists have tried to enable humans to interact with computers through natural language commands. One of the earliest examples was ELIZA, a natural language processing program created at the MIT AI Lab in the 1960s. ELIZA emulated the behavior of a psychiatrist, holding dialogues with users, asking them about their feelings and giving appropriate responses. ELIZA was followed by PARRY (1972) and Jabberwacky (1988).

Another example is Zork, an interactive adventure game developed in the 1970s, in which the player gave directives by typing sentences in a command line interface, such as “put the lamp and sword in the case.”

The challenge with all early conversational interfaces was that the software powering them was rule-based, which meant the programmers had to predict and hard-code all the different forms a command could take. The problem with this approach was that, first, the program's code became too convoluted, and second, developers still missed plenty of the ways users might phrase a request.

As an example, you can ask about the weather in countless ways, such as “how’s the weather today?” or “will it rain in the afternoon?” or “will it be sunny next week?” or “will it be warmer tomorrow?” For a human, understanding and responding to all those different nuances is trivial. But rule-based software needs explicit instructions for every possible variation, and it has to account for typos, grammatical errors and more.
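
To make that concrete, here is a minimal, hypothetical sketch in Python of what the rule-based approach looks like. The patterns are invented for illustration: every phrasing the developer anticipates needs its own hand-written rule, and anything else, including a simple typo, falls through.

import re

# Hand-written patterns for a weather intent. Each one covers exactly the
# phrasings the developer thought of, and nothing else.
WEATHER_PATTERNS = [
    r"how'?s the weather( today)?\??",
    r"will it (rain|snow) (this|in the) (morning|afternoon|evening)\??",
    r"will it be (sunny|warmer|colder) (today|tomorrow|next week)\??",
]

def is_weather_request(text):
    text = text.lower().strip()
    return any(re.fullmatch(pattern, text) for pattern in WEATHER_PATTERNS)

print(is_weather_request("how's the weather today?"))  # True: a rule covers it
print(is_weather_request("do I need an umbrella?"))    # False: nobody wrote that rule
print(is_weather_request("hows the wether today"))     # False: one typo and the rule fails

Keeping such a list complete for every intent an assistant supports is exactly the maintenance burden described above.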

The sheer amount of time and energy required to accommodate all those different scenarios is what previously prevented conversational applications from gaining traction. Over the years, we’ve become used to rigid graphical user interface elements such as command buttons and dropdown menus, which keep users from stepping outside the boundaries of the application’s predefined set of commands.

How machine learning and NLP solve the problem


NLP uses machine learning and deep learning algorithms to analyze and make sense of human language. Machine learning doesn’t work with predefined rules; instead, it learns by example. In the case of NLP, machine learning algorithms train on thousands or millions of text samples, words, sentences and paragraphs, that have been labeled by humans. By studying those examples, the model gains a general understanding of the context of human language and uses that knowledge to parse new excerpts of text.

This approach makes it possible for NLP software to understand the various nuances of human language without needing to be explicitly told about each one. With enough training, NLP algorithms can also understand the broader meaning of spoken or written language.
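
As a rough illustration of that learn-by-example workflow, here is a hedged toy sketch in Python using scikit-learn. The sentences and labels are invented, and real systems train on vastly larger labeled corpora, but the principle is the same: no hand-written rules, just labeled examples.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A tiny, hand-labeled training set (illustrative only).
texts = [
    "how's the weather today?",
    "will it rain in the afternoon?",
    "will it be sunny next week?",
    "will it be warmer tomorrow?",
    "which 10 items sold the most in the past year?",
    "show me last month's best-selling products",
    "what were our total sales in March?",
]
labels = ["weather", "weather", "weather", "weather", "sales", "sales", "sales"]

# Convert text into numeric features, then fit a simple classifier on the labels.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The model generalizes to a phrasing it has never seen before.
print(model.predict(["is it going to be cold on Friday?"]))  # expected: ['weather']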

For instance, based on the context of a conversation, NLP can determine if the word “cloud” is a reference to cloud computing or the mass of condensed water vapor floating in the sky. It might also be able to understand intent and emotion, such as whether you’re asking a question out of frustration, confusion or irritation.
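
The “cloud” example can be sketched the same way. The sentences below are invented for illustration, but they show how the surrounding words are enough for a simple model to separate the two senses.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Labeled sentences containing "cloud" in its two senses (toy data).
sentences = [
    "we moved our servers to the cloud last year",
    "the cloud provider charges us for storage and bandwidth",
    "a dark cloud rolled over the city before the storm",
    "there wasn't a single cloud in the sky this morning",
]
senses = ["computing", "computing", "weather", "weather"]

# The classifier learns which context words co-occur with each sense.
disambiguator = make_pipeline(TfidfVectorizer(), MultinomialNB())
disambiguator.fit(sentences, senses)

print(disambiguator.predict(["which cloud provider should we choose?"]))  # expected: ['computing']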

What are the uses of NLP?

Digital assistants are just one of the many use cases of NLP. Another is the database querying example that we saw at the beginning of the article. But there are many other places where NLP is helping augment human efforts.

An example is IBM Watson for Cybersecurity. Watson uses NLP to read thousands of cybersecurity articles, whitepapers, and studies every month, more than any human expert could possibly study. It uses the insights it gleans from the unstructured information to learn about new threats and protect its customers against them.

We also saw the power of NLP behind the sudden leap that Google’s translation service took in 2016.

Some other use cases include summarizing blocks of text and automatically generating tags and related posts for articles. Some companies are using NLP-powered software to perform sentiment analysis on online content and social media posts to understand how people are reacting to their products and services.
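
As a hedged example of what such sentiment analysis can look like, here is a short sketch using NLTK’s VADER lexicon, one of many possible tools (the article doesn’t name a specific library, and the sample posts are invented).

import nltk
nltk.download("vader_lexicon")  # one-time download of the sentiment lexicon
from nltk.sentiment.vader import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
posts = [
    "Love the new update, setup was so easy!",
    "Support was terrible and the app keeps crashing.",
]
for post in posts:
    scores = analyzer.polarity_scores(post)  # returns neg/neu/pos and a compound score
    print(post, "->", scores["compound"])    # compound > 0 leans positive, < 0 negative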

Another domain where NLP is making inroads is chatbots, which are now accomplishing things that ELIZA wasn’t able to do. We’re seeing NLP-powered chatbots in fields such as healthcare, where they can question patients and run basic diagnoses like real doctors. In education, they’re giving students access to on-demand online tutors through an easy-to-use, conversational interface whenever they need help.

In business, customer service chatbots use the technology to understand and respond to trivial customer queries, leaving human employees free to focus their attention on follow-ups and more complicated problems.
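
A minimal sketch of that pattern, assuming invented intents, canned answers, and an arbitrary confidence threshold, might look like this: the bot answers the queries it recognizes with enough confidence and hands everything else to a human agent.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: a few labeled examples of trivial customer queries.
training = [
    ("where is my order?", "order_status"),
    ("when will my package arrive?", "order_status"),
    ("how do I reset my password?", "password_reset"),
    ("I can't log in to my account", "password_reset"),
    ("what are your opening hours?", "opening_hours"),
    ("are you open on weekends?", "opening_hours"),
]
answers = {
    "order_status": "You can track your order on the 'My orders' page.",
    "password_reset": "Use the 'Forgot password' link on the login screen.",
    "opening_hours": "We're open 9am to 5pm, Monday to Friday.",
}

texts, intents = zip(*training)
bot = make_pipeline(TfidfVectorizer(), LogisticRegression())
bot.fit(list(texts), list(intents))

def respond(message, threshold=0.5):
    # With such a tiny training set the confidence estimates are rough; in
    # practice the threshold would be tuned on real traffic.
    probabilities = bot.predict_proba([message])[0]
    best = probabilities.argmax()
    if probabilities[best] < threshold:
        # Not confident enough: escalate to a human employee.
        return "Let me connect you to a human agent."
    return answers[bot.classes_[best]]

# Depending on the model's confidence, each message is either answered or handed off.
print(respond("i forgot my password"))
print(respond("you charged me twice and I want a refund"))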

Creating output that looks human-made with NLG


The flip side of the NLP coin is NLG. According to Gartner, “Whereas NLP is focused on deriving analytic insights from textual data, NLG is used to synthesize textual content by combining analytic output with contextualized narratives.”

In other words, if NLP enables software to read human language and convert it to computer-understandable data, NLG enables it to convert computer-generated data into human-understandable text.

You can see NLG at work in a feature Gmail added a couple of years ago, which suggests automatic replies to your emails in your own writing style. Another interesting use of NLG is creating reports from complex data. For instance, NLG algorithms can produce narrative descriptions of company data and charts. This can be helpful for data analysts, who otherwise have to spend considerable time turning the data they analyze into meaningful reports for executives.
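
A hedged, very simplified sketch of that idea, template-based rather than the statistical NLG such products actually use, and with invented figures, might look like this:

# Turn a few structured figures into a short narrative sentence.
quarter = {
    "name": "Q2",
    "revenue": 1_240_000,
    "prev_revenue": 1_100_000,
    "top_item": "wireless earbuds",
}

change = (quarter["revenue"] - quarter["prev_revenue"]) / quarter["prev_revenue"] * 100
direction = "up" if change >= 0 else "down"

report = (
    f"Revenue for {quarter['name']} came in at ${quarter['revenue']:,}, "
    f"{direction} {abs(change):.1f}% from the previous quarter. "
    f"The best-selling product was {quarter['top_item']}."
)
print(report)
# -> Revenue for Q2 came in at $1,240,000, up 12.7% from the previous quarter.
#    The best-selling product was wireless earbuds.

Real NLG systems go far beyond fill-in-the-blank templates, but the input-to-output shape, structured data in, readable prose out, is the same.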

The road ahead

In the beginning, there was a huge technical gap between humans and computers. That gap is fast closing, thanks in part to NLP, NLG, and other AI-related technologies. We’re becoming more and more used to talking to our computers as if they were a real assistant or friend.

What happens next? Maybe NLP and NLG will remain focused on fulfilling more and more utilitarian use cases. Or maybe they’ll lead us toward machines that can truly pass the Turing test and might deceive humans into loving them. Whatever the case, exciting times are ahead.

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech and what we need to look out for. You can read the original article here.
