The role of natural language processing in AI
What is natural language processing?
Natural language processing (NLP) is a branch of artificial intelligence within computer science that focuses on helping computers understand the way that humans write and speak. This is a difficult task because it involves a lot of unstructured data. The style in which people talk and write (sometimes referred to as ‘tone of voice’) is unique to each individual, and constantly evolves to reflect popular usage.
Understanding context is another challenge – one that requires semantic analysis for machine learning to get a handle on. Natural language understanding (NLU) is a sub-branch of NLP that deals with these nuances via machine reading comprehension rather than simply understanding literal meanings. The aim of NLP and NLU is to help computers understand human language well enough that they can converse in a natural way.
Real-world applications and use cases of NLP include:
- Voice-controlled assistants like Siri and Alexa.
- Natural language generation for question answering by customer service chatbots.
- Streamlining the recruiting process on sites like LinkedIn by scanning through people’s listed skills and experience.
- Tools like Grammarly which use NLP to help correct errors and make suggestions for simplifying complex writing.
- Language models like autocomplete which are trained to predict the next words in a text, based on what has already been typed.
All these functions improve the more that we write, speak, and converse with computers: they are learning all the time. A good example of this iterative learning is a function like Google Translate, which uses a system called Google Neural Machine Translation (GNMT). GNMT is a system that operates using a large artificial neural network to increase fluency and accuracy across languages. Rather than translating one piece of text at a time, GNMT attempts to translate whole sentences. Because it scours millions of examples, GNMT uses broader context to deduce the most relevant translation. It also finds commonality between many languages rather than creating its own universal interlingua. Unlike the original Google Translate, which used the lengthy process of translating from the source language into English before translating into the target language, GNMT uses “zero-shot translation” – translating directly from source to target.
Google Translate may not be good enough yet for medical instructions, but NLP is widely used in healthcare. It is particularly useful in aggregating information from electronic health record systems, which are full of unstructured data. Not only is this data unstructured, but because of the challenges of using sometimes clunky platforms, doctors’ case notes may be inconsistent and will naturally use lots of different keywords. NLP can help discover previously missed or improperly coded conditions.
How does natural language processing work?
Natural language processing can be structured in many different ways using different machine learning methods according to what is being analysed. It could be something simple like the frequency of use or the sentiment attached, or something more complex. Whatever the use case, an algorithm will need to be formulated. The Natural Language Toolkit (NLTK) is a suite of libraries and programs, written in Python, that can be used for symbolic and statistical natural language processing in English. It can help with all kinds of NLP tasks like tokenising (also known as word segmentation), part-of-speech tagging, creating text classification datasets, and much more.
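As a brief illustration, here is a minimal sketch of two of those tasks – tokenising and part-of-speech tagging – using NLTK (it assumes the relevant NLTK resources, such as the ‘punkt’ tokeniser model and the perceptron tagger, have been downloaded):

```python
import nltk

# One-off downloads of the tokeniser model and POS tagger data (assumed available).
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

sentence = "Natural language processing helps computers understand human language."

# Tokenising (word segmentation): split the raw string into word tokens.
tokens = nltk.word_tokenize(sentence)
print(tokens)

# Part-of-speech tagging: label each token as a noun, verb, adjective, and so on.
print(nltk.pos_tag(tokens))
```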
These initial tasks in word-level analysis are used for sorting, helping to refine the problem and the coding that’s needed to solve it. Syntax analysis, or parsing, is the process that follows to draw out exact meaning based on the structure of the sentence using the rules of formal grammar. Semantic analysis would help the computer learn about less literal meanings that go beyond the standard lexicon. This is often linked to sentiment analysis.
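One lightweight form of syntax analysis is chunking: grouping part-of-speech-tagged tokens into phrases using a small formal grammar. The sketch below uses NLTK’s RegexpParser with a single illustrative rule for noun phrases (it reuses the tokeniser and tagger resources downloaded in the previous sketch); real parsers use far richer grammars:

```python
import nltk

# Illustrative grammar: a noun phrase (NP) is an optional determiner,
# any number of adjectives, then a noun.
grammar = "NP: {<DT>?<JJ>*<NN>}"
chunker = nltk.RegexpParser(grammar)

tagged = nltk.pos_tag(nltk.word_tokenize("The quick brown fox jumps over the lazy dog"))

# parse() returns a shallow parse tree in which matching spans become NP subtrees.
print(chunker.parse(tagged))
```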
Sentiment analysis is a way of measuring tone and intent in social media comments or reviews. It is often used on text data by businesses so that they can monitor their customers’ feelings towards them and better understand customer needs. In 2005, when blogging was really becoming part of the fabric of everyday life, a computer scientist called Jonathan Harris started tracking how people were saying they felt. The result was We Feel Fine: part infographic, part work of art, part data science. This kind of experiment was a precursor to how valuable deep learning and big data would become when used by search engines and large organisations to gauge public opinion.
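NLTK ships with VADER, a lexicon- and rule-based sentiment scorer tuned for social media text, which gives a feel for how this works in practice (a minimal sketch, assuming the ‘vader_lexicon’ resource has been downloaded):

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-off download of the VADER lexicon (assumed)

analyser = SentimentIntensityAnalyzer()

reviews = [
    "The delivery was fast and the staff were lovely.",
    "Terrible experience, I will not be ordering again.",
]
for review in reviews:
    # 'compound' is a normalised score from -1 (most negative) to +1 (most positive).
    print(review, analyser.polarity_scores(review)["compound"])
```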
Simple emotion detection systems use lexicons – lists of words and the emotions they convey, from positive to negative. More advanced systems use complex machine learning algorithms for greater accuracy. This is because lexicons may class a word like “killing” as negative and so wouldn’t recognise the positive connotations of a phrase like “you guys are killing it”. Word sense disambiguation (WSD) is used in computational linguistics to ascertain which sense of a word is being used in a sentence.
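NLTK includes a simple implementation of the Lesk algorithm for word sense disambiguation, which picks the WordNet sense whose dictionary definition overlaps most with the surrounding words. A minimal sketch (assuming the ‘wordnet’ resource and the tokeniser model from earlier have been downloaded; the sentence is just an illustration):

```python
import nltk
from nltk.wsd import lesk

nltk.download("wordnet")  # one-off download of the WordNet dictionary (assumed)

tokens = nltk.word_tokenize("I went to the bank to deposit my money")

# lesk() returns the WordNet synset it judges most likely for "bank" in this context.
sense = lesk(tokens, "bank", pos="n")
print(sense, "-", sense.definition() if sense else "no sense found")
```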
Other algorithms that help with understanding of words are lemmatisation and stemming. These are text normalisation techniques often used by search engines and chatbots. Stemming algorithms work by cutting off the end or the beginning of a word, using lists of common prefixes and suffixes, to find the root form of the word. This technique is very fast but can lack accuracy. For example, an aggressive stemmer may reduce “caring” to “car” rather than the correct base form of “care”. Lemmatisation uses the context in which the word is being used and refers back to the base form according to the dictionary. So, a lemmatisation algorithm would understand that the word “better” has “good” as its lemma.
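A minimal sketch of the difference, using NLTK’s Porter and Lancaster stemmers alongside the WordNet lemmatiser (exact outputs vary between stemming algorithms; the ‘wordnet’ resource is assumed to be available):

```python
import nltk
from nltk.stem import PorterStemmer, LancasterStemmer, WordNetLemmatizer

nltk.download("wordnet")  # dictionary used by the lemmatiser (assumed available)

porter = PorterStemmer()
lancaster = LancasterStemmer()  # a more aggressive stemmer than Porter
lemmatiser = WordNetLemmatizer()

for word in ["caring", "studies", "better"]:
    # Stemmers clip affixes by rule, so results differ between algorithms.
    print(word, porter.stem(word), lancaster.stem(word))

# The lemmatiser looks words up in WordNet, so given the part of speech it can
# map an irregular form back to its true lemma.
print(lemmatiser.lemmatize("better", pos="a"))   # adjective -> 'good'
print(lemmatiser.lemmatize("studies", pos="n"))  # noun -> 'study'
```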
Summarisation is an NLP task that is often used in journalism and on the many newspaper sites that need to summarise news stories. Named entity recognition (NER) is also used on these sites to help with tagging and displaying related stories in a hierarchical order on the web page.
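As a rough sketch of NER, NLTK’s built-in named entity chunker can pull people, organisations and places out of part-of-speech-tagged text (it assumes the ‘maxent_ne_chunker’ and ‘words’ resources, plus the tokeniser and tagger from earlier, have been downloaded; production systems often use other libraries, but the idea is the same):

```python
import nltk

# One-off downloads for the named entity chunker (assumed available).
nltk.download("maxent_ne_chunker")
nltk.download("words")

sentence = "The University of York is launching a new course in London."
tagged = nltk.pos_tag(nltk.word_tokenize(sentence))

# ne_chunk() wraps recognised entities in labelled subtrees such as ORGANIZATION or GPE.
tree = nltk.ne_chunk(tagged)
for subtree in tree.subtrees():
    if subtree.label() != "S":  # skip the root of the tree
        print(subtree.label(), " ".join(word for word, tag in subtree.leaves()))
```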
How does AI relate to natural language processing?
Natural language processing – understanding humans – is key to AI being able to justify its claim to intelligence. New deep learning models are constantly improving AI’s performance in Turing tests. Google’s Director of Engineering Ray Kurzweil predicts that AIs will “achieve human levels of intelligence” by 2029.
What humans say, though, is sometimes very different from what humans do, and understanding human nature is not so easy. More intelligent AIs raise the prospect of artificial consciousness, which has created a new field of philosophical and applied research.
Interested in specialising in NLP?
Whether your interest is in data science or artificial intelligence, the world of natural language processing offers solutions to real-world problems all the time. This fascinating and growing area of computer science has the potential to change the face of many industries and sectors, and you could be at the forefront.
Find out more about NLP with an MSc Computer Science with Artificial Intelligence from the University of York.