This month, we will take on the third of the language-oriented APIs: the Natural Language Understanding module. What does this one do?
Last month, we looked at the Natural Language Classifier, which examines a piece of text and determines not so much its tone as its intent: What is the text really about? What are the core issues it reveals?
Before that, we investigated the Language Translator, which translates written text to and from English, French, German, and a multitude of other languages.
The Natural Language Understanding (NLU) API is the third of the three language-oriented Watson applications. In short, it analyzes text passages to determine a number of things (discussed below). For each of these things, it returns a confidence rating…because when you get to the very bottom of Watson, it is probability-based.
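Before we dig into the individual attributes, here's roughly what a call to NLU looks like from code. Watson offers SDKs in several languages; this sketch uses the Python SDK (the ibm-watson package), and the API key, service URL, and version date are placeholders you'd swap for the values from your own IBM Cloud instance.

```python
# A minimal sketch using IBM's Python SDK (pip install ibm-watson).
# The API key and service URL are placeholders; substitute the
# credentials from your own IBM Cloud service instance.
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson.natural_language_understanding_v1 import (
    Features, SentimentOptions, EmotionOptions, KeywordsOptions,
    EntitiesOptions, CategoriesOptions, ConceptsOptions, SemanticRolesOptions,
)

authenticator = IAMAuthenticator("YOUR_API_KEY")
nlu = NaturalLanguageUnderstandingV1(version="2022-04-07", authenticator=authenticator)
nlu.set_service_url("https://api.us-south.natural-language-understanding.watson.cloud.ibm.com")

# Request all seven features in a single call; each one comes back
# as its own top-level key in the JSON response.
response = nlu.analyze(
    text="Your text passage goes here...",
    features=Features(
        sentiment=SentimentOptions(),
        emotion=EmotionOptions(),
        keywords=KeywordsOptions(limit=10),
        entities=EntitiesOptions(limit=10),
        categories=CategoriesOptions(),
        concepts=ConceptsOptions(limit=5),
        semantic_roles=SemanticRolesOptions(),
    ),
).get_result()
```

The snippets in the sections below all read out of this same `response` dictionary.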
What kinds of things does NLU look for? There are seven attributes Watson appraises.
First, sentiment. As we will see below, the second thing NLU looks at is emotion. Normally, I would consider emotion so similar to sentiment that there’s no point in measuring them separately, but Watson sees each differently. Sentiment refers to the overall positive or negative attitude the text expresses toward its topic, rather than to any intense feelings (or emotions). What I thought was interesting is that, while most of the probability values I have seen are between 0 and 1, if you go to the NLU home page and do the demo, you can actually get a negative value for sentiment. That’s because sentiment is scored on a scale from -1 (thoroughly negative) through 0 (neutral) to +1 (thoroughly positive).
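Pulling the sentiment out of the response from the sketch above is about a one-liner; the sample values in the comment are illustrative, not taken from the demo.

```python
# Document-level sentiment: 'score' runs from -1.0 (negative) to +1.0
# (positive), with a 'label' summarizing which side it landed on.
doc_sentiment = response["sentiment"]["document"]
print(doc_sentiment["label"], doc_sentiment["score"])  # e.g., negative -0.42
```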
Second, emotion. Emotion, unlike sentiment, is rated on multiple values: specifically, joy, anger, sadness, fear, and disgust (that’s my favorite, and sometimes I go to this site and write something awful to put in the demo just to see how high a disgust score I can get). Each emotion value is graded on a scale from 0 to 1; unlike sentiment, the emotion scores don’t go negative.
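The five emotion scores come back as a small dictionary inside the response; continuing from the first sketch:

```python
# Document-level emotion: five named scores, each between 0 and 1.
for name, score in response["emotion"]["document"]["emotion"].items():
    print(f"{name}: {score:.2f}")  # anger, disgust, fear, joy, sadness
```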
Third, keywords. In some ways this is about intent, so I am not ruling out a future turf war with the Natural Language Classifier. But I think keywords are always a plus, so why not? For the demo passage (which is about a Viking ship buried in the California desert), Watson picks out about two dozen keywords (things like Desert Ship, Canebrake Canyon, and Ancient American Dreamtime) with relevance scores ranging from .96 down to .74. I don’t know how far down the list goes, but the demo stopped showing it at that point. The keywords selected are exactly the kinds of things you would have entered in a search bar, so you can see how this has immediate applicability.
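Here's a sketch of listing the keywords out of the earlier response; note that in the JSON the score field is named relevance.

```python
# Keywords come back as a list, strongest first; the score field is
# called 'relevance' and runs from 0 to 1.
for kw in response["keywords"]:
    print(f'{kw["text"]}: {kw["relevance"]:.2f}')
```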
Fourth, entities. Everyone knows what entities are. They are things. And to be specific, they are things that are actual things, not ideas. For Watson, this means people, places, organizations, geographic features, stuff like that. It is similar to keywords, but it tends to be oriented around physical things. For example, from the keyword examples above, the entities list includes Canebrake Canyon but not Ancient American Dreamtime. And each entity includes a classification (e.g., geographic feature, person, location, etc.).
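Entities read out much the same way as keywords, with the classification riding along in a type field; again continuing from the first sketch:

```python
# Each entity carries a 'type' classification alongside its text.
for ent in response["entities"]:
    print(f'{ent["text"]} ({ent["type"]}): {ent["relevance"]:.2f}')
```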
Fifth, categories. This one is kind of weird, at least in my opinion. It presents a series of categories, specified as a hierarchy, that the sample text seems to fit into. Each hierarchy can go up to five levels deep, although the deepest this demo goes is four. For example, the demo article is about some people hiking in the desert, so the first category starts with Health/Fitness, descends from there into Disorders, then to Mental Disorders, and finally to Anxiety/Depression.
The other categories for the hiking demo are Travel and Gardening. The travel I can see, and the gardening fits because flowers are involved (it was spring, and the desert was in bloom).
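In the JSON, each category arrives as a single slash-delimited label rather than a nested structure; the label in the comment below is illustrative, not copied from the demo. Continuing from the first sketch:

```python
# Categories come back as slash-delimited hierarchies, up to five
# levels deep, each with a 0-to-1 score.
for cat in response["categories"]:
    print(f'{cat["label"]}: {cat["score"]:.2f}')
    # e.g., /health and fitness/disorders/mental disorder (illustrative)
```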
Sixth, concepts. To me, a concept should capture the overall point of the article: in this case, the idea behind the demo was searching for something of potentially historical significance, or searching for a legend. Watson instead came up with a list including Colorado Desert, Sonoran Desert, desert, Imperial County, Salton Sea, and, strangely enough, prima facie (a Latin phrase that means roughly “at first glance”).
Obviously, “concept” to Watson means something different than what it means to me. But maybe that’s just me. (Yeah, never heard that before in my life.)
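Concepts are a little more interesting under the hood, because each one links back to a DBpedia reference page, which is presumably how prima facie snuck onto the list. Continuing from the first sketch:

```python
# Each concept carries a relevance score plus a link to the DBpedia
# entry Watson matched it against.
for concept in response["concepts"]:
    print(f'{concept["text"]}: {concept["relevance"]:.2f}')
    print(f'  see {concept["dbpedia_resource"]}')
```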
Seventh, semantic roles. Remember diagramming sentences in eighth and ninth grade? Maybe they don’t do that anymore. I remember diagrams that stretched across the chalkboard and ended up looking like a network diagram for the DOD nuclear response program. The Semantic Roles tab gives you that kind of information. The JSON output from Watson is broken down into individual sentences, and each of those is parsed to identify the Subject, Action, and Object. Now that’s what I call detail.
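Here's a sketch of walking the semantic roles in the response from the first sketch; not every sentence parses into all three parts, so the code defaults any missing ones.

```python
# One entry per parsed sentence, each broken into subject, action,
# and object; .get() covers sentences where a part wasn't found.
for role in response["semantic_roles"]:
    subject = role.get("subject", {}).get("text", "-")
    action = role.get("action", {}).get("text", "-")
    obj = role.get("object", {}).get("text", "-")
    print(f"{subject} / {action} / {obj}")
```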
So far, on the demo, we have been looking only at the Text tab. But what’s probably most disturbing is the URL tab. If you click on that and then click on Analyze, Watson does the same analysis on the content of that URL. Feeling adventurous? Then click on the URL and Analyze buttons and take a look at what it gives you. I dare you. In fact, I double-dog dare you!
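Code-wise, the URL trick is just a different parameter on the same call: you pass url= instead of text=, and Watson fetches the page and analyzes its content. This reuses the client and imports from the first sketch, with a placeholder address.

```python
# Same analyze() call, but pointed at a web page instead of raw text.
response = nlu.analyze(
    url="https://example.com/some-article",  # placeholder URL
    features=Features(keywords=KeywordsOptions(limit=10)),
).get_result()
```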
Summary
Obviously, Natural Language Understanding provides some really valuable information.
And, equally obviously, if you are anything like me (God help you if you are), then by now you might be confused. We have gone through the Language Translator, the Natural Language Classifier, and now Natural Language Understanding. So, I think next month we will look at all three at once and try to separate them in your mind. Stay tuned.