IBM has announced that its cognitive computing platform Watson has been upgraded with speech, vision and language capabilities, allowing developers to build smarter apps.
On the language side of things, IBM says Watson can now understand ambiguous language in text through a few different modules. The IBM Watson Natural Language Classifier interprets the intent behind a piece of text, while IBM Watson Dialog makes for more natural app interactions by tailoring language to the style used by the person asking a question.
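To make that concrete, here's a minimal sketch of what querying the Natural Language Classifier over its REST interface might look like, assuming a classifier has already been trained against your own Watson service instance. The credentials and classifier ID are placeholders, and the endpoint follows the Watson gateway conventions of the time rather than anything from this announcement.

```python
import requests

# Placeholder service credentials and classifier ID -- both come from your
# own Watson service instance, not from IBM's announcement.
USERNAME = "your-service-username"
PASSWORD = "your-service-password"
CLASSIFIER_ID = "your-classifier-id"

# Ask a trained Natural Language Classifier which intent a piece of text matches.
url = (
    "https://gateway.watsonplatform.net/natural-language-classifier/api"
    f"/v1/classifiers/{CLASSIFIER_ID}/classify"
)
response = requests.get(
    url,
    params={"text": "Will it rain in Austin tomorrow?"},
    auth=(USERNAME, PASSWORD),
)
response.raise_for_status()

# The service returns candidate classes ranked by confidence.
for c in response.json().get("classes", []):
    print(c["class_name"], c["confidence"])
```

The ranked list of candidate classes is what lets an app route a question to the right handler based on the top match.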
Perhaps more interesting, though, are the new Visual Insights capabilities, which promise to let developers glean insights from images and videos shared on social media by applying reasoning to the content of the images. In this way, developers can assess trends and patterns to “get a more comprehensive view of what users are communicating to get the ‘big picture.’” For now, the API remains experimental.
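Because the API is experimental, the sketch below is purely illustrative: the endpoint path, the zip-upload request shape and the response fields are all assumptions rather than documented behaviour.

```python
import requests

# Hypothetical sketch only: Visual Insights is experimental, so the endpoint
# path and response shape below are assumptions, not documented fact.
USERNAME = "your-service-username"
PASSWORD = "your-service-password"
URL = "https://gateway.watsonplatform.net/visual-insights-experimental/api/v1/summary"

# Upload a zip archive of social-media images and read back aggregate themes.
with open("images.zip", "rb") as archive:
    response = requests.post(
        URL,
        files={"images_file": archive},
        auth=(USERNAME, PASSWORD),
    )
response.raise_for_status()

# Assumed response: a list of detected themes with relative strengths.
for item in response.json().get("summary", []):
    print(item["name"], item["score"])
```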
Watson’s speech capabilities have also been improved with new tools that allow devs to create apps in multiple languages, including Japanese, Mandarin, Spanish and Brazilian Portuguese, with more languages to follow, IBM says.
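As a rough illustration, here's what selecting one of the new language models through the Speech to Text REST interface might look like. The credentials are placeholders, and the model name is an assumption based on Watson's model-naming convention; check the service's model list for exact values.

```python
import requests

# Placeholder credentials; the model name follows Watson's naming convention
# but is an assumption -- consult the service's model list for exact values.
USERNAME = "your-service-username"
PASSWORD = "your-service-password"
URL = "https://stream.watsonplatform.net/speech-to-text/api/v1/recognize"

# Transcribe a Brazilian Portuguese audio clip by selecting a language model.
with open("clip.wav", "rb") as audio:
    response = requests.post(
        URL,
        params={"model": "pt-BR_BroadbandModel"},
        headers={"Content-Type": "audio/wav"},
        data=audio,
        auth=(USERNAME, PASSWORD),
    )
response.raise_for_status()

# Print the best transcript for each recognized segment.
for result in response.json().get("results", []):
    print(result["alternatives"][0]["transcript"])
```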
While the new capabilities are all well and good, they aren’t much use if developers don’t adopt them. To encourage uptake, IBM has also rolled out a new set of developer tools that it promises will cut the time needed to combine Watson APIs with data sets. The company also previewed Watson Knowledge Studio, which will bring its machine learning and text analytics capabilities together in a single tool.
Rounding off the Watson announcements, IBM says it will open a new ‘Watson Hub’ in the South of Market (SoMa) area of San Francisco. The office will serve as a development hub for new cognitive computing capabilities, as well as providing a base for IBM Commerce. It’s due to open in “early” 2016.
➤ Watson [IBM]
Featured image credit – IBM/Flickr