Microsoft expands its Cognitive Services with new machine learning APIs

Amid the flood of news ahead of its flagship Build developer conference, Microsoft announced a large number of new pre-built machine learning models for its Cognitive Services platform. They include an Application Programming Interface (API) for building personalization features, a form recognizer for automating data entry, an API that recognizes handwriting, and an improved speech recognition service focused on transcribing conversations.


Many consider the Personalization API the most significant of these new services, because relatively few websites and apps currently offer their users personalized experiences. The main reason is that personalization has traditionally required building models from data that is rarely found in one place. Microsoft is instead betting on reinforcement learning, a machine learning technique that does not need the labelled training data most machine learning approaches depend on. Rather, a reinforcement learner works toward a set goal by learning from what users actually do. Microsoft claims it is the first company to offer a service of this kind, and says it saw a 40 percent increase in engagement with Xbox content shortly after it began using the service there.
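To picture what that goal-driven loop looks like in practice, here is a minimal sketch of a rank-and-reward cycle against a personalization endpoint. The resource name, endpoint paths and payload shapes below are illustrative assumptions, not a guaranteed reflection of the Personalizer API contract.

```python
import requests

# Illustrative sketch of a reinforcement-learning personalization loop.
# Endpoint paths, payload fields and the response key below are assumptions
# for illustration; check the Personalizer documentation for the real contract.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/personalizer/v1.0"
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>", "Content-Type": "application/json"}

# 1) Ask the service to rank candidate items for the current user context.
rank_request = {
    "eventId": "session-42",
    "contextFeatures": [{"timeOfDay": "evening", "device": "xbox"}],
    "actions": [
        {"id": "racing-game", "features": [{"genre": "racing"}]},
        {"id": "rpg-game", "features": [{"genre": "rpg"}]},
    ],
}
rank = requests.post(f"{ENDPOINT}/rank", headers=HEADERS, json=rank_request).json()
chosen = rank["rewardActionId"]  # the item the service suggests showing first
print("Show:", chosen)

# 2) Report back how the user responded (e.g. clicked = 1.0, ignored = 0.0).
# This reward signal, rather than a pre-labelled training set, is what the
# reinforcement learner optimizes toward.
requests.post(f"{ENDPOINT}/events/session-42/reward", headers=HEADERS, json={"value": 1.0})
```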

The handwriting recognition API, which Microsoft formally calls Ink Recognizer, can automatically recognize handwriting as well as common shapes and the layout of inked documents. Microsoft has worked on this technology for a long time, notably while improving the inking capabilities of Windows 10. Windows and Office 365 already use the service, and with this API developers can bring the same capabilities to their own apps.
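For a sense of how an app might hand ink over to such a service, here is a rough sketch that submits two strokes, one handwritten and one shape-like, and prints what comes back. The endpoint, request method and payload schema are assumptions for illustration; the actual Ink Recognizer contract may differ.

```python
import requests

# Hypothetical call to a handwriting/shape recognition endpoint.
# URL, HTTP method and field names are illustrative assumptions.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/inkrecognizer/v1.0-preview/recognize"
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>", "Content-Type": "application/json"}

# Each stroke is the list of points captured while the pen or finger was down.
payload = {
    "language": "en-US",
    "strokes": [
        {"id": 1, "points": [{"x": 10, "y": 10}, {"x": 20, "y": 12}, {"x": 30, "y": 15}]},
        {"id": 2, "points": [{"x": 50, "y": 50}, {"x": 90, "y": 50},
                             {"x": 90, "y": 90}, {"x": 50, "y": 90}, {"x": 50, "y": 50}]},
    ],
}

response = requests.put(ENDPOINT, headers=HEADERS, json=payload)
for unit in response.json().get("recognitionUnits", []):
    # Units may be recognized words, lines of text or drawn shapes.
    print(unit.get("category"), unit.get("recognizedText") or unit.get("recognizedObject"))
```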

Conversation Transcription, as the name suggests, transcribes conversations. It is part of the existing speech-to-text capabilities on the Cognitive Services platform, and it can identify and distinguish different speakers, transcribe conversations in real time, and handle crosstalk.

The Form Recognizer makes it easier to extract text and data from business forms and documents. That may not sound like a big deal, but it addresses a very common problem. The service needs only five samples to learn how it should extract data, so users do not have to manually label the documents they want to process, as earlier systems required.
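As a rough illustration of that train-then-extract flow, the sketch below trains a custom model on a small container of sample forms and then reads key/value pairs out of a new invoice. Endpoint paths, parameters and response fields are assumptions for illustration rather than the exact API surface.

```python
import requests

# Sketch of the two-step flow described above: train on ~5 sample forms,
# then analyze a new document. Paths and fields are illustrative assumptions.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/formrecognizer/v1.0-preview"
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>"}

# 1) Train a custom model from a blob container holding the sample forms
#    (no manual labelling of the documents is needed).
train = requests.post(
    f"{ENDPOINT}/custom/train",
    headers={**HEADERS, "Content-Type": "application/json"},
    json={"source": "https://<storage-account>.blob.core.windows.net/sample-forms?<sas-token>"},
)
model_id = train.json()["modelId"]

# 2) Analyze a new form with the trained model and print the extracted pairs.
with open("invoice.pdf", "rb") as f:
    analysis = requests.post(
        f"{ENDPOINT}/custom/models/{model_id}/analyze",
        headers={**HEADERS, "Content-Type": "application/pdf"},
        data=f.read(),
    )

for page in analysis.json().get("pages", []):
    for pair in page.get("keyValuePairs", []):
        key_text = " ".join(token["text"] for token in pair.get("key", []))
        value_text = " ".join(token["text"] for token in pair.get("value", []))
        print(key_text, "->", value_text)
```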

Form Recognizer will also be available in Cognitive Services containers, which give developers a way to take the models beyond Azure and onto edge devices. The same goes for the speech-to-text and text-to-speech services, as well as the Anomaly Detector.

Microsoft also announced that its Neural Text-to-Speech, Computer Vision Read and Text Analytics Named Entity Recognition APIs are now generally available.

Several existing services are getting upgrades as well: the Neural Text-to-Speech service now supports five voices, and the Computer Vision API can now recognize more than 10,000 concepts, scenes and objects, as well as one million celebrities, up from 200,000 in the previous version.


