
Apple Develops a New AI Framework

Apple has developed a new AI framework that uses signals from users' own interactions to automate data annotation.

Personal assistants like Apple's Siri accomplish their tasks via natural language commands. However, their underlying components usually rely on supervised machine learning algorithms, which require large amounts of manually annotated training data. To reduce the time and effort of collecting that data, Apple researchers developed a framework that uses user engagement signals to automatically generate labels for augmenting the training data. They report that the annotated data significantly improves the accuracy of a production deep learning system when combined with strategies such as multi-task learning and validation against an external knowledge base.

“We believe this is the first use of user engagement signals to help generate training data for a sequence labeling task at scale, and it can be applied in practical settings to speed up the deployment of new features without manually annotated data,” the researchers wrote in a paper to be published. “In addition, user engagement signals can help us learn from the digital assistant's own mistakes and identify areas for improvement.”

The researchers used a series of heuristics to identify behaviors indicating positive or negative engagement. These include clicking on content to engage with it further (a positive response), listening to a song for a long time (another positive response), or interrupting content the assistant is playing and manually selecting different content (a negative response). These signals are collected selectively, in a privacy-preserving manner, and used to automatically generate ground-truth annotations, which are then combined with coarse-grained labels provided by human annotators.
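None of these heuristics are published as code, but the mapping from interaction signals to weak labels is easy to sketch. The following minimal Python example is a hypothetical illustration: the `InteractionLog` fields and the 30-second listen threshold are assumptions, not Apple's actual rules.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InteractionLog:
    """Hypothetical record of how a user reacted to an assistant response."""
    clicked_content: bool          # user tapped the returned result
    listen_seconds: float          # how long the returned song played
    interrupted: bool              # user cut the response off
    manually_selected_other: bool  # user then picked different content

# Assumed threshold: treat a listen longer than 30 seconds as engagement.
LONG_LISTEN_SECONDS = 30.0

def engagement_label(log: InteractionLog) -> Optional[str]:
    """Map an interaction to a weak 'positive'/'negative' label, or None
    when the signals are ambiguous and the example should be discarded."""
    if log.interrupted and log.manually_selected_other:
        return "negative"   # user rejected the response and chose other content
    if log.clicked_content or log.listen_seconds >= LONG_LISTEN_SECONDS:
        return "positive"   # click-through or a long listen suggests success
    return None             # no clear signal; skip this example

# Example: a song that played for three minutes counts as positive.
print(engagement_label(InteractionLog(False, 180.0, False, False)))  # positive
```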

To merge the coarse-grained labels and the inferred fine-grained labels into a single AI model, the paper's coauthors designed a multi-task learning framework that treats coarse-grained and fine-grained entity labeling as two tasks. They also incorporated an external knowledge base validator consisting of entities and their relationships. Take the query “Play something by the Beatles,” where “something” may be a music title and “the Beatles” a music artist: the validator expands the search to alternatives to the top-level hypothesis and passes them to a component that re-ranks the predictions and returns the best alternative.
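The validator is described only at a high level; as a rough illustration of that validate-and-re-rank step, here is a hypothetical Python sketch built around the “Play something by the Beatles” example. The toy knowledge base, the scores, and the function names are all invented for illustration and are not Apple's components.

```python
# Toy knowledge base mapping artists to their known song titles.
KNOWLEDGE_BASE = {
    "the beatles": {"something", "help", "let it be"},
}

def kb_validates(song: str, artist: str) -> bool:
    """True if the (song, artist) pair exists in the knowledge base."""
    return song.lower() in KNOWLEDGE_BASE.get(artist.lower(), set())

def rerank(candidates: list[tuple[str, float]], artist: str) -> str:
    """Prefer the highest-scoring song hypothesis that the knowledge base
    confirms; fall back to the raw top hypothesis if none validates."""
    validated = [c for c in candidates if kb_validates(c[0], artist)]
    pool = validated or candidates
    return max(pool, key=lambda c: c[1])[0]

# "Play something by the Beatles": the tagger's top hypothesis treats
# "something" as a non-entity word, but the validator confirms it is a
# Beatles track title, so the re-ranker promotes that reading.
hypotheses = [("<non-entity>", 0.55), ("something", 0.45)]
print(rerank(hypotheses, "The Beatles"))  # -> "something"
```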

The researchers evaluated the multi-task model on two independent test sets, built by randomly sampling examples from the production system and manually annotating the ground-truth labels. They said that across 21 model runs, adding 260,000 weakly supervised training examples “consistently” reduced the coarse-grained entity error rate on the prediction task, compared with a baseline trained only on manually annotated data. They also report that adding the fine-grained, weakly supervised data has a greater impact when the amount of manually annotated data is relatively small (5,000 examples). Finally, they report that the fine-grained entity error rate dropped by about 50% on examples whose top model hypothesis passed the knowledge base validator.
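The multi-task model being evaluated here is not specified in code, but a shared encoder with one tagging head per label granularity is one plausible shape for it. Below is a minimal, hypothetical PyTorch sketch under that assumption; the BiLSTM encoder, layer sizes, and joint loss are illustrative choices, not Apple's architecture.

```python
import torch
import torch.nn as nn

class MultiTaskTagger(nn.Module):
    """Sketch of a shared encoder with two tagging heads: one for
    coarse-grained entity labels, one for fine-grained entity labels."""
    def __init__(self, vocab_size, hidden, n_coarse, n_fine):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True,
                               bidirectional=True)
        self.coarse_head = nn.Linear(2 * hidden, n_coarse)
        self.fine_head = nn.Linear(2 * hidden, n_fine)

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))
        return self.coarse_head(states), self.fine_head(states)

model = MultiTaskTagger(vocab_size=10_000, hidden=128, n_coarse=5, n_fine=20)
tokens = torch.randint(0, 10_000, (1, 6))  # e.g. "play something by the beatles"
coarse_logits, fine_logits = model(tokens)

# Joint loss: coarse tags come from human annotators, fine tags from the
# engagement-derived weak labels; both supervise the shared encoder.
coarse_gold = torch.randint(0, 5, (1, 6))
fine_gold = torch.randint(0, 20, (1, 6))
loss = (nn.functional.cross_entropy(coarse_logits.transpose(1, 2), coarse_gold)
        + nn.functional.cross_entropy(fine_logits.transpose(1, 2), fine_gold))
loss.backward()
```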

In another experiment, the team sought to determine whether finer-grained annotation of user intent would increase the likelihood of the system choosing the correct action. They collected about 5,000 “play music” commands containing references to various bands, artists, and songs, and ran them through a system incorporating their framework. They then asked annotators to rate the responses the system returned as “satisfactory” or “unsatisfactory.” The researchers reported that the enhanced system reduced the task error rate by 24.64%.

The team will continue to explore how individual users' engagement can be used to enhance personalization.

“We have observed that our model improves the final results received by users, especially for requests containing difficult or unusual language patterns,” the paper's coauthors wrote. “For example, the enhanced system can correctly handle queries such as ‘Can you play Malibu on Miley Cyrus's new album’ and ‘Humble on Kendrick Lamar.’ In addition, the enhanced model can identify the entities users are more likely to mean when a query is genuinely ambiguous. For example, in ‘Play One by Metallica,’ ‘One’ can be a non-entity tag (meaning any song by Metallica), or it can refer specifically to the Metallica song called ‘One.’ Since most users who say ‘Play One by Metallica’ listen to Metallica's ‘One,’ our model predicts what ‘One’ really means based on the user-engagement-derived annotations, better capturing the trends and preferences of the user community.”
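The “Play One by Metallica” example suggests a simple picture of how aggregated engagement can resolve ambiguity: prefer the interpretation users most often accept. The sketch below is purely illustrative; the interpretation keys and counts are invented, and the model described in the paper learns this from the weakly labeled data rather than by direct counting.

```python
from collections import Counter

# Hypothetical aggregated engagement for the ambiguous query
# "Play One by Metallica": each key is a candidate reading of "One",
# each value is how many users engaged positively with that reading.
positive_engagements = Counter({
    "song:One": 940,        # played the specific track "One"
    "non-entity:any": 60,   # played an arbitrary Metallica song
})

def resolve_ambiguity(counts: Counter) -> str:
    """Pick the interpretation the user community most often accepts."""
    interpretation, _ = counts.most_common(1)[0]
    return interpretation

print(resolve_ambiguity(positive_engagements))  # -> "song:One"
```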

An earlier paper described Overton, an Apple AI development tool whose models handle billions of queries. Apple has also recently studied whether users prefer talking to “chatty” AI assistants.
