Meta Unveils AI Project That Powers Conversations in the Metaverse

Meta’s Project CAIRaoke is currently being used on its smart video display platform, Portal.
Image source: Meta

Quick take:

  • Project CAIRaoke is a neural model for building on-device assistants.
  • Mark Zuckerberg said Meta is working on AI research to allow people to have natural conversations with voice assistants.
  • The company is also working on an AI system that can translate between all written languages. 

Meta today announced its conversational AI project, dubbed Project CAIRaoke. The company is working on artificial intelligence research to power more personal and contextual conversations between humans and voice assistants.

The project is an “end-to-end neural model for building on-device assistants.” In the blog post announcing Project CAIRaoke, Meta says it is already using the model on Portal, its smart video display platform.

“With models created with Project CAIRaoke, people will be able to talk naturally with their conversational assistants, so they can refer back to something from earlier in a conversation, change topics altogether, or mention things that rely on understanding complex, nuanced context,” the blog post states. “They will also be able to interact with them in new ways, such as by using gestures.”

In the future, the company aims to integrate Project CAIRaoke with “augmented and virtual reality devices to enable immersive, multimodal interactions with assistants.”

Speaking at Meta’s live-streamed “Inside the Lab” event today, Meta CEO Mark Zuckerberg said Project CAIRaoke is a step towards the way people will communicate with AI in the metaverse.

He also said that Meta is building a new class of generative AI models that can construct aspects of a virtual world from a user’s description of it. Zuckerberg showcased one such model, Builder Bot.

In a demo, Zuckerberg appeared as a legless avatar of himself on an island and used voice commands to have the bot generate a beach, clouds, trees and a picnic blanket.

“As we advance this technology further, you’ll be able to create nuanced worlds to explore and share experiences with others, with just your voice,” said Zuckerberg.

He also said that Meta is pursuing “self-supervised learning”, an approach in which AI learns from raw, unlabelled data rather than from manually labelled examples, to prepare for interpreting and predicting the kinds of interactions that could take place in the metaverse.
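
For readers unfamiliar with the term, the short Python sketch below (an illustration written for this article, not code from Meta) shows the core idea: the training target is carved directly out of the raw data, so no human labelling step is needed. The sine-wave data, window size and linear predictor are all arbitrary choices made for the example.

```python
import numpy as np

# Illustrative sketch of self-supervised learning: the model's training
# targets come from the raw data itself, not from human annotations.
# The pretext task here: predict the next value of a sequence from the
# previous three values.

rng = np.random.default_rng(0)

# Raw, unlabelled data: a noisy sine wave.
t = np.linspace(0, 8 * np.pi, 400)
signal = np.sin(t) + 0.05 * rng.standard_normal(t.shape)

# Build (context, target) pairs straight from the signal: the value after
# each window is the target, so the data supervises itself.
window = 3
X = np.stack([signal[i : i + window] for i in range(len(signal) - window)])
y = signal[window:]

# Fit a linear predictor by least squares, standing in for the large
# neural models the article describes.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

preds = X @ w
mse = np.mean((preds - y) ** 2)
print(f"mean squared error on the self-supervised task: {mse:.4f}")
```

A production system would replace the linear predictor with a large neural network, but the key property is the same: the labels are derived from the raw data rather than supplied by people.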

The company is also working with egocentric data, which lets AI see the world from a first-person perspective. It has assembled a global consortium of 13 universities and labs to build the largest-ever egocentric dataset, called Ego4D.

Zuckerberg also announced that Meta is working on an AI system that can translate between all written languages, as well as a universal, instantaneous speech-to-speech translator for all languages.

Meta also filed a patent application for “3D conversations in an artificial reality environment” on 27 January. According to the application’s abstract, the 3D conversation system allows participants in an augmented reality environment to appear “as if they are face to face”.
