Meta is taking a giant leap into artificial intelligence (AI) that can solve problems like a human, without human intervention, as companies race to gain a foothold in the technology.
CEO Mark Zuckerberg said in a video posted on Instagram that his company was developing artificial general intelligence (AGI) and that the company’s long-term vision is to “build general intelligence, open source it responsibly, and make it widely available so everyone can benefit”.
“It’s become clear that the next generation of services required is building full general intelligence, building the best AI assistants, AIs for creators, AIs for businesses and more, that need advances in every area of AI from reasoning to planning to coding to memory and other cognitive abilities,” he said in the video posted on Thursday.
The open-source model allows computer code to be freely copied and reused, giving anyone permission to build their own chatbot. But OpenAI and Google have previously warned that open-source AI can be dangerous, because the technology can be used to spread disinformation.
At the heart of the project is Llama 3, a large language model (LLM) – software that can comprehend and generate human-language text. It is unclear when it will be released, but it has been widely speculated that it will arrive this year.
To power Llama 3, Meta is buying 350,000 H100 chips from Nvidia – graphics processing units originally designed for rendering images on screens but now widely used to train AI models.
Zuckerberg said that the company’s “future roadmap” to meet its AI ambitions is to build a “massive compute infrastructure”.
Zuckerberg also said the company is bringing two of its AI research teams – FAIR and GenAI – closer together, with the aim of building full general intelligence and open-sourcing it as much as possible.
He also talked up the metaverse and the recently launched Meta Ray-Ban smart glasses, saying that “people are also going to need new devices for AI and this brings together AI and metaverse over time”.
“I think a lot of us are going to talk to AI as frequently throughout the day. And I think a lot of us are going to do that using glasses. These glasses are the ideal form factor for letting an AI see what you see and hear what you hear. So it’s always available to help out,” he said.