Anthropic’s quest for better, more understandable AI raises $580 million

Less than a year ago, Anthropic was founded by former OpenAI VP of Research Dario Amodei, with the intention of conducting research in the public interest to make AI more reliable and understandable. Its $124 million in initial funding was remarkable at the time, but nothing could have prepared us for the company raising $580 million less than a year later.

“With this fundraise, we’re going to explore the predictable scaling properties of machine learning systems, while closely examining the unpredictable ways in which safety challenges and opportunities can emerge at scale,” Amodei said in the announcement.

His sister Daniela, with whom he co-founded the public benefit corporation, said that in building the company, “we focused on ensuring Anthropic had the culture and governance to continue to responsibly research and develop safe AI systems as we scale.”

There’s that word again: scale. That is the category of problem Anthropic was created to study: how to better understand the AI models increasingly in use across every industry, as they grow beyond our ability to explain their logic and outcomes.

The company has already published several papers that examine, for example, reverse engineering the behavior of language models to understand why and how they produce the results they do. Something like GPT-3, probably the most famous language model, is undoubtedly impressive, but there is something unsettling about the fact that its internal operations remain essentially a mystery even to its creators.

As explained in the new funding announcement:

The goal of this research is to develop the technical components needed to build large-scale models that have better implicit safeguards and require less post-training intervention, as well as the tools needed to look further inside these models to make sure the safeguards actually work.

If you don’t understand how an AI system works, you can only react when it does something wrong — such as exhibiting bias in facial recognition, or tending to draw or describe men when asked about doctors and CEOs. That behavior is baked into the model, so the only recourse is to filter its outputs after the fact, rather than preventing those incorrect “notions” from forming in the first place.

This amounts to a fundamental change in how AI is built and understood, and it requires big brains and big computers, neither of which comes cheap. No doubt $124 million was a good start, but apparently the early results were promising enough for Sam Bankman-Fried to lead this huge new round, joined by Caroline Ellison, Jim McClave, Nishad Singh, Jaan Tallinn and the Center for Emerging Risk Research.

Interestingly, this group does not include any of the usual deep-tech investors — but then, Anthropic is not looking to turn a profit, which is something of a nonstarter for VCs.

You can follow the latest Anthropic research here.


Credit: techcrunch.com
