Google and Anthropic seek to dominate AI with new chips – Decrypt

Google is forging a deeper alliance with Anthropic, the startup behind ChatGPT rival Claude AI, offering its own specialized computer chips to power Anthropic's models.

The partnership has been bolstered by a significant financial injection from Google into Anthropic. As Decrypt previously reported, Google’s commitment included a 10% stake purchased for $300 million, followed by an additional $500 million in funding and a promise of $1.5 billion in further investment.

“Anthropic and Google Cloud share the same values in the development of artificial intelligence,” said Google Cloud CEO Thomas Kurian in an official press release. “This broad collaboration with Anthropic, building on years of working together, brings AI safely and securely to more people and provides another example of how the most innovative and fastest-growing AI startups are being built on Google Cloud.”

Anthropic uses Google Cloud’s fifth generation Tensor Processing Units (TPUs) to perform AI inference, the process by which a trained AI model makes predictions or decisions based on new input data.
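At its simplest, inference is a forward pass: fixed, already-trained parameters are applied to input the model has never seen. A minimal sketch with a toy linear model (hypothetical weights, nothing like Anthropic's actual stack) illustrates the idea:

```python
# Minimal sketch of inference: applying a trained model's fixed
# weights to new input. Toy linear model, not any real AI system.
def predict(weights, bias, features):
    # Weighted sum of inputs plus bias -- the core of a forward pass.
    return sum(w * x for w, x in zip(weights, features)) + bias

# "Trained" parameters (hypothetical values, frozen at inference time).
weights = [0.4, -0.2, 0.1]
bias = 0.5

# New input data the model has never seen before.
new_input = [1.0, 2.0, 3.0]
print(round(predict(weights, bias, new_input), 2))  # ~0.8
```

Production LLM inference runs the same principle at vastly larger scale, which is why accelerators like TPUs matter: billions of these multiply-accumulate operations must happen per token generated.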

Such strategic moves by technology leaders underscore the fierce competition and high stakes in the development of increasingly sophisticated AI. The most notable partnership in the AI space is the collaboration between Microsoft and OpenAI, with $10 billion on the table.

But what do these technological advances mean for AI chatbots and the tools people use every day? It comes down to the fundamental differences between the two processor families that power AI: GPUs and TPUs.

Graphics processing units (GPUs), the backbone of artificial intelligence computing, are adept at performing many operations simultaneously. They are versatile and widely used, not only in gaming and graphics rendering, but also in accelerating deep learning tasks.

In contrast, Tensor Processing Units (TPUs) are Google’s brainchild, custom-designed to turbocharge machine learning workflows. TPUs accelerate specific tasks, providing faster training times and better energy efficiency, which are critical when processing the massive data sets that LLMs like Anthropic’s Claude require.

The distinction between these processors is clear: GPUs (such as those used by OpenAI) offer a wide range of applications, but TPUs focus performance on machine learning. This suggests that for startups like Anthropic that rely on massive amounts of data to refine their models, Google’s TPUs may have a compelling advantage, potentially leading to faster developments and more accurate AI interactions.

On the other hand, recent OpenAI developments, especially GPT-4 Turbo, challenge any perceived leadership by Anthropic. The all-new Turbo model offers a 128K-token context window, a significant jump from the previous 8K milestone and a blow to Anthropic’s prior edge with Claude’s 100K-token capability.
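To put those context-window figures in perspective, a common rule of thumb is roughly 0.75 English words per token (a heuristic only; real counts vary by tokenizer and text, and this is not an official conversion from either vendor):

```python
# Rough scale of the context windows mentioned above, using the
# common ~0.75 words-per-token rule of thumb. This is a heuristic,
# not a real tokenizer -- actual counts vary by model and text.
WORDS_PER_TOKEN = 0.75

for name, tokens in [("GPT-4 (8K)", 8_000),
                     ("Claude (100K)", 100_000),
                     ("GPT-4 Turbo (128K)", 128_000)]:
    words = int(tokens * WORDS_PER_TOKEN)
    print(f"{name}: roughly {words:,} words per prompt")
```

By this rough measure, a 128K window fits on the order of 96,000 words of prompt and conversation history, versus about 6,000 for the older 8K models.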

However, the battle is not without its nuances. Google's powerful TPUs could help Anthropic develop a more capable LLM faster. And the larger context window, while attractive, is a double-edged sword: very long prompts can degrade response quality with current models.

As the AI race heats up, Anthropic may now have a golden ticket thanks to Google’s heavy backing. But it will have to play its cards right, because OpenAI isn’t resting on its laurels, and it has Microsoft’s fast track at its disposal.

Edited by Ryan Ozawa.


Image Source : decrypt.co
