Amazon's AWS chip lab in Austin is developing its Trainium AI chips. The latest Trainium3 chip promises cost reductions of up to 50% at comparable performance relative to traditional cloud servers. The chips are now also optimized for AI inference, underpinning services such as Amazon Bedrock and supporting mainstream AI models including Anthropic's Claude, which has already run on more than 1 million Trainium2 chips. Amazon recently reached an agreement with OpenAI to supply 2 gigawatts of Trainium capacity. The lab team focuses on rapid "launch" cycles and chip design, aiming to offer a cost-effective alternative to Nvidia's dominant GPUs.
