Google Cloud Deploys AMD-Powered AI Servers, Delivering 80% Speed Boost
April 25, 2022 - Google Cloud has announced that it is deploying new AI servers powered by AMD's (NASDAQ: AMD) latest EPYC processors. The servers, designed to accelerate machine learning workloads, are expected to deliver a significant performance gain, with AMD claiming up to an 80% increase in speed over previous generations.
The new servers, which are part of Google Cloud's A2 AI platform, will be used to support a wide range of AI applications, including natural language processing, computer vision, and predictive analytics. The platform is designed to be highly scalable and flexible, allowing customers to easily deploy and manage AI workloads in the cloud.
According to AMD, the EPYC processors used in the new servers can deliver up to 80% more performance than previous generations, thanks to architectural improvements and higher core density. The processors also include hardware security features, with AMD's Secure Encrypted Virtualization (SEV) providing an additional layer of protection for sensitive data by encrypting virtual machine memory.
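Google Cloud exposes AMD SEV through its Confidential VM offering, which runs on EPYC-based machine types. As a hedged sketch of how a customer might provision such an instance with the `gcloud` CLI (the instance name, zone, and image are placeholder assumptions; availability and supported machine types vary by region):

```shell
# Create a Confidential VM on an AMD EPYC-based N2D machine type.
# --confidential-compute enables AMD SEV memory encryption;
# Confidential VMs require a TERMINATE maintenance policy and a
# supported guest image.
gcloud compute instances create my-confidential-vm \
  --zone=us-central1-a \
  --machine-type=n2d-standard-4 \
  --confidential-compute \
  --maintenance-policy=TERMINATE \
  --image-family=ubuntu-2004-lts \
  --image-project=ubuntu-os-cloud
```

With SEV enabled, the VM's memory is encrypted with a key managed by the processor's security coprocessor, so the data remains protected even from the host hypervisor.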
"We are thrilled to be working with Google Cloud to bring the power of AMD's EPYC processors to the A2 AI platform," said a senior AMD executive in the company's Computing and Graphics Group. "Our processors are designed to deliver exceptional performance and security, and we believe they will play a key role in helping Google Cloud's customers accelerate their AI workloads and achieve their goals."
The new AI servers are expected to be available in the coming weeks, with Google Cloud offering a range of pricing options to suit different customer needs. The company is also offering complementary AI services and tools, including data analytics, machine learning, and computer vision, to help customers get the most out of their AI workloads.