Uber is expanding its artificial intelligence infrastructure by adopting custom chips from Amazon, deepening its reliance on Amazon Web Services (AWS) for high-performance computing.
Under the agreement, Uber will use AWS's Graviton processors for general-purpose computing and Trainium chips, which are purpose-built for AI model training. The move is aimed at improving core platform functions such as ride matching, delivery logistics and personalized user experiences.
Adopting custom silicon lets Uber improve performance and reduce costs relative to traditional GPU-based systems, while scaling its AI capabilities to handle increasingly complex workloads.
For Amazon, the partnership underscores its strategy of positioning AWS as a competitive alternative in the AI hardware market, challenging established chipmakers by offering in-house silicon tailored for machine learning and cloud computing.
The collaboration reflects a broader industry shift, where companies are diversifying away from reliance on a single chip provider and exploring specialized hardware to meet the growing demands of AI-driven applications.
As competition intensifies in both ride-hailing and cloud computing, the adoption of custom AI infrastructure is becoming a key differentiator for companies seeking efficiency and performance at scale.