Nvidia has developed an AI system known as ChipNeMo that aims to speed up the production of its AI GPUs. It is essentially a Large Language Model (LLM) that harvests data from Nvidia’s internal processes. It then uses this data to help accelerate the design of those very same AI GPUs that it runs on. Yup, just like Skynet, Nvidia has equipped its AI with knowledge on how to build and improve itself. We’re all doomed.
There is an insatiable demand for AI, and Nvidia’s chips are selling like hotcakes. The company is struggling to keep up with production of its A100 and H100 compute GPUs to meet this hunger. In Q3 FY24, Nvidia’s revenue increased 206% year-on-year, marking a record high for the company. That equates to approximately $18.12 billion in sales, proving AI is an incredibly profitable source of income for Team Green. Nvidia shows no sign of stopping and is projected to exceed half a million units sold by the end of its fiscal year in February. However, the only way Nvidia can achieve this milestone is by increasing production, and that is where ChipNeMo comes in.
The AI Wars
According to the Wall Street Journal (via Business Insider), ChipNeMo is already in use at Nvidia’s production facilities. It has reportedly proven useful for training junior engineers to design chips and for easily accessing information across 100 different teams. Specifically, the AI chatbot can respond to queries related to GPU architecture, efficiently parse data, and catch bugs early in the design process. Plus, it can assist in generating chip design code. Suffice it to say, it’s an incredibly useful tool indeed.
Nvidia is not the only one attempting to use AI to accelerate the design stage of semiconductors. Google’s DeepMind has an AI system it claims could speed up the design process of its latest custom SoCs. Software giant Synopsys also launched an AI tool designed to boost productivity among chip engineers. It’s no wonder Intel is also primed to take advantage of the AI revolution.
Nvidia also has to compete with tech giants like AMD, Apple, ARM, MediaTek, Broadcom, and Qualcomm for access to the best semiconductor nodes TSMC has on offer. Therefore, it has no choice but to streamline its internal design processes if it wants to make enough chips to satiate demand.
Nevertheless, it is surprising to see how far AI deep learning and LLMs have come in such a short space of time. We’ve all had fun trying ChatGPT or Google Bard. However, the technology has now evolved in a much more meaningful way, helping to create the very hardware and software it was designed with. What’s equally surprising (and scary) is the willingness of big tech giants to equip these AI models with so much critical information. This reminds me: I, for one, welcome our new AI overlords.