Copilot will soon run locally on PCs, as Intel and Microsoft nail down AI PC requirements
The evolution of artificial intelligence (AI) continues to reshape how we interact with technology, and a significant leap is on the horizon with the introduction of AI-powered personal computers. In a collaborative effort spanning several months, Microsoft and its semiconductor partners have been promoting the concept of the “AI PC”: a new class of computers that integrates generative AI and large language models to enhance user experience and functionality. Yet, until recently, the criteria defining an AI PC remained somewhat nebulous.
At a session held in Taipei during its AI summit, Intel shared fresh details that clarify what constitutes an AI PC, aligning closely with Microsoft’s vision. The centerpiece was the adaptation of Windows PCs to use Neural Processing Units (NPUs) to run Microsoft’s Copilot AI assistant directly on the machine, moving away from its current cloud-based operation. This shift toward local processing marks a significant step forward in PC technology.
Copilot, initially launched for Windows 11 users, has so far relied on cloud computing. With the anticipated 24H2 update, expected this summer, Copilot will begin executing many of its tasks directly on the PC using NPUs, significantly reducing its dependence on cloud connectivity. Although a specific timeline for the transition wasn’t provided, the initiative is a clear nod toward enhancing privacy and speeding up response times by processing tasks locally.
With the dawn of this new AI PC era, Microsoft plans to introduce a dedicated Copilot key on Windows keyboards, a move Intel is backing by including the key in its AI PC standards. The baseline requirements for an AI PC comprise a trio of components: a CPU, a GPU, and an NPU. Intel further specified a performance criterion of 40 TOPS (tera operations per second) for NPUs in the forthcoming generation of AI PCs, underscoring the central role NPUs are slated to play.
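To put the 40 TOPS figure in perspective, peak NPU throughput is commonly estimated from the number of multiply-accumulate (MAC) units and the clock speed, with each MAC counted as two operations. The sketch below illustrates that arithmetic with a hypothetical NPU configuration (the unit counts and clock are illustrative, not Intel’s actual silicon):

```python
# Rough back-of-the-envelope estimate of peak NPU throughput in TOPS
# (tera operations per second). A common convention counts each
# multiply-accumulate (MAC) as 2 operations.
def tops(mac_units: int, clock_hz: float, ops_per_mac: int = 2) -> float:
    """Peak throughput in units of 1e12 operations per second."""
    return mac_units * ops_per_mac * clock_hz / 1e12

# A hypothetical NPU with 16,384 MAC units clocked at 1.25 GHz would peak
# at about 41 TOPS, just above the 40 TOPS criterion mentioned above.
print(round(tops(16384, 1.25e9), 2))
```

Real-world throughput depends heavily on precision (INT8 vs. FP16), memory bandwidth, and utilization, so vendor TOPS figures are best read as theoretical peaks.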
NPUs are starting to make their presence known in Intel’s Meteor Lake, AMD’s Strix Point, and Qualcomm’s Snapdragon X Elite platforms. In parallel, recent graphics cards from Nvidia, Intel, and AMD have shown they can handle AI tasks locally, such as Nvidia’s RTX-powered chatbot. According to Intel, the purpose of embracing NPUs is to offload intensive processing from CPUs and GPUs, a strategy that has demonstrated improvements in battery life. However, AI applications tend to be memory-intensive, and the implications for AI PC system specifications remain to be fully understood.
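The offloading strategy described above amounts to a simple priority scheme: run AI workloads on the NPU when one is available, and fall back to the GPU or CPU otherwise. A minimal sketch of that idea, with an illustrative device-selection function (the device names and probe set are hypothetical, not a real Windows or Intel API):

```python
# Hypothetical sketch of NPU-first workload routing. The idea: prefer the
# NPU to keep the CPU/GPU free (and save battery), falling back when no
# NPU is present. Device names here are illustrative placeholders.
def pick_device(available: set[str]) -> str:
    """Return the preferred compute device in priority order NPU > GPU > CPU."""
    for device in ("npu", "gpu", "cpu"):
        if device in available:
            return device
    raise RuntimeError("no compute device found")

print(pick_device({"cpu", "gpu", "npu"}))  # an AI PC routes to the NPU
print(pick_device({"cpu", "gpu"}))         # older hardware falls back to the GPU
```

In practice, runtimes that support multiple accelerators express this same preference as an ordered list of backends rather than a hard requirement, so the same application binary runs on both AI PCs and older machines.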
In addition to hardware advancements, Intel’s initiative in Taipei also introduced a plan to give developers the tools to craft innovative software that leverages AI. The current landscape of AI applications spans text and image generation, image processing, and advanced search. Recent Windows Insider builds suggest Microsoft is experimenting with features that would let Copilot automate navigation through settings, locate items from text descriptions, and help users streamline their workflows.
The efforts of Microsoft and Intel herald a transformative phase in how we engage with our PCs, imbuing them with a level of intelligence and interactivity that was previously the domain of science fiction. As this technology continues to evolve, the potential for AI to amplify our day-to-day computing experiences seems boundless.