The reason Intel is partnering with more than 100 software developers on more than 300 AI-accelerated features is simple: Intel has built AI capabilities into its 14th-gen “Meteor Lake” Core Ultra chips for laptops, and it needs them to do something.

AI has become synonymous with Bing Chat, Google Bard, Windows Copilot, and ChatGPT, all AI tools that live in the cloud. Intel’s new AI Acceleration Program, launching in anticipation of Meteor Lake’s official launch on Dec. 14, will try to convince consumers that AI should run locally on their PCs.

That may be a tough sell to consumers, who may not know or care where these functions are processed. Intel, though, cares desperately, and has tried to get that message across at its Intel Innovation conference, in earnings reports, and elsewhere. Intel is encouraging developers either to write natively for its AI engine, known as the NPU, or to use OpenVINO, the developer toolkit Intel authored and released as open source.
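To give a sense of what that looks like in practice, here is a minimal, hypothetical sketch of targeting the NPU from OpenVINO’s Python API. It assumes OpenVINO 2023.2 or newer (which exposes the “NPU” device on Core Ultra systems), a static-shaped ONNX model, and a placeholder file name of “model.onnx”; it is not an official Intel sample.

    # Minimal OpenVINO sketch: compile a model for the NPU if one is present.
    # Assumes OpenVINO 2023.2+ and a placeholder model file "model.onnx".
    import numpy as np
    import openvino as ov

    core = ov.Core()
    print(core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU'] on a Core Ultra laptop

    # Load the model and compile it for the NPU, falling back to the CPU otherwise.
    model = core.read_model("model.onnx")
    device = "NPU" if "NPU" in core.available_devices else "CPU"
    compiled = core.compile_model(model, device)

    # Run a single inference with a zero-filled tensor shaped like the first input
    # (assumes the model has a static input shape).
    input_shape = list(compiled.input(0).shape)
    request = compiled.create_infer_request()
    results = request.infer({0: np.zeros(input_shape, dtype=np.float32)})
    print(results[compiled.output(0)].shape)

Because the same OpenVINO code can dispatch to the CPU, GPU, or NPU by changing a device string, developers don’t have to write NPU-specific code to take advantage of the new silicon.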

Only a few of the developers, however, are calling out specific AI features. Deep Render, which uses AI to compress file sizes, claims that AI will allow its algorithm to compress video five times more than usual, according to a statement from Chris Besenbruch, the company’s co-founder and CEO. Topaz Labs, which uses AI to upscale photos, said it can use the NPU to accelerate its deep learning models.

XSplit, which makes the VCam app for removing and manipulating webcam backgrounds, also claimed that it could tap the NPU for greater performance. “By utilizing a larger AI model running on the Intel NPU, we are able to reduce background removal inaccuracies on live video by up to 30 percent, while at the same time significantly reducing the overall load on the CPU and GPU,” said Andreas Hoye, chief executive at XSplit.

Some developers may combine local processing with cloud-based AI. For example, Adobe’s Generative Fill uses the cloud to suggest new scenes based on text descriptions the user enters, but applying those scenes to an image is performed on the PC. Nevertheless, it’s in Intel’s best interests for you to start thinking of “Intel Inside” and “AI Inside” in the same sentence.