Qualcomm today unveiled its newest chipsets for extended reality (XR) and augmented reality (AR) devices: the Snapdragon XR2 Gen 2 for mixed reality (MR) and virtual reality (VR) headsets, and the AR1 Gen 1, built specifically for smart glasses. The news arrives just ahead of the launch of Meta's Quest 3 VR headset.
It has been a while since Qualcomm introduced a new XR platform. Although there was some expectation of an XR3 announcement today, the XR2 Gen 1 is now more than three years old, and the company chose to stick with the XR2 name. Evidently, Qualcomm believes there will eventually be additional tiers under the XR[x] moniker, with the XR1 serving as the quality tier and the XR2 as the premium tier.
The company claims a 2.5x improvement in GPU performance and an 8x improvement in AI performance, while being substantially more power efficient. According to Qualcomm, the XR2 Gen 2 is designed to support up to 10 cameras and sensors and two 3K displays, with a pass-through video latency of 12 ms for mixed reality applications.
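To put the 12 ms figure in perspective, here is a rough back-of-the-envelope comparison against per-frame budgets at common headset refresh rates. The refresh rates are assumptions chosen for illustration, not figures from Qualcomm's announcement.

```python
# Compare the quoted 12 ms pass-through latency to per-frame time budgets.
# The refresh rates below are assumed, typical headset values, not Qualcomm specs.
PASS_THROUGH_LATENCY_MS = 12.0

for refresh_hz in (72, 90, 120):
    frame_budget_ms = 1000.0 / refresh_hz  # time available to render one frame
    frames_of_delay = PASS_THROUGH_LATENCY_MS / frame_budget_ms
    print(f"{refresh_hz:>3} Hz -> {frame_budget_ms:5.2f} ms per frame; "
          f"12 ms pass-through is roughly {frames_of_delay:.1f} frames of delay")
```

In other words, at these assumed refresh rates the quoted pass-through latency amounts to roughly one frame of delay between the real world and what the wearer sees.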
Ahead of the announcement, Hugo Swart, vice president and general manager of Qualcomm's XR division, addressed a question about the release cycle. He said both technical and business factors affect how quickly the company iterates. Power, latency, size, and performance are the main constraints, with performance closely tied to display resolution: as resolution grows, so does the power needed to run the headset. Swart noted that he cannot deliver 100 watts to someone's head; 20 watts is the absolute ceiling, and 10 to 15 watts is the ideal range for these devices. The fundamental question, he said, is how to fit everything into a single, reasonably priced piece of silicon.
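Swart's power argument is easy to sanity-check with simple arithmetic. The sketch below assumes "3K" means roughly 3000 x 3000 pixels per eye and a 90 Hz refresh rate (neither figure is specified in the announcement) and spreads the 10 to 15 watt budget he describes across the resulting pixel throughput.

```python
# Illustrative only: how display resolution eats into a fixed power budget.
# "3K" is taken here as ~3000 x 3000 pixels per eye and 90 Hz is an assumed
# refresh rate; neither number comes from Qualcomm's announcement.
PER_EYE_PIXELS = 3000 * 3000
EYES = 2
REFRESH_HZ = 90
POWER_BUDGET_W = 12.5  # midpoint of Swart's 10-15 W "ideal range"

pixels_per_second = PER_EYE_PIXELS * EYES * REFRESH_HZ
joules_per_pixel = POWER_BUDGET_W / pixels_per_second  # watts = joules/second

print(f"{pixels_per_second / 1e9:.2f} Gpixel/s to drive")
print(f"~{joules_per_pixel * 1e9:.2f} nJ available per pixel for the whole pipeline")
```

Under these assumptions the headset has only a few nanojoules of energy per pixel for rendering, tracking, and everything else, which is why resolution and power budget pull against each other so hard.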
Swart also took the opportunity to poke fun at Apple, its pricey Vision Pro, and its custom-built hardware, asking how many people can actually afford a device that costs more than $3,000. In his view, the technology has to be accessible enough that everyone can afford to enjoy it.
Qualcomm says its XR processors now power roughly 80 devices, spanning VR to mixed reality. The platform has gained considerable momentum in the consumer market thanks to gaming, fitness, social interaction, entertainment, and live events, but Swart stressed that it has also found traction in the enterprise through training, education, medical care, and other uses.
Those who follow Qualcomm product launches closely may recall that the company officially introduced the AR2 Gen 1 system-on-a-chip at its Snapdragon Summit last year, which brings us to the AR1 Gen 1. Despite Qualcomm's confusing naming choices, the AR1 chips are designed for standalone smart glasses closer in spirit to Google Glass, with cameras for photo and video capture and one or two displays. The AR2 series, by contrast, uses a multi-chip design and is intended for immersive augmented reality glasses more akin to the HoloLens, with support for six degrees of freedom and high-resolution displays.
Despite the efforts of several vendors and big corporations, smart glasses with displays have remained a niche product. With a more robust image-processing pipeline and what Qualcomm calls "on-glass AI" for features such as voice commands and noise suppression, the newly launched platform seems poised to breathe fresh life into this category, judging by the company's optimistic outlook. The AR1 Gen 1 supports displays with a resolution of 1280 x 1280 per eye and three degrees of freedom.
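For readers unfamiliar with the degrees-of-freedom jargon used above, here is a minimal, illustrative sketch of the distinction; the class names and fields are hypothetical and not part of either Qualcomm platform's API.

```python
# Minimal sketch of the 3DoF vs. 6DoF distinction (illustrative only).
# A 3DoF pose tracks head orientation; 6DoF adds position, which room-scale
# AR/VR experiences like those targeted by the AR2 series require.
from dataclasses import dataclass

@dataclass
class Pose3DoF:
    yaw: float    # rotation around the vertical axis, in degrees
    pitch: float  # looking up or down
    roll: float   # tilting the head side to side

@dataclass
class Pose6DoF(Pose3DoF):
    x: float      # translation in metres, added on top of orientation
    y: float
    z: float

glance = Pose3DoF(yaw=15.0, pitch=-5.0, roll=0.0)            # enough for a fixed overlay
step = Pose6DoF(yaw=15.0, pitch=-5.0, roll=0.0,
                x=0.3, y=0.0, z=1.2)                          # needed to walk around content
print(glance)
print(step)
```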
Ray-Ban's new Stories smart glasses will run on the AR1 Gen 1 platform, but like most smart glasses on the market right now, they won't feature displays, even though the AR1 supports them. While adding a camera to glasses significantly boosts the usefulness of these platforms, head-up displays remain a considerably harder problem to solve.
Qualcomm said the glasses can serve as a camera, a sharing platform, and a live-streaming device all in one. On-device AI also enables personal assistant features such as voice quality enhancement, visual search, and real-time translation. Finally, support for a visual heads-up display could eventually let users consume content, including video, that blends smoothly into their field of vision.