Microsoft began shipping HoloLens 2 back in 2019, with a focus on mixed reality applications. Using the HoloLens tooling, developers can overlay 3D digital objects onto the real world. Its sensor suite makes it an immensely capable device: onboard computer vision hardware fuses the sensor streams to position the wearer within a room, blending virtual and physical environments.
A device like HoloLens 2 appeals well beyond its initially targeted users. Mixed reality is an effective tool in many markets and environments, and the underlying hardware can support far more than simply combining the digital and the physical.
HoloLens and the Holographic Processing Unit (HPU)
The HoloLens silicon handles multiple data streams and parallel image processing. The device does this without resorting to batch processing, since it is built for continuous workloads. The Holographic Processing Unit is a custom ASIC that combines digital signal processing modules with dedicated hardware for the computationally intensive work of depth processing and hologram stabilization while complex images are rendered. A DNN core on the chip lets Microsoft run computer vision algorithms without introducing latency and lag.
The HoloLens 2 is a powerful device, but Microsoft has simplified most of the development experience, consolidating the sensor data behind a set of tightly defined APIs and a mixed reality toolkit. That suits most uses well: you rarely need low-level access to the sensors, only the data that lets you build your applications.
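As a rough illustration of how high that abstraction sits, the head pose that drives most applications is available through the Windows Perception APIs without touching any raw sensor. The following is a minimal C++/WinRT sketch, assuming the standard Windows.Perception.Spatial types; in a real app the timestamp would come from the holographic frame prediction rather than the wall clock.

```cpp
#include <winrt/Windows.Foundation.Numerics.h>
#include <winrt/Windows.Perception.h>
#include <winrt/Windows.Perception.Spatial.h>

using namespace winrt::Windows::Perception;
using namespace winrt::Windows::Perception::Spatial;

int main()
{
    winrt::init_apartment();

    // The default locator tracks the headset itself; no raw sensor access is needed.
    SpatialLocator locator = SpatialLocator::GetDefault();

    // Anchor a stationary frame of reference at the device's current position.
    SpatialStationaryFrameOfReference frame =
        locator.CreateStationaryFrameOfReferenceAtCurrentLocation();

    // Ask where the head is "now" (a real app would use the frame prediction timestamp).
    PerceptionTimestamp timestamp =
        PerceptionTimestampHelper::FromHistoricalTargetTime(winrt::clock::now());
    SpatialLocation location =
        locator.TryLocateAtTimestamp(timestamp, frame.CoordinateSystem());

    if (location)
    {
        auto position = location.Position();       // float3, metres
        auto orientation = location.Orientation(); // quaternion
    }
    return 0;
}
```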
Microsoft has often compared software design to delivering pizza for a large group of people: not everyone gets their preferred toppings, but everyone gets melted cheese and tomato sauce. The best part is that you can generally take those pizzas and customize them, adding the software you need on top.
Utilizing HoloLens in Research
Computer vision and mixed reality are powerful tools in scientific research, an area where access to all of the device's sensors matters. Bringing both together in a portable, head-mounted computer is what makes HoloLens 2 exciting, especially since it carries multiple cameras, depth sensors, gyroscopes, magnetometers, and accelerometers. It is no surprise, then, that Microsoft is exposing much of this hardware through HoloLens 2's new Research Mode.
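For work that does need the raw streams, Research Mode exposes each sensor through a native driver interface. The sketch below is based on the ResearchModeApi header Microsoft publishes with its HoloLens2ForCV samples; it opens one of the visible-light tracking cameras and pulls a few frames. Treat the exact type and function names as assumptions drawn from those samples.

```cpp
#include <windows.h>
#include "ResearchModeApi.h"  // header shipped with the HoloLens2ForCV samples

// The Research Mode driver is loaded dynamically and exposes a single factory function.
typedef HRESULT (__cdecl* PFN_CREATEPROVIDER)(IResearchModeSensorDevice** ppDevice);

int main()
{
    HMODULE hModule = LoadLibraryA("ResearchModeAPI");
    auto pfnCreate = reinterpret_cast<PFN_CREATEPROVIDER>(
        GetProcAddress(hModule, "CreateResearchModeSensorDevice"));

    IResearchModeSensorDevice* pDevice = nullptr;
    pfnCreate(&pDevice);

    // Request one of the visible-light tracking cameras.
    IResearchModeSensor* pSensor = nullptr;
    pDevice->GetSensor(LEFT_FRONT, &pSensor);

    // Stream a handful of frames; each carries a device timestamp.
    pSensor->OpenStream();
    for (int i = 0; i < 10; ++i)
    {
        IResearchModeSensorFrame* pFrame = nullptr;
        pSensor->GetNextBuffer(&pFrame);   // blocks until a new frame is available
        ResearchModeSensorTimestamp timestamp{};
        pFrame->GetTimeStamp(&timestamp);
        pFrame->Release();
    }
    pSensor->CloseStream();

    pSensor->Release();
    pDevice->Release();
    return 0;
}
```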
The device can serve as a head- and eye-tracking platform for human-interaction research, for example tracking every head motion and eye movement a pilot makes in a modern cockpit. That data helps researchers understand cognitive load and redesign the environment to keep passengers safe.
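At the application level, the head and eye signals such a study would log are exposed together through the spatial input APIs. Below is a minimal C++/WinRT sketch, assuming eye-tracking permission has already been granted via EyesPose::RequestAccessAsync() and that the coordinate system and timestamp are supplied by the app's current holographic frame.

```cpp
#include <winrt/Windows.Perception.h>
#include <winrt/Windows.Perception.People.h>
#include <winrt/Windows.Perception.Spatial.h>
#include <winrt/Windows.UI.Input.Spatial.h>

using namespace winrt::Windows::Perception::People;
using namespace winrt::Windows::Perception::Spatial;
using namespace winrt::Windows::UI::Input::Spatial;

// Log one head-and-eye sample; coordinateSystem and timestamp come from the
// current holographic frame (assumed to be supplied by the caller).
void LogGazeSample(SpatialCoordinateSystem const& coordinateSystem,
                   winrt::Windows::Perception::PerceptionTimestamp const& timestamp)
{
    SpatialPointerPose pose =
        SpatialPointerPose::TryGetAtTimestamp(coordinateSystem, timestamp);
    if (!pose)
    {
        return;
    }

    // Head pose: position and forward direction of the headset.
    HeadPose head = pose.Head();
    auto headPosition = head.Position();
    auto headForward  = head.ForwardDirection();

    // Eye gaze: a ray (origin + direction), valid only when eye tracking is calibrated.
    EyesPose eyes = pose.Eyes();
    if (eyes && eyes.IsCalibrationValid() && eyes.Gaze())
    {
        SpatialRay gaze = eyes.Gaze().Value();
        auto gazeOrigin    = gaze.Origin;
        auto gazeDirection = gaze.Direction;
        // A study would append {timestamp, headPosition, headForward,
        // gazeOrigin, gazeDirection} to its log here.
    }
}
```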
Accessing the Device’s Data Streams
The device's sensors do more than standard head tracking: its cameras map the environment and track the wearer's hands, giving researchers a clearer view of the user's surroundings. The tracking cameras also offer better light sensitivity, making it practical to capture data in low-light areas.
By providing access to every camera and sensor, Research Mode can build a model of the user's environment that can be replayed later if needed.
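A simple way to get from live streams to a replayable record is to persist each frame with its timestamp. The sketch below extends the Research Mode pattern shown earlier to the long-throw depth sensor and appends raw depth buffers to a file; as before, the interface and method names are taken from the HoloLens2ForCV sample header and should be treated as assumptions.

```cpp
#include <windows.h>
#include <cstdio>
#include "ResearchModeApi.h"  // header shipped with the HoloLens2ForCV samples

// Append timestamped long-throw depth frames to a binary log for later replay.
void RecordDepth(IResearchModeSensorDevice* pDevice, int frameCount, const char* path)
{
    IResearchModeSensor* pSensor = nullptr;
    pDevice->GetSensor(DEPTH_LONG_THROW, &pSensor);

    FILE* log = std::fopen(path, "wb");
    pSensor->OpenStream();

    for (int i = 0; i < frameCount; ++i)
    {
        IResearchModeSensorFrame* pFrame = nullptr;
        pSensor->GetNextBuffer(&pFrame);

        ResearchModeSensorTimestamp timestamp{};
        pFrame->GetTimeStamp(&timestamp);

        // The depth-specific interface exposes the raw 16-bit depth buffer.
        IResearchModeSensorDepthFrame* pDepthFrame = nullptr;
        pFrame->QueryInterface(IID_PPV_ARGS(&pDepthFrame));

        const UINT16* pDepth = nullptr;
        size_t depthCount = 0;
        pDepthFrame->GetBuffer(&pDepth, &depthCount);

        // Write timestamp + buffer so the session can be replayed offline.
        std::fwrite(&timestamp, sizeof(timestamp), 1, log);
        std::fwrite(&depthCount, sizeof(depthCount), 1, log);
        std::fwrite(pDepth, sizeof(UINT16), depthCount, log);

        pDepthFrame->Release();
        pFrame->Release();
    }

    pSensor->CloseStream();
    pSensor->Release();
    std::fclose(log);
}
```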