Invention for Virtual Object Positioning for Augmented Reality

Invented by Szymon Piotr Stachniak, Hendrik Mark Langerak, Michelle BROOK, Microsoft Technology Licensing LLC

The market for Virtual Object Positioning for Augmented Reality (AR) is experiencing rapid growth and is poised to revolutionize various industries. Augmented Reality, the technology that overlays digital information onto the real world, has gained significant popularity in recent years. As AR advances, accurate and reliable virtual object positioning has become crucial for delivering immersive and seamless user experiences. Virtual object positioning refers to the ability to accurately place digital objects in the real world using AR technology, enabling users to interact with virtual objects as if they were physically present in their environment. From gaming and entertainment to retail and education, the applications for virtual object positioning in AR are vast and diverse.

One of the key drivers of this market is the increasing demand for immersive gaming experiences. Gaming has always been at the forefront of technological advancement, and AR is no exception. With virtual object positioning, gamers can engage in interactive, realistic gameplay in which virtual objects seamlessly blend with the real world, offering a whole new level of immersion and excitement.

Retail is another industry that stands to benefit greatly. With the rise of e-commerce, brick-and-mortar stores are constantly looking for innovative ways to attract customers and enhance the shopping experience. By combining AR with virtual object positioning, retailers can let customers visualize products in their own space before making a purchase: customers can virtually place furniture in their living room, for example, or try on clothes without physically doing so. This not only enhances the shopping experience but also reduces the likelihood of returns, ultimately benefiting both retailers and customers.

Education can likewise benefit from virtual object positioning in AR. By bringing virtual objects into the real world, educators can create interactive and engaging learning experiences. Students studying biology, for example, can virtually dissect a frog or explore human anatomy in a more immersive, hands-on way. This technology has the potential to transform traditional teaching methods and make learning more enjoyable and effective.

The market for virtual object positioning in AR is expected to see significant growth in the coming years. According to a report by MarketsandMarkets, the global AR market is projected to reach $77.0 billion by 2025, with virtual object positioning playing a crucial role in driving that growth. The increasing adoption of smartphones and the development of advanced AR software and hardware are among the factors contributing to this growth.

However, challenges remain before virtual object positioning in AR can be widely adopted. Chief among them is achieving accurate, reliable positioning across varied environments and lighting conditions, which requires advanced computer vision algorithms and robust tracking systems. Privacy and security concerns must also be addressed to ensure the safe and responsible use of AR technology.

In conclusion, the market for virtual object positioning in AR is growing rapidly and has the potential to revolutionize industries from gaming and retail to education. As the technology matures and these challenges are addressed, virtual object positioning in AR will become an integral part of our daily lives, enhancing our experiences and transforming the way we interact with the digital world.

The Microsoft Technology Licensing LLC invention works as follows

An augmented-reality device comprises a logic device and a storage device holding instructions executable by the logic device to fit a two-dimensional virtual plane to a real-world surface represented in a three-dimensional representation of a real-world environment of the augmented-reality device. A request is received to place a three-dimensional virtual object on the real-world surface. Each of a plurality of possible placement locations is evaluated to determine whether it is a valid or an invalid location, and an invalidation mask that defines valid and invalid locations on the two-dimensional virtual plane is generated.

Background for Virtual Object Positioning for Augmented Reality

Virtual objects can be displayed via portable or stationary display devices, including head-mounted displays (HMDs). These devices can be used for augmented reality (AR), virtual reality (VR), and other experiences. The virtual imagery can be rotated, resized, and/or moved based on the user's input.

This Summary is provided to introduce a selection of concepts that are further described in the detailed description. It is not intended to identify key features or essential elements of the claimed subject matter, nor to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all of the disadvantages noted in this disclosure.

An augmented-reality device comprises a logic device and a storage device holding instructions executable by the logic device to fit a two-dimensional virtual plane to a real-world surface represented in a three-dimensional representation of a real-world environment of the augmented-reality device. A request is received to place a three-dimensional virtual object on the real-world surface. Each of a plurality of possible placement locations is evaluated to determine whether it is a valid or an invalid location, and an invalidation mask that defines valid and invalid locations on the two-dimensional virtual plane is generated.
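As a rough illustration of the plane-fitting and invalidation-mask ideas described above, the sketch below fits a least-squares plane to points sampled from a real-world surface, then marks each cell of a grid over the plane as valid only if a square object footprint placed there rests entirely on supported surface. The function names, the boolean-grid representation, and the square-footprint model are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns (centroid, unit normal).
    `points` is an (N, 3) array sampled from a real-world surface."""
    centroid = points.mean(axis=0)
    # The singular vector for the smallest singular value of the
    # centered points is the direction of least variance: the normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def invalidation_mask(grid_valid, footprint):
    """Mark a grid cell as a valid placement location only if the
    object's full footprint (footprint x footprint cells) fits on
    supported surface cells; everything else is invalid."""
    h, w = grid_valid.shape
    mask = np.zeros_like(grid_valid)
    for r in range(h - footprint + 1):
        for c in range(w - footprint + 1):
            if grid_valid[r:r + footprint, c:c + footprint].all():
                mask[r, c] = True
    return mask

# Hypothetical example: a 5x5 plane grid with one unsupported cell.
grid = np.ones((5, 5), dtype=bool)
grid[2, 2] = False  # a hole in the real-world surface
mask = invalidation_mask(grid, footprint=2)
```

In this toy version, requesting placement of an object simply means looking up its anchor cell in `mask`; a real system would evaluate candidate locations against the fitted plane in continuous coordinates.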

The augmented reality computing system 200 may include sensors and other systems that provide information to an on-board computer 204. These sensors include, but are not limited to, one or more inward-facing image sensors 210A, 210B, one or more outward-facing image sensors 212A, 212B, an inertial measurement unit (IMU) 214, and one or more microphones 216. The one or more inward-facing image sensors 210A and 210B can be configured to obtain gaze tracking data from the wearer's eyes (e.g., sensor 210A may acquire image data of one eye, while sensor 210B acquires image data of the other eye).

The on-board computer 204 may be configured in a suitable way to determine the gaze direction of each of the wearer's eyes based on information received from the image sensors 210A and 210B. The on-board computer 204 and the one or more inward-facing image sensors 210A, 210B may collectively serve as a gaze detection device configured to determine the wearer's gaze target on the near-eye display 202. In some implementations, a different type of gaze sensor or detector may be used to measure one or more gaze parameters. The one or more gaze sensors may measure a variety of gaze parameters usable by the on-board computer 204 to determine eye gaze samples, including eye gaze direction, head orientation, eye gaze velocity, eye gaze acceleration, changes in eye gaze angle, and so on. In some implementations, eye gaze tracking may be recorded independently for each eye.
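Two of the gaze parameters mentioned above, gaze direction and gaze velocity, can be illustrated with a toy sketch. Averaging the two per-eye direction vectors and taking a finite difference of gaze angles are deliberate simplifications assumed here for illustration, not the actual gaze pipeline of the described device.

```python
import numpy as np

def combined_gaze_direction(left_dir, right_dir):
    """Combine the two per-eye gaze directions into a single unit
    vector by averaging (a common binocular simplification)."""
    d = np.asarray(left_dir, float) + np.asarray(right_dir, float)
    return d / np.linalg.norm(d)

def gaze_velocity(angles_deg, timestamps_s):
    """Angular gaze velocity (deg/s) from consecutive gaze-angle
    samples via finite differences."""
    return np.diff(angles_deg) / np.diff(timestamps_s)
```

A velocity threshold over such samples is one simple way a system could, for example, distinguish fixations from saccades.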

The one or more outward-facing image sensors 212A and 212B can be configured to measure attributes of the physical environment. Image sensor 212A, for example, may be a visible-light camera configured to capture a visible-light image of the physical space; the augmented reality computing system may also include a pair of stereoscopic visible-light cameras. Image sensor 212B may include a depth camera that collects a depth image of the physical space. In one example, the depth camera is an infrared time-of-flight depth camera; in another example, it is an infrared structured-light depth camera.
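A depth image like the one sensor 212B collects is commonly back-projected into camera-space 3D points using a standard pinhole camera model, yielding the kind of three-dimensional representation of the environment the patent describes. The sketch below shows that textbook back-projection; the function name and the intrinsic parameters (fx, fy, cx, cy) are assumptions for illustration, not values from the source.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into camera-space 3D
    points with a pinhole model. fx, fy are focal lengths in pixels;
    cx, cy is the principal point. Returns an (h, w, 3) array."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx  # horizontal offset scaled by depth
    y = (v - cy) * z / fy  # vertical offset scaled by depth
    return np.stack([x, y, z], axis=-1)
```

Surfaces such as tables or floors would then be found by fitting planes to subsets of these points.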

Data from the outward-facing image sensors 212A and 212B can be used by the on-board computer 204 to detect movements, such as gesture-based inputs or other movements performed by the wearer or by another person or object in the physical space. In one example, data from the outward-facing image sensors 212A and 212B can be used to detect wearer inputs such as gestures. The on-board computer 204 may use data from the outward-facing image sensors 212A and 212B to determine direction/location and orientation data (e.g., from imaging environmental features), which enables position/motion tracking of the augmented reality computing system 200 within the real-world environment. In some implementations, data from the outward-facing image sensors 212A and 212B can be used by the on-board computer 204 to construct still images and/or video images of the surrounding environment from the perspective of the augmented reality computing system 200.
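Position/motion tracking from imaged environmental features is often reduced to estimating the rigid transform between matched 3D feature points seen in consecutive frames. The sketch below uses the well-known Kabsch algorithm for that alignment; the function and its inputs are illustrative assumptions, not the tracking system of the described device.

```python
import numpy as np

def estimate_rigid_motion(prev_pts, curr_pts):
    """Kabsch alignment: find rotation R and translation t such that
    curr_pts ~= R @ prev_pts + t, from matched (N, 3) point sets."""
    p0, p1 = prev_pts.mean(axis=0), curr_pts.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (prev_pts - p0).T @ (curr_pts - p1)
    u, _, vt = np.linalg.svd(H)
    # Sign correction guards against a reflection solution.
    d = np.sign(np.linalg.det(vt.T @ u.T))
    R = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = p1 - R @ p0
    return R, t
```

Accumulating these frame-to-frame transforms (typically fused with IMU data such as from IMU 214) gives the device's pose within the real-world environment.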
