Invented by Scott Beith, Jonathan Kies, Robert Tartz, Ananthapadmanabhan Arasanipalai Kandhadai, Qualcomm Inc
The market for virtual content generation is expected to grow significantly in the coming years. According to a report by MarketsandMarkets, the global market for virtual reality content is expected to reach $35 billion by 2025. This growth is being driven by the increasing adoption of virtual reality technology in various industries, including gaming, healthcare, education, and entertainment.
One of the main drivers of the virtual content generation market is the gaming industry. Video games have always been at the forefront of technological innovation, and virtual reality is no exception. Many game developers are now creating virtual reality games that offer a more immersive and realistic experience for players. This has led to a growing demand for high-quality virtual content that can be used in these games.
Another industry that is driving the growth of the virtual content generation market is healthcare. Virtual reality technology is being used to create training simulations for medical professionals, allowing them to practice procedures and surgeries in a safe and controlled environment. This has the potential to improve patient outcomes and reduce medical errors.
The education industry is also beginning to adopt virtual reality technology. Virtual reality can be used to create immersive learning experiences that allow students to explore and interact with complex concepts in a more engaging way. This has the potential to improve learning outcomes and make education more accessible to students who may not have access to traditional resources.
The entertainment industry is also seeing the potential of virtual reality technology. Virtual reality experiences are being created for theme parks, museums, and other attractions, offering visitors a unique and immersive experience that they cannot get anywhere else.
Overall, the market for virtual content generation is expected to continue to grow in the coming years. As virtual reality technology becomes more widespread and accessible, the demand for high-quality virtual content will only increase. This presents a significant opportunity for content creators and developers to capitalize on this growing market and create innovative and engaging virtual experiences for a wide range of industries.
The Qualcomm Inc invention works as follows
Systems, devices (or apparatuses), methods, and computer-readable media are provided for generating virtual content. A device (e.g., an extended reality device) may obtain an image of a real-world environment and display virtual content on its display. The device can detect at least a part of the user's physical hand in the image, generate a virtual keyboard based on that detection, determine a position for the virtual keyboard on the display relative to the detected hand, and display the virtual keyboard at the determined position.
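The flow described above can be pictured with a short sketch. This is a minimal illustration only, assuming hypothetical device helpers (capture_image, detect_hand, project_to_display, draw_overlay) rather than any actual Qualcomm or XR-runtime API:

```python
from dataclasses import dataclass

# Hypothetical data types standing in for the device's tracking and rendering APIs.
@dataclass
class HandDetection:
    landmarks: list          # 2D/3D landmark points for the detected hand
    confidence: float

@dataclass
class VirtualKeyboard:
    layout: str              # e.g. "qwerty"
    anchor: tuple            # display-space position (x, y)

def render_frame(device):
    """One pass of the loop described above: capture, detect, place, draw."""
    image = device.capture_image()               # image of the real-world scene
    hand = device.detect_hand(image)             # at least a part of the user's hand
    if hand is None or hand.confidence < 0.5:
        return                                   # no hand detected, nothing to place
    keyboard = VirtualKeyboard(layout="qwerty", anchor=(0, 0))
    # Position the keyboard on the display relative to the detected hand part.
    keyboard.anchor = device.project_to_display(hand.landmarks[0])
    device.draw_overlay(keyboard)                # show the keyboard at that position
```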
Background for Virtual content generation
Extended reality technologies can present virtual content, or combine real-world and virtual environments, to give users extended reality experiences. Extended reality encompasses virtual reality, augmented reality, and mixed reality. These forms of extended reality allow users to interact with and experience immersive virtual content and environments. For example, an extended reality experience can allow a user to interact with a real or physical environment that has been enhanced or augmented with virtual content. Extended reality technologies are being used to improve user experiences across a variety of contexts, including entertainment, healthcare, retail, education, and social networking.
In some examples, systems, methods, and computer-readable media are described for generating an extended reality keyboard, also referred to as a virtual keyboard. An extended reality device worn by a user (e.g., an augmented reality headset, such as AR glasses or another head-mounted device) can detect the presence of one or both of the user's hands, for example by using a camera or other sensors of the device to detect the hands within the camera's field of view. In response to detecting one or more hands within the field of view, the extended reality device can display a virtual keyboard on its display. In some examples, the virtual keyboard may be displayed over images of real-world content (e.g., a scene of the real environment visible through the display of the extended reality device) or over virtual content. In some examples, the virtual keyboard is displayed on the display of the extended reality device (for example, the display may include the lenses of extended reality glasses), so the user can view and operate it while simultaneously viewing the real world through the display. From the user's perspective, the virtual keyboard appears to float in open space.
The extended reality device can register the virtual keyboard relative to one or both of the user's hands. One or more landmark points on the user's hands can be used to register the virtual keyboard in real time. In some implementations, the landmark points include at least three points per finger and at least one point on the palm of each hand. As used in this document, the term "finger" refers to any of the five digits of a hand, including the thumb.
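To make the landmark-based registration more concrete, here is a rough sketch of how a keyboard anchor transform might be derived from a sixteen-point hand (three points per finger plus one palm point). The landmark indexing, axis construction, and function name are assumptions for illustration, not taken from the patent:

```python
import numpy as np

# Assumed landmark layout: 3 points per finger (5 fingers) plus 1 palm point = 16 per hand.
FINGER_POINTS = 3
NUM_FINGERS = 5
PALM_INDEX = NUM_FINGERS * FINGER_POINTS  # index 15

def register_keyboard_to_hand(landmarks_3d: np.ndarray) -> np.ndarray:
    """Return a 4x4 transform anchoring the keyboard to the hand.

    landmarks_3d: (16, 3) array of hand landmark points in camera coordinates.
    """
    palm = landmarks_3d[PALM_INDEX]
    # Use the index- and pinky-finger base points to define the hand's lateral axis.
    index_base = landmarks_3d[0]
    pinky_base = landmarks_3d[(NUM_FINGERS - 1) * FINGER_POINTS]
    x_axis = pinky_base - index_base
    x_axis /= np.linalg.norm(x_axis)
    # Approximate the hand normal from the palm-to-fingers direction.
    forward = index_base - palm
    z_axis = np.cross(x_axis, forward)
    z_axis /= np.linalg.norm(z_axis)
    y_axis = np.cross(z_axis, x_axis)
    transform = np.eye(4)
    transform[:3, 0], transform[:3, 1], transform[:3, 2] = x_axis, y_axis, z_axis
    transform[:3, 3] = palm  # place the keyboard origin at the palm point
    return transform
```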
In some examples, once the virtual keyboard is placed at a particular position based on the registered hands, it can remain there until a re-registration event is detected. Re-registration events include, for example, a change in the location of one or both hands, a change in the movement of one or both hands, the expiration of a predetermined time period after the virtual keyboard is displayed, or any combination of these.
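A simple way to picture the re-registration check is a predicate evaluated each frame. The thresholds and function name below are illustrative assumptions, not values from the patent:

```python
import math
import time

def needs_reregistration(prev_pos, curr_pos, speed, shown_at,
                         move_threshold=0.05, speed_threshold=0.5, timeout_s=30.0):
    """Return True if any re-registration trigger described above fires.

    prev_pos / curr_pos: hand positions in metres; speed: hand speed in m/s;
    shown_at: timestamp when the keyboard was last registered.
    """
    moved = math.dist(prev_pos, curr_pos) > move_threshold   # hand location changed
    moving = speed > speed_threshold                         # hand movement changed
    expired = (time.time() - shown_at) > timeout_s           # display timer expired
    return moved or moving or expired
```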
In some cases, the virtual keyboard may be divided into a first part and a second part. The first part can be registered relative to the user's first hand and the second part relative to the user's second hand. For example, the first hand can be the user's left hand, with the first part including the left half of the keyboard (or any other portion of its left side), and the second hand can be the user's right hand, with the second part including the right half of the keyboard (or any other portion of its right side). In these examples, the first part of the virtual keyboard can track the user's left hand and the second part can track the user's right hand. As the user moves the first hand, the first part of the virtual keyboard moves on the display relative to that hand; likewise, the second part moves on the display relative to the second hand.
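A sketch of the split-keyboard behaviour is shown below. The display methods, the `palm` attribute, and the `project` helper are assumed names used only for illustration:

```python
def place_split_keyboard(display, left_hand, right_hand, project):
    """Anchor each half of the keyboard to its own hand, as described above.

    left_hand / right_hand: hand landmark sets (or None if not detected);
    project: maps a 3D hand point to display coordinates. Both are assumed helpers.
    """
    if left_hand is not None:
        left_anchor = project(left_hand.palm)         # left half follows the left hand
        display.draw_keyboard_half("left", left_anchor)
    if right_hand is not None:
        right_anchor = project(right_hand.palm)       # right half follows the right hand
        display.draw_keyboard_half("right", right_anchor)
```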
In one illustrative example, a method of generating virtual content is provided. The method includes: obtaining, by an extended reality device, an image of a scene in a real-world environment, the real-world environment being viewable through a display of the extended reality device as virtual content is displayed on the display; detecting, by the extended reality device, at least a part of a physical hand of a user in the image; generating a virtual keyboard based on detecting the at least a part of the physical hand; determining a position for the virtual keyboard on the display relative to the at least a part of the physical hand; and displaying the virtual keyboard at the determined position on the display.
In another example, an apparatus for generating virtual content is provided, which includes one or more processors. The one or more processors are configured to: obtain an image of a scene in a real-world environment, the real-world environment being viewable through a display of the extended reality device as virtual content is displayed on the display; detect at least a part of a physical hand of a user in the image; generate a virtual keyboard based on detecting the at least a part of the physical hand; determine a position for the virtual keyboard on the display, the position determined relative to the at least a part of the physical hand; and display the virtual keyboard at the determined position on the display.
In another example, a non-transitory computer-readable medium is provided that has instructions stored thereon which, when executed, cause one or more processors to: obtain an image of a scene in a real-world environment, the real-world environment being viewable through a display of the extended reality device as virtual content is displayed on the display; detect at least a part of a physical hand of a user in the image; generate a virtual keyboard based on detecting the at least a part of the physical hand; determine a position for the virtual keyboard on the display, the position determined relative to the at least a part of the physical hand; and display the virtual keyboard at the determined position on the display.
In another example, an apparatus for generating virtual content is provided. The apparatus includes: means for obtaining an image of a scene in a real-world environment, the real-world environment being viewable through a display as virtual content is displayed on the display; means for detecting at least a part of a physical hand of a user in the image; means for generating a virtual keyboard based on detecting the at least a part of the physical hand; means for determining a position for the virtual keyboard on the display, the position determined relative to the at least a part of the physical hand; and means for displaying the virtual keyboard at the determined position over the real-world environment viewable through the display.
In some aspects, the methods, apparatuses, and computer-readable media described above further include: detecting one or more landmark points on the at least a part of the physical hand; determining a location of the one or more landmark points with respect to a camera used to capture the image; and determining the position of the virtual keyboard relative to the at least a part of the physical hand based on the location of the one or more landmark points with respect to the camera.
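One plausible way to go from a landmark's camera-relative location to a display position is a standard pinhole projection with a fixed pixel offset. The intrinsics matrix, offset, and function name are assumptions for this sketch:

```python
import numpy as np

def keyboard_position_on_display(landmark_cam: np.ndarray, intrinsics: np.ndarray,
                                 offset_px=(0.0, -80.0)):
    """Project a hand landmark (camera coordinates, metres) to display pixels and
    offset the keyboard relative to it.

    intrinsics: 3x3 camera matrix with focal lengths and principal point.
    """
    x, y, z = landmark_cam
    u = intrinsics[0, 0] * x / z + intrinsics[0, 2]   # pinhole projection, u axis
    v = intrinsics[1, 1] * y / z + intrinsics[1, 2]   # pinhole projection, v axis
    return (u + offset_px[0], v + offset_px[1])       # keyboard drawn near the hand
```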
In some aspects, the methods, apparatuses, and computer-readable media described above further include: determining a pose of the user's head; and determining, based on the pose of the head, the position of the virtual keyboard on the display relative to the head.
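For the head-pose aspect, one simple option is to offset the keyboard a fixed distance in front of and below the tracked head. The distances and the availability of a 4x4 head-to-world transform are assumptions here:

```python
import numpy as np

def keyboard_pose_from_head(head_pose: np.ndarray, forward_m=0.4, down_m=0.15):
    """Place the keyboard a fixed distance in front of and below the user's head.

    head_pose: 4x4 head-to-world transform from the device's tracking stack
    (assumed available). Offsets are illustrative.
    """
    offset = np.array([0.0, -down_m, -forward_m, 1.0])   # in the head's local frame
    keyboard_position_world = head_pose @ offset
    keyboard_pose = head_pose.copy()                     # keep the head's orientation
    keyboard_pose[:3, 3] = keyboard_position_world[:3]
    return keyboard_pose
```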
In some aspects, the virtual keyboard is fixed at the determined position on the display while the physical hand changes position.
In some aspects, the methods, apparatuses, and computer-readable media described above further include: receiving input associated with operation of the virtual keyboard; and maintaining the virtual keyboard at the determined position while the virtual keyboard is operated based on the received input.
In some aspects, the methods, apparatuses, and computer-readable media described above further include: determining that the at least a part of the physical hand has moved to a different position in an additional image of the scene compared with its position in the original image; and displaying the virtual keyboard at an additional position on the display based on determining that the at least a part of the physical hand has moved.
In some aspects, the methods, apparatuses, and computer-readable media described above further include: detecting the expiration of a time period after determining the position of the virtual keyboard on the display; and displaying the virtual keyboard at a position different from the originally determined position in response to detecting the expiration of the time period.
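The placement aspects above (keyboard pinned while input is received, moved when the hand moves, and re-placed when a timer expires) can be pictured as a small state handler. The class name, thresholds, and update policy are assumptions for illustration:

```python
import math
import time

class VirtualKeyboardPlacer:
    """Illustrative placement state: the keyboard stays fixed while input is being
    received, follows the hand when it moves between frames, and is re-placed when
    a display timer expires."""

    def __init__(self, position, move_threshold=40.0, timeout_s=30.0):
        self.position = position            # display position (pixels)
        self.placed_at = time.time()
        self.receiving_input = False
        self.move_threshold = move_threshold
        self.timeout_s = timeout_s

    def on_input(self):
        # Input related to operating the keyboard keeps it pinned in place.
        self.receiving_input = True

    def update(self, hand_position):
        if self.receiving_input:
            return self.position            # fixed while the keyboard is operated
        if math.dist(hand_position, self.position) > self.move_threshold:
            self.position = hand_position   # hand moved: show keyboard at new position
            self.placed_at = time.time()
        elif time.time() - self.placed_at > self.timeout_s:
            self.position = hand_position   # timer expired: re-place the keyboard
            self.placed_at = time.time()
        return self.position
```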
In some aspects, the at least a part of the physical hand includes a point on the hand or a point on a finger.
In some aspects, the at least a part of the physical hand includes at least one finger and at least one point on each finger.
In some aspects, the virtual keyboard includes a first part and a second part. The first part is displayed on the display at a position relative to the user's physical hand, and the second part is displayed at a position relative to an additional physical hand of the user. In some examples, the first part moves on the display relative to the physical hand, and the second part moves on the display relative to the additional physical hand.
In some aspects, the methods, apparatuses, and computer-readable media described above further include: determining that the physical hand is not present in an additional image of the scene; and removing the virtual keyboard from the display based on determining that the physical hand is not present in the additional image.
In some aspects, the virtual keyboard can still be used to provide input even after it is removed from the display.
In some aspects, the methods, apparatuses, and computer-readable media described above further include deactivating the virtual keyboard so that it can no longer be used to provide input.
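The removal and deactivation aspects above can be sketched as a per-frame visibility update. The device methods and the `deactivate_on_removal` flag are assumed names, not part of the patent:

```python
def update_keyboard_visibility(device, hand_detected, keyboard):
    """Remove the keyboard when the hand is no longer detected in the new image;
    depending on configuration, keep it active for input or deactivate it."""
    if hand_detected:
        device.draw_overlay(keyboard)
        return
    device.remove_overlay(keyboard)            # hand not in the additional image
    if keyboard.deactivate_on_removal:         # assumed configuration flag
        keyboard.active = False                # no longer usable for input
    # Otherwise the keyboard remains active and can still accept input.
```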