Invention for Systems and Methods for Generating Visibility Counts per Pixel of a Texture Atlas associated with Viewer Telemetry Data

Invented by Adam G. Kirk, Oliver A. Whyte, Amit Mital, Omnivor Inc

The market for systems and methods for generating visibility counts per pixel of a texture atlas associated with viewer telemetry data is emerging and growing rapidly in the technology industry. This technology has significant applications in various sectors, including gaming, virtual reality (VR), augmented reality (AR), and computer vision.

Texture atlases are widely used in computer graphics to efficiently store and render textures on 3D models. They consist of a collection of smaller textures, or “subtextures,” arranged in a grid-like pattern. However, determining which subtextures are visible to the viewer at any given time is a complex task, especially in dynamic and interactive environments.
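As a concrete illustration (ours, not from the source), a renderer locates a subtexture inside a grid-style atlas by remapping the model's subtexture-local UV coordinates into the tile's rectangle. The function and parameter names below are hypothetical:

```python
def atlas_uv(local_u: float, local_v: float, tile_x: int, tile_y: int,
             grid_cols: int, grid_rows: int) -> tuple:
    """Remap a subtexture-local UV coordinate (in [0, 1]) into the UV space
    of a texture atlas laid out as a grid_cols x grid_rows grid of tiles."""
    u = (tile_x + local_u) / grid_cols
    v = (tile_y + local_v) / grid_rows
    return u, v

# Example: the center of tile (2, 1) in a 4x4 atlas.
print(atlas_uv(0.5, 0.5, 2, 1, 4, 4))  # -> (0.625, 0.375)
```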

Viewer telemetry data refers to the information collected from the viewer’s perspective, such as their position, orientation, and field of view. By combining this data with advanced algorithms and techniques, systems and methods can accurately determine the visibility of each pixel on the texture atlas. This information is crucial for optimizing rendering processes, reducing computational overhead, and enhancing the overall visual experience.
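A minimal sketch of what one telemetry sample might look like, assuming the position/orientation/field-of-view breakdown described above (the field names are ours, not the patent's):

```python
from dataclasses import dataclass

@dataclass
class ViewerTelemetrySample:
    """One viewer telemetry sample: where the viewer is, which way they are
    looking, and how wide their view is, at a given time in the video."""
    position: tuple        # (x, y, z) viewer position in world space
    orientation: tuple     # (x, y, z) axis-angle viewing orientation
    field_of_view: float   # horizontal field of view, in degrees
    timestamp: float       # time in the volumetric video, in seconds
```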

One of the primary applications of this technology is in the gaming industry. Game developers strive to create immersive and realistic virtual worlds, and efficient rendering is essential for achieving this goal. By accurately determining the visibility of each pixel, developers can allocate computational resources more efficiently, resulting in smoother gameplay, reduced loading times, and improved graphics quality.

Virtual reality and augmented reality are also benefiting from systems and methods for generating visibility counts per pixel of a texture atlas. In VR, where the user’s entire field of view is covered by a display, optimizing rendering is crucial for maintaining a high frame rate and reducing motion sickness. By accurately determining which pixels are visible to the user, developers can focus rendering efforts on those areas, resulting in a more immersive and comfortable experience.

In AR applications, where virtual objects are overlaid onto the real world, accurate visibility counts per pixel are essential for seamless integration. By precisely determining which parts of the texture atlas are visible in the user’s field of view, developers can ensure that virtual objects blend seamlessly with the real environment, enhancing the overall AR experience.

Computer vision is another field that can benefit from this technology. Computer vision systems often rely on analyzing images or video streams to extract meaningful information. By accurately determining visibility counts per pixel, computer vision algorithms can focus on the relevant parts of an image or video, improving object recognition, tracking, and other computer vision tasks.

The market for systems and methods for generating visibility counts per pixel of a texture atlas associated with viewer telemetry data is expected to grow significantly in the coming years. As the demand for more immersive gaming experiences, realistic virtual reality, and seamless augmented reality applications increases, developers will seek innovative solutions to optimize rendering processes. Additionally, the rise of computer vision applications in various industries, such as autonomous vehicles and surveillance systems, will further drive the demand for this technology.

In conclusion, the market for systems and methods for generating visibility counts per pixel of a texture atlas associated with viewer telemetry data is a rapidly growing field with diverse applications. From gaming to virtual reality, augmented reality, and computer vision, this technology offers significant benefits in terms of optimizing rendering processes, improving visual experiences, and enhancing computer vision algorithms. As technology continues to advance, we can expect further innovation and development in this exciting field.

The Omnivor Inc invention works as follows

A processor-implemented method of generating a three-dimensional (3D) volumetric video with an overlay representing visibility counts per pixel of a texture atlas, associated with viewer telemetry data, is provided. The method consists of (i) collecting the viewer telemetry data; (ii) determining the visibility of each pixel of the texture atlas associated with the 3D content using the viewer telemetry data; (iii) generating at least one visibility count per pixel of the texture atlas based on the determined visibility; and (iv) generating the 3D volumetric video with an overlay of at least one heat map associated with the viewer telemetry data using the at least one visibility count per pixel.

Background for Systems and Methods for Generating Visibility Counts per Pixel of a Texture Atlas associated with Viewer Telemetry Data

Technical Field

Embodiments in this disclosure are generally related to volumetric video analysis, and, more specifically, to systems and methods for generating visibility counts per pixel of a texture atlas, associated with viewer telemetry data, for at least one of: generating a 3D volumetric video with an overlay associated with the viewer telemetry data, and generating and displaying a curated selection of content based on the viewer telemetry data.

Description of Related Art

Volumetric video is a technique for capturing a three-dimensional environment, such as an event or location. Such footage can be viewed on flat screens, 3D displays, and virtual reality goggles. Many consumer-facing formats are available, and the motion capture techniques involved rely on computer graphics and photogrammetry. The viewer experiences the results in real time and can explore the generated volume directly.

Volumetric video captures surfaces in 3D space and combines the visual quality of photography with the immersion and interactivity of 3D content. Volumetric videos can be created using multiple cameras that capture the surfaces within a defined volume, filming from multiple angles and interpolating over space and time. A volumetric video can also be generated from a synthetic 3D model. Volumetric video lets a viewer see a scene from multiple perspectives and angles.

Video analytics is used to measure, report on, and analyze how videos are viewed online. It allows online video publishers, advertising companies, media companies, and agencies to better understand the overall consumption patterns for a video shared by another party. Video analytics is a way to capture and analyze data that describes the viewer's behavior when watching a video.

Historically, data analytics was used to measure a company's marketing or advertising results and its standing against fierce competition. Analytics for traditional video is usually limited to the number of views, the watch duration, and the segments viewed. Existing video analytics apply only to traditional video, not to volumetric video.

There is a continuing need for an efficient method to mitigate and/or overcome the drawbacks of current methods.

In view of the foregoing, embodiments herein provide a processor-implemented method of generating a three-dimensional (3D) volumetric video with an overlay representing visibility counts per pixel of a texture atlas, associated with viewer telemetry data. The method comprises (i) collecting the viewer telemetry data; (ii) determining the visibility of each pixel of the texture atlas of the 3D content using the viewer telemetry data; (iii) generating at least one visibility count per pixel based on the determined visibility; and (iv) generating the 3D volumetric video with an overlay of at least one heat map associated with the viewer telemetry data. The viewer telemetry data includes data describing at least one intrinsic camera parameter and extrinsic camera parameters, together with an associated time in the 3D volumetric video, or data describing a viewer's interaction with the 3D volumetric video and the associated time during the 3D volumetric video. The at least one visibility count per pixel of the texture atlas comprises one or more of: a view count per pixel, a visibility count per pixel for a single virtual camera position or a set of virtual camera positions, or a visibility count per pixel for a viewer's interaction with the 3D content.
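The summary above is abstract, so here is a rough sketch (ours, not Omnivor's) of how the four steps compose. Every callable passed in is a placeholder for one of the embodiments detailed below:

```python
import numpy as np

def visibility_overlay_pipeline(telemetry_samples, determine_visible_pixels,
                                atlas_shape, counts_to_heatmap):
    """Skeleton of steps (i)-(iv): accumulate per-pixel visibility counts
    over all telemetry samples, then turn the counts into a heat-map
    texture that can replace the original atlas as an overlay."""
    counts = np.zeros(atlas_shape, dtype=np.int64)        # step (iii) state
    for sample in telemetry_samples:                      # step (i) data
        for y, x in determine_visible_pixels(sample):     # step (ii)
            counts[y, x] += 1                             # step (iii)
    return counts_to_heatmap(counts)                      # step (iv)
```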

In some embodiments, the 3D volumetric video with the overlay of the at least one heat map is generated by (i) generating the at least one color map with an RGB color per pixel using the at least one visibility count per pixel of the texture atlas, and (ii) replacing the original texture atlas of the 3D content with the at least one associated heat map for each source geometry.

In some embodiments, generating the at least one heat map includes (i) generating at least one visibility histogram using the at least one visibility count per pixel, and (ii) converting the at least one visibility histogram into the at least one heat map.
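The patent text gives no implementation of the histogram-to-heat-map conversion, but a minimal sketch looks like the following (the function name and the blue-to-red ramp are our choices, not the patent's):

```python
import numpy as np

def counts_to_heatmap(counts: np.ndarray) -> np.ndarray:
    """Convert an (H, W) array of per-pixel visibility counts into an
    (H, W, 3) uint8 RGB heat map (blue = rarely seen, red = often seen)."""
    max_count = counts.max()
    # Normalize counts to [0, 1]; guard against an all-zero histogram.
    t = (counts.astype(np.float64) / max_count if max_count > 0
         else np.zeros(counts.shape, dtype=np.float64))
    heatmap = np.zeros((*counts.shape, 3), dtype=np.uint8)
    heatmap[..., 0] = (255 * t).astype(np.uint8)           # red ramps up
    heatmap[..., 2] = (255 * (1.0 - t)).astype(np.uint8)   # blue ramps down
    return heatmap

# Example: a 4x4 atlas where one pixel was seen 10 times.
counts = np.zeros((4, 4), dtype=np.int64)
counts[1, 2] = 10
print(counts_to_heatmap(counts)[1, 2])  # -> [255   0   0]
```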

In some embodiments, determining the visibility comprises (i) generating at least one index map, an image of the same size as the texture atlas that assigns a unique color to each valid pixel of each frame of the 3D content, (ii) rendering an image based on the viewer telemetry data and the at least one index map to obtain an index-rendered image, and (iii) determining the visibility by mapping the unique colors of the index-rendered image back to pixel locations in a visibility texture atlas. In some embodiments, the visibility texture atlas is a texture atlas that contains visibility information for at least a portion of the pixels of the texture atlas. In some embodiments, there is a one-to-one mapping between the unique colors per frame in the index map and the locations of visible pixels in the visibility texture atlas.
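A minimal sketch of the color-encoding and read-back half of this idea, assuming a 24-bit color per texel (the encoding scheme and all names are ours; a real pipeline would render the scene geometry with these colors and read the framebuffer back):

```python
import numpy as np

ATLAS_W, ATLAS_H = 256, 256  # illustrative atlas dimensions

def texel_to_color(x: int, y: int) -> tuple:
    """Encode an atlas texel coordinate as a unique 24-bit RGB color."""
    idx = y * ATLAS_W + x + 1  # reserve 0 for "no geometry"
    return ((idx >> 16) & 0xFF, (idx >> 8) & 0xFF, idx & 0xFF)

def color_to_texel(r: int, g: int, b: int):
    """Decode a rendered color back to its atlas texel, or None for background."""
    idx = (r << 16) | (g << 8) | b
    if idx == 0:
        return None
    idx -= 1
    return idx % ATLAS_W, idx // ATLAS_W

def accumulate_visibility(index_rendered: np.ndarray, counts: np.ndarray) -> None:
    """Increment per-texel counts for every texel whose unique color appears
    in the index-rendered image (one viewer telemetry sample)."""
    seen = set()
    for r, g, b in index_rendered.reshape(-1, 3):
        texel = color_to_texel(int(r), int(g), int(b))
        if texel is not None and texel not in seen:
            seen.add(texel)
            counts[texel[1], texel[0]] += 1

# Example: a 2x2 "rendered" image in which atlas texels (0,0) and (1,0) appear.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = texel_to_color(0, 0)
img[0, 1] = texel_to_color(1, 0)
counts = np.zeros((ATLAS_H, ATLAS_W), dtype=np.int64)
accumulate_visibility(img, counts)
print(counts[0, 0], counts[0, 1])  # -> 1 1
```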

In some embodiments, the visibility information is stored in a boolean table that includes a visible or not-visible token value corresponding to each pixel of the visibility texture atlas.

In some embodiments, determining the visibility includes (i) placing the 3D geometry into a spatial data structure that supports at least one ray-casting query, (ii) generating either (a) a 3D pixel for each valid pixel of the visibility texture atlas, or (b) the 3D pixel and a corresponding bounding box using a depth atlas for each valid pixel of the visibility texture atlas, and (iii) determining the visibility by ray casting from the virtual camera associated with the viewer telemetry data to each 3D pixel.
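A brute-force sketch of the ray-cast visibility test, using the standard Möller-Trumbore intersection (in practice the loop over triangles would be a query against the spatial data structure, e.g. a BVH; all names here are ours):

```python
import numpy as np

def ray_triangle_t(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller-Trumbore: return the ray parameter t of the intersection with
    triangle (v0, v1, v2), or None if the ray misses it."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return None  # ray parallel to the triangle plane
    inv = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv
    return t if t > eps else None

def point_visible(camera_pos, point, triangles, eps=1e-6):
    """A 3D pixel is visible if no triangle blocks the segment from the
    virtual camera to the point."""
    direction = point - camera_pos
    dist = np.linalg.norm(direction)
    direction = direction / dist
    for v0, v1, v2 in triangles:
        t = ray_triangle_t(camera_pos, direction, v0, v1, v2)
        if t is not None and t < dist - eps:
            return False  # an occluder sits between camera and point
    return True

# Example: a triangle at z=2 blocks the view from (0,0,5) to the origin.
cam = np.array([0.0, 0.0, 5.0])
target = np.array([0.0, 0.0, 0.0])
blocker = (np.array([-1.0, -1.0, 2.0]), np.array([1.0, -1.0, 2.0]),
           np.array([0.0, 1.0, 2.0]))
print(point_visible(cam, target, [blocker]))  # -> False
```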

The method may further include (i) mapping the value of at least one pixel of the index-rendered image to the texture atlas, and (ii) generating the at least one visibility histogram based on the mapping.

In one aspect, a processor-implemented method of generating a curated selection of three-dimensional (3D) volumetric content based on viewer telemetry data is provided. The method comprises (i) collecting the viewer telemetry data, (ii) determining the visibility of each pixel associated with the 3D content based on the viewer telemetry data, (iii) generating at least one visibility count per pixel of a texture atlas using the determined visibility, and (iv) generating the curated selection of 3D volumetric content based on the viewer telemetry data using the generated visibility counts. The viewer telemetry data includes data describing at least one intrinsic camera parameter and extrinsic camera parameters, together with an associated time in the 3D content, and data recording and describing a viewer's interaction with the 3D content. The at least one visibility count per pixel comprises one or more of: a view count per pixel, a visibility count per pixel for a virtual camera position or set of virtual camera positions, a visibility count per pixel for a viewer's interaction with the 3D content, and a visibility count per pixel for a camera orientation.

In some embodiments, the distance function is given by:

d_ij = alpha * l2_norm(p_i - p_j) + beta * dot_product(q_i, q_j) + gamma * (f_i - f_j)

In some embodiments, alpha, beta, and gamma are relative weighting parameters, and i and j refer to unique views, so that p_i and p_j are the positions of views i and j. In some embodiments, p represents the three degrees of freedom of position, q represents the three degrees of freedom of orientation in an axis-angle encoding, and f is a field of view; p and q are thus 3-dimensional. The l2_norm and dot_product functions take N-dimensional vectors and produce scalars. In some embodiments, views are clustered based on this distance function using standard clustering algorithms.
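The distance function translates directly into code. This sketch follows the formula exactly as written (including the dot-product term on orientations); the default weights of 1.0 and the toy data are our assumptions:

```python
import numpy as np

def view_distance(p_i, q_i, f_i, p_j, q_j, f_j,
                  alpha=1.0, beta=1.0, gamma=1.0):
    """d_ij per the patent's formula: p = position (3-vector), q = axis-angle
    orientation (3-vector), f = scalar field of view."""
    return (alpha * np.linalg.norm(p_i - p_j)      # l2_norm of position delta
            + beta * np.dot(q_i, q_j)              # dot product of orientations
            + gamma * (f_i - f_j))                 # field-of-view difference

# Pairwise distance matrix over three toy views, ready for any standard
# clustering algorithm that accepts a precomputed distance matrix.
views = [
    (np.array([0.0, 0.0, 5.0]), np.array([0.0, 0.0, 0.0]), 60.0),
    (np.array([0.1, 0.0, 5.0]), np.array([0.0, 0.1, 0.0]), 60.0),
    (np.array([4.0, 2.0, 1.0]), np.array([1.0, 0.0, 0.0]), 90.0),
]
D = np.array([[view_distance(*a, *b) for b in views] for a in views])
print(D.shape)  # -> (3, 3)
```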

In some embodiments, generating the curated selection of 3D volumetric content includes (i) generating an initial cluster of views to refine using the at least one visibility histogram, (ii) defining a scoring function for at least one view among the initial cluster of views, (iii) sampling scores from nearby views based on the at least one visibility histogram to define a gradient, and (iv) computing n steps of gradient descent to generate the curated selection of 3D volumetric content based on the defined scoring function. In some embodiments, the score is the sum of the visibility counts for each pixel of the texture atlas visible from the view, divided by the number of visible pixels. In some embodiments, n is an integer.
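A minimal sketch of this refinement loop, using finite differences for the "sample nearby views" gradient and ascending the score (equivalently, descending its negative). Here `visible_pixels_fn` is a stand-in for either visibility method described earlier, and all names, step sizes, and deltas are our assumptions:

```python
import numpy as np

def view_score(view, counts, visible_pixels_fn):
    """Score = sum of visibility counts over the atlas pixels visible from
    `view`, divided by the number of visible pixels (as described above)."""
    pixels = visible_pixels_fn(view)  # -> list of (y, x) atlas coordinates
    if not pixels:
        return 0.0
    return sum(counts[y, x] for y, x in pixels) / len(pixels)

def refine_view(view, counts, visible_pixels_fn, n=10, step=0.05, delta=0.01):
    """Refine a view over n steps by finite-difference gradients of the
    score, sampling nearby views along each view-parameter axis."""
    view = np.asarray(view, dtype=np.float64)
    for _ in range(n):
        base = view_score(view, counts, visible_pixels_fn)
        grad = np.zeros_like(view)
        for k in range(view.size):
            probe = view.copy()
            probe[k] += delta  # a nearby view along parameter axis k
            grad[k] = (view_score(probe, counts, visible_pixels_fn) - base) / delta
        view += step * grad  # move toward higher-scoring views
    return view
```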

As with the first aspect, in some embodiments determining the visibility comprises (i) generating at least one index map, an image of the same size as the texture atlas that assigns a unique color to each valid pixel of each frame of the 3D content, (ii) rendering an image based on the viewer telemetry data and the at least one index map to obtain an index-rendered image, and (iii) determining the visibility by mapping the unique colors of the index-rendered image back to pixel locations in a visibility texture atlas. In some embodiments, the visibility texture atlas contains visibility information for at least a portion of the pixels of the texture atlas, and there is a one-to-one mapping between the unique colors per frame in the index map and the locations of visible pixels in the visibility texture atlas.

Likewise, in some embodiments the visibility information is stored in a boolean table that includes a visible or not-visible token value corresponding to each pixel of the visibility texture atlas.
