Invention for Methods for processing content

Invented by John D. Lord, Digimarc Corp

The market for methods for processing content has experienced significant growth in recent years, driven by the increasing demand for efficient and effective ways to handle large amounts of information. With the rise of digital content and the proliferation of online platforms, businesses and individuals alike are constantly seeking methods to process and manage content in a streamlined manner.

One of the key drivers of this market is the need for content curation and organization. As the amount of digital content continues to grow exponentially, businesses are finding it increasingly challenging to manage and make sense of the vast amounts of information available. Methods for processing content, such as content management systems (CMS) and data analytics tools, have emerged as essential tools for businesses to curate, organize, and extract valuable insights from their content.

Content management systems, in particular, have become a staple in the market for processing content. These systems allow businesses to store, manage, and publish their content in a structured manner, making it easier to access and share information across different teams and departments. CMSs also often come with features such as version control, workflow management, and content collaboration, further enhancing the efficiency and effectiveness of content processing.

Another significant aspect of the market for processing content is the need for content analysis and extraction. Businesses are increasingly relying on data analytics tools to extract valuable insights from their content, such as customer preferences, market trends, and sentiment analysis. Natural language processing (NLP) techniques, machine learning algorithms, and text mining tools are commonly used to analyze and extract meaningful information from text-based content, enabling businesses to make data-driven decisions and gain a competitive edge.

Furthermore, the market for methods for processing content is also driven by the need for content personalization and customization. With the abundance of content available online, businesses are striving to deliver personalized experiences to their customers. Methods such as recommendation engines, personalized search algorithms, and targeted advertising rely on content processing techniques to analyze user preferences and behavior, and deliver tailored content to individual users.

The market for methods for processing content is highly competitive, with numerous players offering a wide range of solutions and services. Established technology giants, such as Microsoft, IBM, and Adobe, dominate the market with their comprehensive content management and analytics platforms. However, there are also numerous smaller players and startups that offer specialized solutions for specific content processing needs, such as social media analytics, content moderation, and content enrichment.

In conclusion, the market for methods for processing content is experiencing rapid growth, driven by the increasing demand for efficient content curation, analysis, and personalization. Businesses are recognizing the importance of effectively managing and extracting insights from their content to stay competitive in today’s digital landscape. As the volume of digital content continues to grow, the market for methods for processing content is expected to expand further, offering new and innovative solutions to meet the evolving needs of businesses and individuals alike.

The Digimarc Corp invention works as follows

The disclosed technologies can be used to enable a smart device to respond to the user's surroundings, for example by serving as an intelligent seeing and hearing device. The detailed arrangements include: using radio base station equipment (e.g., SDR hardware at a cell tower) to perform image recognition for phones; forecasting service requirements and delegating tasks to remote processors, with the service provider selected through a competitive procedure; using nearby processors, such as those in an automobile, a set-top box, or another phone, for remote execution; phones with separate camera and/or lighting components; phone camera illumination with different colors of light; and search tree methods. Many other features are detailed.

Background for Methods for processing content

The present specification details an interrelated body of work spanning a wide variety of technologies, including image processing architectures for cell phones, cloud computing, reverse-auction-based service delivery, metadata processing, and the conveyance of semantic information through imagery. Each section of the specification details technologies that can be incorporated into the others, so it is difficult to identify "a beginning" at which the disclosure logically should start. That said, let's just dive in.

There is currently a large disconnect between the enormous amount of information present in the high-quality image data streamed from a mobile phone camera and the device's ability to process that data. "Off-device" processing of visual data is a natural way to handle this firehose of data, especially when multiple visual processing tasks are desired. These issues become even more important when "real-time" object recognition and interactivity are considered: the user expects to see augmented reality graphics on the screen of the mobile device as soon as the camera is pointed at an object or scene.

According to one aspect of the present technology, a network of distributed pixel processing engines serves such mobile device users, meeting a qualitative "human real-time interactivity" requirement: feedback is typically received in less than a second. Implementation provides certain basic features on the mobile device, such as a close coupling between the output pixels of the image sensor and the native communication channel, and some basic level of "content classification and filtering" pixel processing performed by the local device itself. The word "session" is key. A session here is essentially a duplex, packet-based communication with fast responses sent back to the mobile device: every second, several incoming "pixel packets" (which may be updated pixel data) and several outgoing response packets may flow.
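The duplex "session" idea can be pictured with a small sketch. The Python below is a minimal illustration under stated assumptions, not Digimarc's implementation: the camera and channel objects, their method names, and the packet field names are all hypothetical stand-ins for whatever sensor interface and native communication channel the device exposes.

```python
import time
import uuid

def run_visual_session(camera, channel, fps=10, duration_s=5):
    """Minimal sketch of a duplex 'visual query' session.

    Each second, several 'pixel packets' go out (camera patches plus a
    small header) and several response packets come back (recognition
    results, augmented-reality overlay data, etc.). `camera` and `channel`
    are hypothetical objects standing in for the image sensor and the
    device's native communication channel.
    """
    session_id = uuid.uuid4().hex
    deadline = time.time() + duration_s
    while time.time() < deadline:
        patch = camera.grab_patch()          # raw or lightly pre-filtered pixels
        channel.send({
            "session": session_id,           # ties outgoing and incoming traffic together
            "timestamp": time.time(),
            "payload": patch,                # pixel or derived keyvector data
        })
        for response in channel.poll():      # zero or more packets back this cycle
            handle_response(response)        # e.g., update an on-screen overlay
        time.sleep(1.0 / fps)

def handle_response(response):
    print("result for session", response.get("session"), ":", response.get("result"))
```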

The spreading out of applications from this starting point arguably depends on a core set of plumbing features inherent to mobile cameras. These features (the list is not exhaustive) include: (a) higher-quality pixel acquisition and low-level processing; (b) better on-device CPU and GPU resources for processing pixels, with subsequent feedback to users; and (c) structured connectivity to "the cloud." In addition, there is a maturing infrastructure for traffic monitoring and billing. FIG. 1 gives a graphic view of some of the plumbing features that make up what could be described as a visually intelligent system. For clarity, the usual details of a mobile phone, such as the A/D converter, modulation and demodulation systems (IF stages), cellular transceiver, etc., are not shown.

Better GPUs and CPUs on mobile devices are welcome, but they are not enough. Cost, weight, and power considerations seem to favor having "the cloud" do as much of the "intelligence" work as possible.

Relatedly, it appears there should be a common set of "device-side" operations applied to visual data destined for any cloud process, including certain formatting, graphic processing, and other rote tasks. There should also be a standard basic header and address scheme for the (typically packetized) communication traffic back and forth.

FIG. 2 lists, non-exhaustively, examples of the many visual processing applications available to mobile devices. It is difficult not to see the analogies between this list and how the human visual system and brain work. How "optimized" that biological system is remains a well-studied academic question; the eye-retina-optic nerve-cortex system serves an array of cognitive needs very efficiently. This aspect of the technology concerns how similarly efficient and broadly enabling components can be built into mobile phones, mobile device connections, and network services, with the goal of serving the applications shown in FIG. 2 as well as new ones that appear as technology continues to evolve.

Perhaps the most important difference between mobile device networks and the human analogy is the concept of the marketplace, in which buyers keep purchasing more and better products so long as businesses can profit by providing them. Any technology aiming to serve the applications listed in FIG. 2 can take it as a given that hundreds, if not thousands, of businesses will work on the details of commercial offerings in the hope of making money in some way. A few giants will dominate the mobile industry's main cash-flow lines, but niche players will continue to develop specialized applications and services. This disclosure explains how a market for visual processing services could develop in which business interests from across the spectrum can benefit.
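The "competitive procedure" for delegating a task to a service provider can be pictured as a simple reverse auction. The sketch below is illustrative only; the bid fields (price, estimated latency) and the scoring rule are assumptions, not terms from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    provider: str      # e.g., a cell-tower node, a nearby phone, a set-top box
    price: float       # hypothetical cost quoted for the task
    latency_ms: float  # hypothetical estimated turnaround time

def select_provider(bids, max_latency_ms=500.0):
    """Pick the cheapest bid that still meets the responsiveness target.

    A real marketplace would weigh many more factors (trust, past
    performance, battery cost); this only illustrates the reverse-auction
    idea of letting providers compete for each delegated task.
    """
    eligible = [b for b in bids if b.latency_ms <= max_latency_ms]
    if not eligible:
        return None
    return min(eligible, key=lambda b: b.price)

# Example: three providers bid on an image-recognition task.
winner = select_provider([
    Bid("cell-tower SDR", price=0.002, latency_ms=80),
    Bid("nearby automobile", price=0.001, latency_ms=300),
    Bid("distant cloud node", price=0.0005, latency_ms=900),
])
print(winner)  # -> Bid(provider='nearby automobile', ...)
```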

FIG. 4 introduces the technology with a sprint toward the abstract: an abstracted bit of information derived from a batch of photons that impinged on an electronic image sensor, with a universe of consumers awaiting this lowly bit. FIG. 4A illustrates the intuitive notion that individual bits of visual data are not worth much unless they are grouped together spatially and temporally; modern video compression standards such as MPEG-4 and H.264 make good use of this core concept.

Certain processing can remove the "visual" character of the bits (consider, for example, the vector string representing eigenface information). We therefore sometimes use the term "keyvector data" (or "keyvector strings") to refer collectively to pixel data together with associated information and derivatives.

The figures also give a major role to pixel packets, which are addressed packages with a body into which keyvector information is inserted. Keyvector data can be a single patch, a group of patches, a series of patches, or other collections. A pixel packet can be smaller than a kilobyte or much larger; it could carry a small patch of pixels from a large image, or an entire Photosynth.
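As a concrete (if hypothetical) picture of such a packet, the sketch below pairs an address header with a keyvector body. The field names and types are illustrative assumptions, not the disclosure's packet format.

```python
from dataclasses import dataclass, field
from typing import List, Union

# A keyvector body may carry raw pixels, a patch or group of patches, or
# derived data (e.g., an eigenface vector) that is no longer "visual".
KeyvectorData = Union[bytes, List[float]]

@dataclass
class PixelPacket:
    session_id: str            # ties the packet to an interactive session
    destination: str           # address of the pixel-processing service, local or remote
    operation: str             # requested component task, e.g. "fft", "edge_detect", "sift_match"
    keyvector: KeyvectorData   # the payload: pixel data and/or its derivatives
    metadata: dict = field(default_factory=dict)  # e.g., sensor settings, timestamps

# A packet might be tiny (a few derived feature values) ...
small = PixelPacket("s1", "local:gpu", "edge_detect", keyvector=[0.12, 0.87, 0.44])
# ... or carry a large image patch destined for a cloud service.
large = PixelPacket("s1", "cloud:recognizer-17", "sift_match", keyvector=b"\x00" * 200_000)
```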

When actually pushed around a network, however, a pixel packet may be broken into smaller pieces, as the transport layer constraints of the network (or LAN) may require.
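Splitting an oversized packet body for transport might look like the following sketch; the 1 KB chunk size is an arbitrary assumption standing in for whatever the transport layer's limit happens to be.

```python
def fragment(payload: bytes, mtu: int = 1024):
    """Split a large pixel-packet payload into transport-sized fragments.

    Each fragment carries its index and the total count so the far end
    can reassemble the original keyvector body in order.
    """
    chunks = [payload[i:i + mtu] for i in range(0, len(payload), mtu)]
    total = len(chunks)
    return [{"index": i, "total": total, "data": chunk}
            for i, chunk in enumerate(chunks)]

fragments = fragment(b"\x00" * 5000)
assert len(fragments) == 5 and fragments[-1]["index"] == 4
```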

FIG. 5 is a segue diagram, still at an abstract level but pointing toward the concrete. A list of user-defined applications, such as illustrated in FIG. 2, will map to a state-of-the-art inventory of pixel processing methods and approaches which can accomplish each and every application. These pixel processing methods break down into common and not-so-common component sub-tasks. Object recognition textbooks are filled with a wide variety of approaches and terminologies which bring a sense of order into what at first glance might appear to be a bewildering array of "unique requirements" relative to the applications shown in FIG. 2. But FIG. 5 attempts to show that there are indeed a set of common steps and processes shared between visual processing applications. The differently shaded pie slices attempt to illustrate that certain pixel operations are of a specific class and may simply have differences in low-level variables or optimizations. The size of the overall pie (thought of in a logarithmic sense, where a pie twice the size of another may represent 10 times more Flops, for example) and the percentage size of each slice represent degrees of commonality.

FIG. 6 takes a big step toward the concrete, at some sacrifice of simplicity. The top portion of the figure is labeled "Resident Call-Up Visual Processing Services": the applications (such as those listed in FIG. 2) that a mobile device is aware of, or even able to perform. Not all of these applications need to be active at all times, so some subset of the services is actually "turned on." As a one-time configuration step, the turned-on applications negotiate, via the "Common Processes Sorter," to identify common component tasks. The first step is to generate an overall list of all the pixel processing functions available for device processing. These routines can be selected from a library of basic image processing routines, such as FFT, filtering, edge detection, resampling, color histogramming, log-polar transform, etc. Generation of corresponding Flow Gate Configuration/Software Programming information follows, which literally loads library elements into properly ordered places in a field-programmable gate array set-up, or otherwise configures a suitable processor to perform the required component tasks.
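A toy version of such a sorter might look like the sketch below. The application-to-subtask table and routine names are invented for illustration; only the general idea of intersecting the turned-on applications' requirements and emitting an ordered load list for the configurable processor comes from the text.

```python
# Hypothetical mapping from turned-on applications to the low-level
# pixel-processing routines each one needs (names are illustrative).
APP_REQUIREMENTS = {
    "barcode_reading":   ["resample", "filter", "edge_detect"],
    "face_recognition":  ["resample", "filter", "fft", "eigenface_project"],
    "watermark_reading": ["resample", "fft", "log_polar"],
}

def common_processes_sorter(active_apps):
    """Return (shared, full) routine lists for the active applications.

    `shared` holds the component tasks every active application needs
    (candidates for a single configured pipeline stage); `full` is the
    ordered union used to program the FPGA / processor set-up.
    """
    needed = [APP_REQUIREMENTS[a] for a in active_apps]
    shared = set(needed[0]).intersection(*map(set, needed[1:])) if needed else set()
    full, seen = [], set()
    for routines in needed:                 # preserve a deterministic load order
        for r in routines:
            if r not in seen:
                full.append(r)
                seen.add(r)
    return sorted(shared), full

print(common_processes_sorter(["face_recognition", "watermark_reading"]))
# -> (['fft', 'resample'], ['resample', 'filter', 'fft', 'eigenface_project', 'log_polar'])
```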

(Consider a Pepsi promotional game that invites people to take part in a treasure hunt: using internet-distributed clues, players try to locate a specially marked six-pack hidden in a state park, with a $500 prize for whoever finds it. The clues are distributed via a special application downloaded from the Pepsi.com website (or the Apple App Store). The application includes a prize-verification component that uses image data from the user's cell phone to identify the unique pattern on the six-pack. SIFT object recognition (discussed in a following section) is used, with SIFT feature descriptors provided with the application. When a match is found, the cell phone wirelessly transmits the match information to Pepsi, and the user whose phone is first to detect the specially marked six-pack wins.)
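As a rough idea of what such a prize-verification step could involve, the following OpenCV sketch matches a camera frame against pre-supplied SIFT descriptors using Lowe's ratio test. It is a generic SIFT-matching recipe, not the app described in the disclosure; the function name, the 0.75 ratio, and the match-count threshold are assumptions.

```python
import cv2          # OpenCV >= 4.4 ships SIFT in the main module
import numpy as np

def looks_like_prize(frame_gray: np.ndarray,
                     prize_descriptors: np.ndarray,
                     min_good_matches: int = 25) -> bool:
    """Check whether a camera frame matches descriptors shipped with the app.

    `prize_descriptors` stands in for the SIFT feature descriptors
    distributed with the promotional application (float32 array); the
    ratio and threshold are conventional choices, not disclosure values.
    """
    sift = cv2.SIFT_create()
    _, frame_desc = sift.detectAndCompute(frame_gray, None)
    if frame_desc is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(prize_descriptors, frame_desc, k=2)
    # Lowe's ratio test: keep matches clearly better than their runner-up.
    good = [pair[0] for pair in knn
            if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]
    return len(good) >= min_good_matches
```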

FIG. 7 presents a generic view of a pixel service network, with local device pixel services and cloud-based pixel services depicted symmetrically. The router shown in FIG. 7 is responsible for ensuring that any packaged pixel packet gets sent to the correct pixel processing location, whether local or remote. (The style of fill pattern indicates different component processing functions; only a few processing functions are shown.) Some data sent to cloud-based services is first processed by the local device services. The circles show that routing functionality can also be based in cloud nodes, which distribute tasks to service providers and collect results to send back to the device. These functions can likewise be performed by modules located at the network edge, such as at wireless towers, ensuring the fastest response. Pixel Service Manager software receives the results from external service providers and from active local processing, and interacts with the device user interface.
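The router's job can be sketched as a dispatch table keyed by the requested operation and a placement policy, reusing the hypothetical PixelPacket structure from earlier. Everything below, including the service registries and handler names, is a stand-in for the routing logic the figure describes, not the disclosure's design.

```python
# Hypothetical registry of where each component processing function runs.
LOCAL_SERVICES = {"edge_detect", "resample", "filter"}
CLOUD_SERVICES = {"sift_match", "face_recognition", "ocr"}

def route(packet, local_pool, cloud_gateway):
    """Send a pixel packet to the right processing location.

    `local_pool` stands in for on-device processors (CPU/GPU/FPGA blocks);
    `cloud_gateway` for an edge node (e.g., at a wireless tower) that
    distributes work to external service providers and returns results
    to the Pixel Service Manager.
    """
    op = packet.operation
    if op in LOCAL_SERVICES:
        return local_pool.submit(op, packet.keyvector)
    if op in CLOUD_SERVICES:
        return cloud_gateway.forward(packet)
    raise ValueError(f"no registered service for operation {op!r}")
```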


