Invention for Methods and systems for monitoring and controlling content in media using machine learning

Invented by Mostafa Tofighbakhsh, AT&T Intellectual Property I LP

The market for methods and systems for monitoring and controlling content in media using machine learning is rapidly expanding as the need for effective content moderation becomes increasingly important in today’s digital age. With the exponential growth of online platforms and social media, there is a pressing demand for automated solutions that can efficiently analyze and filter content to ensure compliance with community guidelines, prevent the spread of harmful or inappropriate material, and protect users from online harassment and abuse.

Machine learning, a subset of artificial intelligence, has emerged as a powerful tool in content moderation due to its ability to learn and adapt from large datasets. By training algorithms on vast amounts of labeled data, machine learning models can identify patterns, recognize objects, and understand context, enabling them to make accurate predictions and classifications.

One of the key applications of machine learning in content moderation is the detection of offensive or inappropriate content. Algorithms can be trained to identify hate speech, nudity, violence, and other forms of harmful content, allowing platforms to automatically flag and remove such material. This not only helps maintain a safe and respectful online environment but also reduces the burden on human moderators who would otherwise have to manually review every piece of content.
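For readers curious how such a detector is typically built, the following is a minimal sketch of a text classifier in Python using scikit-learn. The tiny inline dataset, labels, and 0.5 threshold are illustrative placeholders, not a production moderation pipeline.

```python
# Toy text classifier for flagging abusive content; dataset and threshold are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "have a wonderful day everyone",        # benign
    "you are all idiots and deserve pain",  # abusive
    "great game last night",                # benign
    "i will hurt you if you post again",    # abusive
]
labels = [0, 1, 0, 1]  # 0 = acceptable, 1 = flag for human review

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score new content; anything above the threshold is routed to human moderators.
score = model.predict_proba(["you idiots deserve pain"])[0][1]
if score > 0.5:
    print(f"flagged for review (score={score:.2f})")
```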

Another important aspect of content moderation is the prevention of misinformation and fake news. Machine learning algorithms can analyze the credibility and reliability of sources, fact-check claims, and identify misleading or false information. By automatically flagging such content, platforms can minimize the spread of misinformation and ensure that users are presented with accurate and trustworthy information.

Furthermore, machine learning can be used to detect and prevent online harassment and cyberbullying. Algorithms can analyze text and social interactions to identify abusive or threatening behavior, allowing platforms to take appropriate actions such as issuing warnings, blocking users, or escalating the situation to human moderators if necessary. This helps create a safer and more inclusive online space, particularly for vulnerable individuals who are often targets of online abuse.

The market for methods and systems for monitoring and controlling content in media using machine learning is expected to grow significantly in the coming years. According to a report by MarketsandMarkets, the global content moderation solutions market is projected to reach $11.8 billion by 2025, with machine learning playing a crucial role in driving this growth.

Major technology companies and social media platforms are investing heavily in developing and implementing machine learning-based content moderation systems. They are not only focused on improving the accuracy and efficiency of existing algorithms but also on addressing challenges such as bias and false positives. By continuously refining and fine-tuning these systems, they aim to strike a balance between freedom of expression and the need for responsible content moderation.

However, there are also concerns surrounding the use of machine learning in content moderation. Critics argue that automated systems may lack nuance and context, leading to over-censorship or the suppression of legitimate content. There is also the risk of biases being embedded in algorithms, which could disproportionately impact certain groups or perpetuate existing inequalities. To address these concerns, transparency, accountability, and ongoing human oversight are crucial in ensuring that machine learning systems are used responsibly and ethically.

In conclusion, the market for methods and systems for monitoring and controlling content in media using machine learning is witnessing rapid growth. With the increasing challenges posed by the scale and complexity of online content, machine learning offers a promising solution for automating content moderation processes. However, it is essential to strike a balance between the benefits of automated systems and the need for human judgment to ensure responsible and unbiased content moderation in the digital era.

The AT&T Intellectual Property I LP invention works as follows

Aspects of the disclosure include, for instance, embodiments comprising obtaining viewing history data and generating a target user profile. Other embodiments involve generating a group of control rules based on the target user profile and training a machine-learning application based on the viewing history data. Further embodiments include receiving a first indication that media content is to be presented to the target user. Aspects of embodiments also include determining, by the machine-learning application, that the media content does not comply with the group of control rules and providing a notification that the media content is not in compliance. Other embodiments are disclosed.

Background for Methods and systems for monitoring and controlling content in media using machine learning

Children and parents enjoy downloading and streaming media content from media content providers to their various devices, and they can access that media content at remote locations as well. Parents can also equip their children’s media devices with parental controls that limit, filter, or otherwise control media content.

The subject disclosure includes, among other things, embodiments that monitor and control media content using machine learning. Embodiments can include obtaining viewing history data and generating a target user profile. Other embodiments include generating a group of control rules based on the target user profile and training a machine-learning application based on the viewing history data. Further embodiments include receiving a first indication that media content is to be presented to the target user. Aspects of embodiments also include determining, by the machine-learning application, that the media content does not comply with the group of control rules and providing a notification that the media content is not in compliance. The subject disclosure describes other embodiments.

One or more aspects of the disclosure concern a device. The device includes a processing system that includes a processor, and a memory that contains executable instructions. When executed by the processing system, these instructions facilitate performance of operations. These operations may include obtaining viewing history data and generating a target user profile. Other operations include generating a group of control rules based on the target user profile and training a machine-learning application based on the viewing history data and the control rules. A first indication can be received that media content will be presented to the target user. The machine-learning application can then determine that the media content does not comply with the group of control rules. The operations may also include providing a notification that the media content is not in compliance with the control rule group.

One or more aspects of the disclosure comprise a machine-readable medium comprising executable instructions which, when executed by a processing system including a processor, facilitate performance of operations. These operations may include receiving viewing history data and generating a target user profile. The target user profile can be used to generate a group of control rules and to train a machine-learning application based on the viewing history data and the control rules. The operations may also include receiving a first indication of media content to be presented to the target user and determining, by the machine-learning application, that it does not comply with the control rules. The operations can also include sending a first notification to a communication system associated with a monitoring user that the media content is not in compliance with the control rules.

One or more aspects of the disclosure concern a method. The method may include provisioning or updating a target user profile by a processing system that includes a processor, and obtaining viewing history data by the processing system. The method can also include determining, by the processing system, a group of control rules according to the target user profile. The method can further include training a machine-learning application by the processing system based on the viewing history data and the group of control rules, and receiving an indication that media content will be presented to a target user. The method can also include determining, by the machine-learning application, that the media content does not comply with the control rules. The method can also include providing, by the processing system, distraction media content that does comply with the group of control rules.
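To make the claimed flow concrete, here is a minimal sketch, in Python, of one way such a pipeline could be assembled: a target profile is provisioned, a group of control rules is derived from it, and requested content is screened before presentation. All class and function names here are illustrative assumptions, not taken from the patent.

```python
# A sketch of the claimed operation flow. All names are illustrative assumptions.
from dataclasses import dataclass, field

# Rating order used for the rating ceiling, mildest first.
TV_ORDER = ["TV-Y", "TV-Y7", "TV-Y7-FV", "TV-G", "TV-PG", "TV-14", "TV-MA"]

@dataclass
class TargetProfile:
    age: int
    max_rating: str = "TV-G"
    blocked_topics: set = field(default_factory=set)

def generate_control_rules(profile: TargetProfile) -> dict:
    # A control rule group: a rating ceiling plus topics to block outright.
    return {"max_rating": profile.max_rating, "blocked": profile.blocked_topics}

def complies(content: dict, rules: dict) -> bool:
    rating_ok = TV_ORDER.index(content["rating"]) <= TV_ORDER.index(rules["max_rating"])
    topics_ok = not (content["topics"] & rules["blocked"])
    return rating_ok and topics_ok

profile = TargetProfile(age=7, blocked_topics={"death of a parent"})
rules = generate_control_rules(profile)
content = {"title": "Sad Movie", "rating": "TV-PG", "topics": {"death of a parent"}}
if not complies(content, rules):
    print("notification: media content is not in compliance with the control rules")
```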

FIG. 1 depicts an illustrative embodiment of the system. Referring to FIG. 1, the system 100 comprises a video control node 102 as well as a media content server 104, a crowd source server 106, and a rating service server 108, all of which are computing devices. The system 100 also includes a media device 118, which displays media content from the media content server 104 for a child user 120 in a home 122. The system 100 further includes a monitoring media device 112, which receives video control alerts or the media content provided to the child user so that it can be monitored by a parent user 114 located at a remote location, such as an office 116.

In one or more embodiments, the functions of the video control node 102 can be distributed across several computing devices or contained within one device. Some or all video control functions may be integrated into the media content server 104 and/or the media devices 112 and 118. Likewise, the functions of the media content server 104 may be distributed across multiple computing devices or contained in a single computing device. Each of the rating service server 108 and the crowd source server 106 can be one or multiple computing devices.

In one or more embodiments, parent user 114 or child user 120 may provide input via a user device such as the monitoring media device 112, the media device 118, or another user device (e.g., a mobile phone). User input can be provided using any input device, such as a keyboard, mouse, or touchscreen, or via gestures, image recognition, or voice recognition. Media devices 112 and 118 can include a television, mobile phone, tablet, laptop, portable media player, or desktop computer. Parent user 114 can filter, limit, or control the media content viewed by child user 120 in order to prevent the viewing of inappropriate media content (e.g., violence, sexual content, etc.) or of media content that is disturbing or upsetting for the child user (e.g., the death of a parent, divorce, etc.). The user input can be sent to the video control node 102 in order to create a user profile for the child user. The video control node can also obtain viewing history data for the child user; the media devices 112 and 118 and/or the media content server 104 can provide this viewing history data. In certain embodiments, the child user 120 can create his or her own profile by entering information about media content preferences and demographic data (age, gender, etc.). In some embodiments, the parent user may input this information instead of, or in addition to, the information provided by the child user. In other embodiments, the child user’s profile can be created based on the viewing history data.

In one or more embodiments, the video control node 102 can create a group of control rules based on the user profile and the viewing history data for child user 120, or based on input from parent user 114. The child input or parent input can include a set of examples of media content that is appropriate for child user 120. The video control node can also train a machine learning application (e.g., software) based on the viewing history data, the group of control rules, and any additional input from either the parent user or the child user. The trained machine learning application can then be used on behalf of parent user 114 to filter, limit, or control the media content that child user 120 views on the media device 118. The viewing history data can be used to train a machine-learning application that allows child user 120 to view media content that is similar, identical, or related to media content in the viewing history data (e.g., media with the same actors, production staff, or production company, or media with similar themes, characters, ratings, crowd source responses, etc.). The machine learning application that filters, limits, and/or controls content can be hosted physically or virtually, including on one or more physical machines and/or virtual machines placed at the network edge or near the user device.
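One plausible way to realize the "similar or related" criterion is to represent each title as a set of attributes (actors, themes, rating, etc.) and admit new content whose overlap with the child's viewing history is high enough. The sketch below uses Jaccard similarity; the attribute sets and the 0.3 threshold are illustrative assumptions.

```python
# Jaccard-similarity gate over attribute sets; attributes and threshold are made up.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

viewing_history = [
    {"animated", "friendship", "TV-Y"},
    {"animated", "animals", "TV-G"},
]

def allowed_by_similarity(candidate: set, history: list, threshold: float = 0.3) -> bool:
    # Allow the title if it sufficiently resembles anything the child already watched.
    return any(jaccard(candidate, seen) >= threshold for seen in history)

print(allowed_by_similarity({"animated", "animals", "TV-G"}, viewing_history))  # True
print(allowed_by_similarity({"horror", "gore", "TV-MA"}, viewing_history))      # False
```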

In one or more embodiments, the viewing history data may be used to train the machine learning application. In some embodiments, the video control node can identify the rating for media content and then control the media content that is viewed on the media device accordingly. Media content can, for example, be assigned a TV rating created by the television industry (e.g., TV-Y, TV-Y7, TV-Y7-FV, TV-G, TV-PG, TV-14, TV-MA, etc.) or a film rating assigned by the Motion Picture Association of America (e.g., G, PG, PG-13, R, NC-17, etc.). The video control node 102 can obtain a rating for media content in the viewing history data by using the viewing history data itself, metadata, the crowd source server 106, or the rating service server 108. The video control node can also request a rating for media content within the viewing history data by providing its title or description to the crowd source server or the rating service server.

In one or more embodiments, users of a crowd-source website hosted by the crowd source server 106 may provide a quantitative rating (e.g., TV-PG, TV-MA, etc.) or a qualitative rating (e.g., a review stating that the media content contains violence, the death of a child, etc.). The rating service server 108 may be operated by an industry-wide rating service, such as the Motion Picture Association of America. The rating service server can likewise identify a rating, whether quantitative or qualitative (e.g., a critic review), according to the title or description of the media content and provide the rating to the video control node 102.

In one or more embodiments, rating metrics are identified for the media content within the viewing history data and compared with certain thresholds. For example, if over 50% (a threshold) of the media content in the viewing history data is rated TV-G, a control rule can be generated stating that any media content with a TV-PG rating or higher is determined not to be appropriate media content for the target user.
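A worked version of that threshold rule might look like the following, where a rating that dominates the viewing history becomes the rating ceiling. The rating order mirrors the TV ratings listed earlier; the function names are illustrative.

```python
# Derive a rating ceiling from the viewing history, then test compliance against it.
from collections import Counter

TV_ORDER = ["TV-Y", "TV-Y7", "TV-Y7-FV", "TV-G", "TV-PG", "TV-14", "TV-MA"]

def derive_max_rating(history_ratings: list, threshold: float = 0.5):
    counts = Counter(history_ratings)
    for rating in TV_ORDER:
        if counts[rating] / len(history_ratings) > threshold:
            return rating  # this rating dominates the history and becomes the ceiling
    return None            # no dominant rating; fall back to other rule sources

def complies(content_rating: str, max_rating: str) -> bool:
    return TV_ORDER.index(content_rating) <= TV_ORDER.index(max_rating)

ceiling = derive_max_rating(["TV-G", "TV-G", "TV-G", "TV-PG"])  # 75% TV-G -> "TV-G"
print(ceiling, complies("TV-PG", ceiling))  # TV-G False: TV-PG and above are blocked
```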

In one or more embodiments, either child user 120 or parent user 114 can input control rules via the monitoring media device 112, the media device 118, or another user device. For example, the parent user or the child user can provide a rule preventing the media device 118 from displaying media content that shows the death of a mother or father, as this subject matter would upset child user 120. In some embodiments, the video control node can analyze the content description using textual analysis/keyword search techniques, analyze the audio content with audio processing techniques, or analyze the video content with image processing techniques in order to determine whether there are any scenes that suggest the death of a parent (or other inappropriate content according to the control rules). In other embodiments, the video control node can obtain the media content and analyze its audio content using speech recognition, resulting in an audio content analysis. The machine learning application can then determine that the audio content analysis indicates the media content is not in compliance with the control rules; for example, the audio content analysis can identify words that do not comply with the control rules (e.g., offensive language, the word 'divorce', the words 'death of a mother', etc.). In further embodiments, the video control node can obtain the media content and analyze its image content using image recognition, resulting in an image content analysis. The machine learning application can then determine, based on the image content analysis, that the media content does not comply with the control rules; the image content analysis can identify offensive visual material (e.g., sex, violence, etc.).
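As a hedged illustration of the audio path, the sketch below scans a transcript (such as would be produced by a speech recognition step) for phrases banned by the control rules; the phrase list is taken from the examples above, and the transcript is made up.

```python
# Scan a transcript for phrases banned by the control rules.
import re

BANNED_PHRASES = ["death of a mother", "death of a father", "divorce"]

def transcript_complies(transcript: str) -> bool:
    text = transcript.lower()
    return not any(
        re.search(r"\b" + re.escape(phrase) + r"\b", text) for phrase in BANNED_PHRASES
    )

# Stand-in for the output of a speech recognition step over the audio track.
transcript = "In this episode the family copes with the death of a mother."
print(transcript_complies(transcript))  # False -> flag and notify the parent user
```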

In one or more embodiments, the video control node 102 can receive a qualitative rating or review from the rating service server 108 or the crowd source server 106. The qualitative rating or review can be text, audio, and/or video. The video control node 102 can use textual analysis/keyword search techniques to analyze a textual qualitative rating or review according to the control rules, allowing it to determine whether the media content associated with that rating or review should be presented to child user 120 on the media device 118. The video control node can also use audio recognition techniques to identify keywords in an audio or video rating or review, according to the control rules, to determine whether the associated media content should be presented to child user 120. The keyword search or audio recognition techniques can, for example, identify the phrase 'death of a mother', from which the media content can be determined to include subject matter relating to the death of a mother or father. In addition, the video control node 102 can use image processing/recognition techniques to identify images in a video rating or review according to the control rules to determine whether the associated media content should be presented on the media device 118 for child user 120. For example, the image processing/recognition techniques can analyze the video to determine whether any review suggests the media content includes scenes that are inappropriate for the child user (e.g., sex, violence, etc.).

Such a control rule could have been created by the video control node or provided by either the parent user or the child user to restrict the presentation of media content that deals with the death of a parent, as this subject matter would upset child user 120.

In one or more embodiments, the video control node can train the machine-learning application based on the rating information accessed from the rating service server 108.

In one or more embodiments, child user 120 may attempt to view media content on the media device 118 alone at home 122, without parental supervision, while parent user 114 is at the office 116. The system 100 may be configured so that, in response to the child user requesting media content from the media content server 104 via the media device 118, the server 104 or the media device 118 sends a message or other notification to the video control node 102. In other embodiments, the media device 118 can have a media content viewing application that the child user accesses by logging in with the child user's login credentials. After login, the media device 118 or the media content viewing application is linked to the child user's profile and the control rules associated with it. The video control node 102 is then notified that media content has been requested by the child user; this notification includes or directs the video control node 102 to the child user's profile so that the video control node can access the control rules for that profile. After receiving the indication that media content has been requested and accessing the control rules for the child user's profile, the video control node processes the media content using the machine-learning application, as described herein, to determine whether the media content complies with the control rules associated with the child user's profile. Text, audio, image, and other processing techniques can be used to make this determination. The video control node can then send a notification that the media content does not comply with the control rules to the media devices 112 and/or 118. The notification can include a title or description of the requested media content and whether the content is in compliance with the parental control rules. (In other embodiments, the video control node 102 can be configured to notify the parent user via the media device 112 every time media content is requested by or provided to the child user, regardless of whether that content conforms to the control rules.) In some embodiments, the parent user can send instructions via the media device 112 for the video control node to prevent the presentation of the media content to child user 120. In some embodiments, the video control node can be configured to prevent the presentation of the media content to child user 120 via the media device 118 even without additional instructions from parent user 114. In other embodiments, parent user 114 can instruct the video control node 102 to present the media content on the media device 118 even though the requested content does not comply with the control rules.
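A simplified version of that request-and-notify flow might look like the sketch below, where the screening check, the parent notification, and the parent's decision are passed in as stub callables standing in for the real machine-learning application and messaging channels.

```python
def handle_request(title, rules, complies, notify_parent, await_decision):
    """Screen a requested title; notify the parent and honor their decision on a violation."""
    if complies(title, rules):
        return "present"
    notify_parent(f"'{title}' does not comply with the control rules")
    return "present" if await_decision() == "allow" else "block"

# Stubbed usage: pretend the title violates the rules and the parent blocks it.
decision = handle_request(
    "Some Show",
    rules={"max_rating": "TV-G"},
    complies=lambda t, r: False,    # stub for the machine-learning check
    notify_parent=print,            # stub: print instead of pushing to device 112
    await_decision=lambda: "block", # stub: the parent's reply
)
print(decision)  # block
```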

In one or more embodiments, child user 120 may be at a residence other than their own home 122, where media content can be accessed on a media device (e.g., a television, computer, etc.). The media device 118 can relay its presence information to the video control node 102, and the video control node can likewise be informed of the presence of the media device 112. The video control node is also provided with information that the media device 112 belongs to parent user 114 and that the media device 118 belongs to child user 120. The presence information indicates the locations of the media devices 112 and 118. By comparing the presence information of media device 112 with that of media device 118, the video control node determines that parent user 114 is unlikely to be in the same place as child user 120 and therefore that the child user is not under parental supervision. In some embodiments, the video control node 102 applies the control rules using the machine learning application only when it determines that the media device 112 is not in the same place (within a threshold distance) as the media device 118.
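The presence comparison could be realized, for example, as a great-circle distance check between the reported device locations, as in the sketch below; the coordinates and the 100-meter threshold are illustrative assumptions.

```python
# Great-circle distance check between reported device locations.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Distance between two (lat, lon) points in meters on a spherical Earth."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def unsupervised(parent_pos, child_pos, threshold_m=100):
    # Apply the control rules only when the devices are far enough apart.
    return haversine_m(*parent_pos, *child_pos) > threshold_m

# Illustrative coordinates: parent downtown, child a few kilometers away.
print(unsupervised((40.7128, -74.0060), (40.7614, -73.9776)))  # True
```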

In one or more embodiments, a request for media content may be sent by a device connected to that residence (e.g., a friend's home of the child user 120), such as a mobile phone. In some embodiments, the video control node and the media content server can be integrated or co-located in the same computing device, or distributed among several computing devices. The media content server or the video control node determines that a device at the same residence as the media device 118 is requesting media content, and the video control node is then informed of the request for media content to be presented to child user 120. The video control node determines, using the machine learning application, that the media content does not comply with the control rules. In some embodiments, a notification may then be sent to the parent user's media device 112 to warn that media content that does not comply with the control rules could be shown to child user 120. Parent user 114 may then send instructions via the media device 112 to the video control node 102 to prevent the presentation of the media content to child user 120 through the media device 110 associated with the residence. In other embodiments, the video control node 102 can be configured to prevent presenting the media content to child user 120 on any media device located within a threshold distance of the media device 118, without further instructions from parent user 114, even if the residence is not the child user's home (such as a friend's house). In other embodiments, parent user 114 can instruct the video control node that the media device at the residence is an exception to the control rules.

In one or more embodiments, the video control node can notify parent user 114 via the media device 112 that the media content requested by child user 120 is being presented. The parent user can instruct the video control node to send the requested media content to the media device 112 as well, which allows the parent to monitor the content without first having to give instructions on whether to prevent or allow its presentation to child user 120. Parent user 114 might want to monitor the media content if unfamiliar with it, or may wish to see the content in order to decide whether it is suitable for child user 120. The media content can be displayed on the media device 112 before, after, or simultaneously with the presentation of the media content to child user 120. In such embodiments, the parent user can monitor the media content and decide to give further instructions via the media device 112 to the video control node 102 to stop the presentation of the content to child user 120 at any time. For example, parent user 114 can monitor the media content on the media device 112 and notice that it suddenly displays a scene that would disturb child user 120. If the media content is presented to parent user 114 on a delay relative to child user 120, the parent user can send instructions to the video control node 102 either to stop showing the media content to child user 120 or to fast-forward past the disturbing scene. The video control node can then instruct the media content server 104 or the media device 118, according to the instructions from parent user 114, to stop the media content or not show a portion of it.

In some embodiments, upon determining that media content that is not in compliance with the control rules would otherwise be shown to child user 120, the media content server 104 can be instructed to present distraction media content instead. Distraction media content is media content that does comply with the control rules; additionally, it can be selected from the viewing history data associated with child user 120's media device 118. The distraction media content may be provided automatically whenever requested media content does not comply with the control rules. In other embodiments, parent user 114 receives a notification of the media content request and gives further instructions to the video control node 102 to provide, or cause to be provided, the distraction media content to the child user.
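Distraction-content selection could be as simple as substituting the first previously watched title that satisfies the control rules, as in the sketch below; the catalog entries and the compliance check are illustrative.

```python
# Substitute a compliant title from the child's own viewing history.
def pick_distraction(viewing_history, rules, complies):
    """Return the first previously watched title that satisfies the control rules."""
    for title, rating in viewing_history:
        if complies(rating, rules):
            return title
    return None  # nothing suitable; escalate to the parent user instead

history = [("Teen Drama", "TV-14"), ("Space Cartoons", "TV-Y")]
rules = {"max_rating": "TV-Y"}
print(pick_distraction(history, rules, lambda r, ru: r == ru["max_rating"]))  # Space Cartoons
```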

Click here to view the patent on Google Patents.

