WO2024159269A1 - Data communications network and method for administering automated censorship of shared online content - Google Patents

Data communications network and method for administering automated censorship of shared online content

Info

Publication number
WO2024159269A1
WO2024159269A1 (PCT/AU2024/050055)
Authority
WO
WIPO (PCT)
Prior art keywords
content
content item
data communications
proposed
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/AU2024/050055
Other languages
French (fr)
Inventor
Thomas Donaghey
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2023900217A
Application filed by Individual
Publication of WO2024159269A1
Legal status: Pending


Abstract

A data communications network including data communications devices and method of operating same for administering automated censorship of shared online content. One or more processors connected to the data communications network perform steps including establishing an account for a user requesting, via a data communications device, access to an interface for sharing online content; receiving a request from the data communications device to publish content in said interface that is viewable by other users, wherein proposed content is text, video, still or moving images, and/or audio recording(s); analysing, using an online artificial intelligence analysis tool, the proposed content to determine whether it contains illegal material according to pre-defined definitions, and if it does, preventing publication of the content in the interface and notifying the user via the data communications device of the proposed content's ineligibility, or if it does not contain illegal material, automatically publishing the content in the interface.

Description

DATA COMMUNICATIONS NETWORK AND METHOD FOR ADMINISTERING AUTOMATED CENSORSHIP OF SHARED ONLINE CONTENT
FIELD OF THE INVENTION
[0001] The present invention relates to a data communications network and a method of operating same to administer automated censorship of shared online content. In particular, the present invention provides a platform and associated interface that is accessible by user devices connected to the network to post (publish) content, the platform including automated censorship functionality that causes online content proposed for sharing to be automatically assessed according to predefined criteria relating to the presence of illegal content, and automatically published or prevented from publication based upon whether the predefined criteria is satisfied.
BACKGROUND OF THE INVENTION
[0002] With the proliferation of social media networks and the propensity of individuals to endorse, share or otherwise express opinions regarding social media content, there is an ever-growing volume of content (in the form of text, video, still or moving images, audio recordings, etc) that requires auditing by administrators before allowing content to be published. In this regard, administrators of social media platforms will typically administer their own censorship policies, and the level of content auditing will be dependent upon the terms of such policies. Human operators are often required to conduct such audits, however, the need for human operators to review content and enforce censorship policies prevents, or at least hinders, the publication of content resulting in frustration on behalf of users seeking to have their content published in substantially real-time.
[0003] The publication of content may not only be delayed (based upon the involvement of human operators) but may be prevented altogether if the content fails to satisfy a particular platform’s censorship policies. For example, online social media sharing platforms such as TikTok® prevent the publication of content that is identified as comprising political views, dangerous stunts, misogyny, hate speech, fat shaming, racism, etc. Many users view such interference by content administrators as frustrating and a limitation upon their ability to freely express themselves and share ideas. This problem is exacerbated when users spend a significant amount of time creating content, only to see their content rejected, or published for a very short period of time before their content is removed.
[0004] There are many content creators who consider that the only data that should be restricted from publication is that which comprises material containing illegal content. However, the censorship policies implemented by most social media platforms extend beyond the removal of solely illegal content. For example, content that seeks to question or challenge laws, rules or regulations endorsed by Government will often be considered to contravene censorship policies administered by most mainstream social media platforms. In addition, punitive action can often be taken against content creators who regularly post content that contravenes censorship policies, including preventing such users from earning advertising revenue, and banning users for a limited period of time or indefinitely.
[0005] Whilst there are presently platforms that administer less restrictive censorship regimes, the content posted will still typically be subject to audit by a human operator who must decide whether or not the content is appropriate for publication irrespective of whether it contains illegal content, hence such platforms are not truly uncensored in the sense that it remains possible for content other than that which contains illegal material to be removed or prevented from publication.
[0006] A technical problem also arises when content creators devote hours creating and posting content, typically using personal computers or portable data communications devices such as smartphones and tablets connected to a network, and human operators employed by social media organisations operate their own computing devices to assist in auditing the content and either accepting or rejecting publication of same. This consumes significant computer data processing and memory resources, as well as substantial data communications bandwidth, giving rise to significant computer and network resource usage. Inefficient use of the available resources, including bandwidth of data communications networks, is clearly undesirable since slow and/or unreliable data communications are frustrating (and expensive) for users. This problem is exacerbated during times of high network traffic which consume available bandwidth.
[0007] The present invention seeks to mitigate the problems discussed herein, or at least provide an alternative solution to existing data communications networks and methods for managing and reducing wasted resources arising from the submission of digital content and the review/audit of that content for adherence to one or more policies regarding published digital content.
[0008] The reference to any prior art in this specification is not, and should not be taken as, an acknowledgement or any suggestion, that the prior art forms part of the common general knowledge.
SUMMARY OF THE INVENTION
[0009] In one aspect, the present invention provides a data communications network including connected data communications devices and method of operating same, the method including, receiving, by one or more processors operably connected to the data communications network, details regarding a user seeking to establish a user account to access an interface for sharing online content, the request from a data communications device of the connected data communications devices, establishing, by the one or more processors, an account for the user based upon verification of said details, receiving, by the one or more processors, a request from the data communications device to publish one or more proposed content items in said interface that is viewable by one or more other users, to enable the one or more other users to view the published content item(s), wherein each proposed content item contains one or more of, text, video, still or moving images, and audio recording(s), analysing, by the one or more processors, using an online analytical processing tool that implements one or more artificial intelligence techniques, the proposed content item(s) to determine whether each proposed content item includes content containing illegal material according to pre-defined definitions, and based upon the online analytical processing tool assessing a content item proposed for publication as including any illegal material, preventing, by the one or more processors, the item from being published in the interface, and transmitting a notification to the data communications device associated with the user regarding the ineligibility of the proposed content item for publication, or based upon the online analytical processing tool assessing the content item proposed for publication as absent any illegal material, automatically publishing the content item based upon the eligibility of the content item for publication, such that the item is viewable by the one or more other users in the interface.
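The publish-or-reject flow recited above can be sketched in outline. All names below (ContentItem, analyse_for_illegal_material, the ILLEGAL_TERMS set) are illustrative assumptions only; the specification defines no concrete API, and a real online analytical processing tool would apply AI techniques rather than the simple keyword match used here as a stand-in.

```python
from dataclasses import dataclass

# Stand-in for the pre-defined definitions of illegal material (assumed terms).
ILLEGAL_TERMS = {"counterfeit", "pirated"}

@dataclass
class ContentItem:
    author: str
    text: str
    published: bool = False

def analyse_for_illegal_material(item: ContentItem) -> bool:
    """Stand-in for the online analytical processing tool: True when the
    proposed item matches any pre-defined illegal term."""
    words = item.text.lower().split()
    return any(term in words for term in ILLEGAL_TERMS)

def handle_publish_request(item: ContentItem) -> str:
    """Automatically publish eligible items; notify the user otherwise."""
    if analyse_for_illegal_material(item):
        return "notified: content ineligible for publication"
    item.published = True
    return "published"
```

An eligible item is published in substantially real-time without human review; an ineligible item triggers only a notification back to the requesting device.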
[0010] In an embodiment, the interface further enables the user to submit a request to publish a proposed content item, and to review the result of their request in substantially real-time, including whether the proposed content item is eligible or ineligible for publication.
[0011] In an embodiment, the proposed content item is partially complete such that the result provided to the user in substantially real-time indicates whether the partially completed content item is likely to be eligible, or ineligible, for publication upon completion.
[0012] Based on the result indicating that the proposed content item is, or is likely to be, ineligible for publication, the method preferably further includes providing, by the one or more processors for display in the interface, reasons justifying the result.
[0013] Based on the result indicating that the proposed content item is, or is likely to be, ineligible for publication, the method preferably further includes providing, by the one or more processors for display in the interface, one or more recommendations for editing the content item such that the content item edited in accordance with the one or more recommendations will be eligible for publication.
[0014] Based upon receiving a user request to publish one or more proposed content items, the method preferably further includes analysing, by the one or more processors, the one or more proposed content items to automatically determine one or more categories for each proposed content item, the categories according to one or more attributes including one or more of, author, location, subject, post, photo, video, and date of posting.
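The automatic categorisation step of [0014], with the fallback prompt of [0015], could be sketched as follows. The attribute names track the specification; representing an item as a plain dictionary is an illustrative assumption.

```python
# Attributes the specification names as candidate categories.
RECOGNISED_ATTRIBUTES = ("author", "location", "subject", "post", "photo", "video", "date")

def categorise(item: dict) -> list[str]:
    """Return the categories derivable from the item's populated attributes.
    An empty list signals that automatic categorisation failed and the user
    should be prompted to categorise the item manually ([0015])."""
    return [attr for attr in RECOGNISED_ATTRIBUTES if item.get(attr)]
```

For example, an item carrying only an author and a location would be filed under those two categories, while an item with no recognised attributes would trigger the manual-categorisation prompt.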
[0015] Based on a failure to automatically determine one or more categories for a proposed content item, the method preferably further includes prompting, by the one or more processors, the user to categorise the proposed content item, according to the one or more attributes.
[0016] In an embodiment, the method further includes, providing, by the one or more processors, a search facility enabling the one or more other users to conduct searches regarding posted content items of interest, including searching according to keywords.
[0017] In an embodiment, the search facility provides the one or more other users with the ability to select from a range of search filters, including filters according to content item categories.
[0018] In an embodiment, the method further includes, providing, by the one or more processors, a content item creation and editing facility enabling the user to create and edit content items.
[0019] In an embodiment, editing a content item includes any one or more of, adding content features, removing content features, controlling playback of video images using functions including fast forward, cue, review, rewind, and pause, and adjusting and/or improving a feature of the content item that affects viewing experience.
[0020] In an embodiment, adding a content feature includes adding one or more graphical representations within the content item, including the addition of any one or more of subtitles, emojis and/or music automatically recommended for inclusion in the content item or selected by the user from a library of graphical representations.
[0021] In an embodiment, the method further includes, conducting, by the one or more processors, an automated analysis of proposed content items for evidence that the content has previously been edited using third party software, and based on evidence indicating that the content has previously been edited using third party software, preventing, by the one or more processors, such content items from being published.
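One plausible reading of the third-party-editing check in [0021] is an inspection of an item's metadata for traces of external editing software. The KNOWN_EDITORS set and the "software" metadata field are illustrative assumptions; the specification does not fix how such evidence is detected.

```python
# Assumed identifiers of third-party editing tools (hypothetical names).
KNOWN_EDITORS = {"thirdpartyeditor", "externalfx"}

def edited_by_third_party(metadata: dict) -> bool:
    """Return True when the item's metadata names a known external editor."""
    software = str(metadata.get("software", "")).lower()
    return any(editor in software for editor in KNOWN_EDITORS)

def admit_for_publication(metadata: dict) -> bool:
    """Per [0021], reject items bearing evidence of third-party editing."""
    return not edited_by_third_party(metadata)
```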
[0022] In an embodiment, the method further includes, based on receiving a pre-defined number of requests from a user to publish content items that have been assessed as ineligible for publication, restricting or preventing the user from requesting publication of additional content items.
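The restriction of [0022] amounts to a per-user strike counter. The threshold value below is an illustrative assumption; the specification says only that the number is pre-defined.

```python
from collections import Counter

MAX_INELIGIBLE_REQUESTS = 3  # illustrative threshold; not fixed by the specification

class PublicationGate:
    """Tracks ineligible publication requests per user and restricts users
    who reach the pre-defined limit ([0022])."""

    def __init__(self) -> None:
        self._strikes: Counter = Counter()

    def record_ineligible(self, user_id: str) -> None:
        self._strikes[user_id] += 1

    def may_request_publication(self, user_id: str) -> bool:
        return self._strikes[user_id] < MAX_INELIGIBLE_REQUESTS
```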
[0023] In an embodiment, the one or more artificial intelligence techniques implemented by the online analytical processing tool to determine the presence of illegal content includes utilizing one or more of, ChatGPT, Google DeepMind, and image and/or character recognition.
[0024] In an embodiment, the assessment conducted by the online analytical processing tool regarding whether a proposed content item includes illegal content further utilizes one or more external resources to enable comparisons between the proposed content item and other published material.
[0025] In an embodiment, illegal content includes any one or more of, potential intellectual property infringements including copyright and trade mark infringements, inappropriate visual images depicting illegal activities, and statements that are considered defamatory and/or inciting violence.
[0026] In an embodiment, the method further includes, providing, by the one or more processors, a scheduling facility enabling users to schedule content item postings according to a preferred date and/or time, thereby enabling users to pre-schedule individual or multiple posts.
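The scheduling facility of [0026] can be sketched as a priority queue keyed on the preferred publication date/time. The class and method names are assumptions for illustration.

```python
import heapq
from datetime import datetime

class SchedulingFacility:
    """Min-heap of scheduled postings: items become publishable once their
    preferred date/time has passed, supporting single or multiple pre-scheduled
    posts per [0026]."""

    def __init__(self) -> None:
        self._queue: list = []
        self._counter = 0  # tie-breaker so items never compare directly

    def schedule(self, item: str, publish_at: datetime) -> None:
        self._counter += 1
        heapq.heappush(self._queue, (publish_at, self._counter, item))

    def due_items(self, now: datetime) -> list[str]:
        """Pop and return every item whose scheduled time has arrived."""
        due = []
        while self._queue and self._queue[0][0] <= now:
            due.append(heapq.heappop(self._queue)[2])
        return due
```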
[0027] In another aspect, the present invention provides a data communications network including one or more connected data communication devices including one or more computer processors configured to administer automated censorship of shared online content, including, receiving details regarding a user seeking to establish a user account to access an interface for sharing online content, the request from a data communications device of the connected data communications devices, establishing an account for the user based upon verification of said details, receiving a request from the data communications device to publish one or more proposed content items in said interface viewable by one or more other users, to enable the one or more other users to view the published content item(s), wherein each proposed content item contains one or more of, text, video, still or moving images, and audio recording(s), analysing, using an online analytical processing tool that implements one or more artificial intelligence techniques, the proposed content item(s) to determine whether each proposed content item includes content containing illegal material according to pre-defined definitions, and based upon the online analytical processing tool assessing a content item proposed for publication as including any illegal material, preventing the item from publication in the interface, and transmitting a notification to the data communications device associated with the user regarding the ineligibility of the proposed content item for publication, or based upon the online analytical processing tool assessing the content item proposed for publication as absent any illegal material, automatically publishing the content item in view of the eligibility of the content item for publication, such that the item is viewable by the one or more other users in the interface.
[0028] In a still further aspect, the present invention provides a non-transitory computer-readable medium including computer instruction code stored thereon that, when executed on a computer, causes one or more processors of the computer to perform the steps of, receiving details regarding a user seeking to establish a user account to access an interface for sharing online content, the request from a data communications device of the connected data communications devices, establishing an account for the user based upon verification of said details, receiving a request from the data communications device to publish one or more proposed content items in said interface viewable by one or more other users, to enable the one or more other users to view the published content item(s), wherein each proposed content item contains one or more of, text, video, still or moving images, and audio recording(s), analysing, using an online analytical processing tool that implements one or more artificial intelligence techniques, the proposed content item(s) to determine whether each proposed content item includes content containing illegal material according to pre-defined definitions, and based upon the online analytical processing tool assessing a content item proposed for publication as including any illegal material, preventing the item from publication in the interface, and transmitting a notification to the data communications device associated with the user regarding the ineligibility of the proposed content item for publication, or based upon the online analytical processing tool assessing the content item proposed for publication as absent any illegal material, automatically publishing the content item based upon the eligibility of the content item for publication, such that the item is viewable by the one or more other users in the interface.
BRIEF DESCRIPTION OF THE DRAWINGS
[0029] Embodiments of the invention will now be described in further detail with reference to the accompanying Figures in which:
[0030] Figure 1 provides an overview of a data communications network according to an embodiment of the present invention showing, in particular, the interaction of various network components;
[0031] Figure 2 illustrates a diagram associated with an exemplary server component of the network illustrated in Figure 1;
[0032] Figure 3 illustrates an exemplary flow diagram of a process that enables users to download and install a software application, and subsequently access, or register to use, the software application for interaction with the network illustrated in Figure 1, including for the purpose of creating user accounts and profiles;
[0033] Figure 4 illustrates an exemplary flow diagram of a process that enables content creators to create/edit and publish content based on a determination that the content does not comprise material that contains illegal content;
[0034] Figure 5 illustrates an exemplary flow diagram of a process that enables users seeking to access particular content to conduct a search using the software application and to receive search results, including relevant published content posted from a plurality of different users, in substantially real-time; and
[0035] Figure 6 illustrates an exemplary flow diagram of a process that enables a content creator to tag and/or categorise their content and to schedule publication of content items.
DETAILED DESCRIPTION OF EMBODIMENT(S) OF THE INVENTION
[0036] For simplicity and illustrative purposes, the present disclosure is described by referring to embodiment(s) thereof. In the following description, numerous specific details are set forth to provide a better understanding of the present disclosure. It will be readily apparent, however, that the current disclosure may be practiced without limitation to the specific details described in relation to embodiments of the invention. In other instances, some features have not been described in detail to avoid obscuring the present disclosure.
[0037] According to an embodiment, the present invention provides a computer-implemented data communications network and a method of operating the network to administer automated substantially real-time censorship of online content shared by users (30A) (also referred to herein as content creators) such that the content is viewable by one or more other users (30B) who have an interest in viewing such content, without the delay and frustration associated with conventional implementation of censorship policies. In this regard, only those content items determined to include illegal content will be prevented from being published. The network and method provide a platform that hosts a computer-executable software application (40), wherein the application (40) is accessible by a plurality of registered users (30), including content creators (30A) and content viewers (30B). In particular, the network utilises a central server (20) in communication with data communication devices (50) associated with users (30).
[0038] It will be appreciated that general reference to “users (30)” herein is intended to indicate a reference to either one or both of users (30A) and (30B).
[0039] The central server (20) maintains one or more processors and/or databases for performing functions, including receiving details regarding users (30A) seeking to establish a user account and gain access to an interface (170) for sharing online content (60), and establishing an account for the user (30A) based upon verification of such details. The users (30A) may subsequently request to publish one or more proposed content items (60) using interface (170), which will enable one or more other users (30B) to view the published content items (60). Each proposed content item (60) may contain one or more of text, video, still or moving images, and audio recordings. The central server (20), through the use of an online analytical processing tool, analyses the proposed content item(s) (60) to determine whether the item(s) (60) include any material containing illegal content according to definitions that are pre-defined in the online analytical processing tool. As described in greater detail below, the online analytical processing tool may utilise artificial intelligence (AI) techniques including machine learning (ML) and natural language processing (NLP) to increase the accuracy of detecting unlawful content, thereby protecting original material from unjustified censorship.
[0040] Based upon a determination that the content item(s) (60) do not include any illegal content, the content item(s) (60) may be automatically (and in substantially real-time) published based upon the eligibility of the content item for publication, such that the item(s) become viewable by the one or more other users (30B) conducting searches using a search interface (180). In the event the content item (60) proposed for publication is determined to include illegal content, the content item (60) is prevented from being published, and a notification is transmitted to a data communications device (50) associated with a user (30A) who shared the content regarding the ineligibility of the proposed content item (60) for publication.
[0041] The skilled person will appreciate that the above-described embodiment of the present invention provides a platform that removes the need for a human operator to review and apply a censorship policy, since the platform automatically processes content items (60) and assesses same according to well-defined criteria and only prevents publication in the event that content includes illegal material. Since it is solely content including illegal material that is prevented from publication, there are minimal limitations upon each user’s ability to freely express and share ideas which may otherwise not be possible in view of censorship policies implemented by human operators who assess content for various current social media/sharing sites.
[0042] Figure 1 is divided into Segments 200 to 600 which are further expanded in subsequent Figures 2 to 6, respectively. In particular, Segment 200 of Figure 1 shows the server component (20) with which the software application (40) operating on each data communication device (50) is configured to communicate. It will be apparent to the person skilled in the relevant field of technology that the software application (40) may be a mobile application or a web application and that, similarly, data communication devices (50) utilised by users (30) may be mobile devices or fixed location computing devices. Examples of mobile devices include mobile phones or computer tablets, and examples of fixed location computing devices include workstations or personal computers. The server component (20) is additionally detailed in Figure 2.
[0043] The skilled person will further appreciate that the steps described herein may be executed by the devices (50), wherein such operations are facilitated by the software application (40) operating on each device. According to another implementation of the present invention, the server (20) may be programmed to provide all, or most, of the processing functions described herein, where they cannot be provided locally on the user device (50) or where it may be commercially or technically impractical to implement such an arrangement. In other words, the steps described herein as performed by the device (50), or components thereof, may be associated with hardware located externally of the device (50) such as the remote central server (20) for example (i.e. in a distributed architecture). Different arrangements are possible in this regard, and alternate variations will be apparent to the person skilled in the relevant field of technology.
[0044] Segment 300 of Figure 1 shows a user (30), which may include either of user (30A) or (30B), downloading and installing the software application (40) and subsequently accessing the application (40) to establish a user account and profile, including submission of various details and preferences relating to their interaction with the software application (40), as further detailed in Figure 3. Segment 400 of Figure 1 illustrates an example interface (170) which may be utilised by users (30A) to create content items (60) and to also edit content items, including adding one or more additional representations (65) prior to the publication of content items (60), as further detailed in Figure 4. Segment 500 of Figure 1 illustrates a search interface (180) and a search results interface (190) which may be accessed by users (30B) who have an interest in viewing content published by other users (30A), including the ability to search by keyword, item category, etc, as further detailed in Figure 5. Finally, Segment 600 of Figure 1 illustrates additional interface (210) enabling users (30B) to tag (85) or create new categories (90) for tagging content items (60), and interface (220) enabling the scheduled publication (70) of content items (60) using a scheduling facility (95), as further detailed in Figure 6.
[0045] As mentioned above, Figure 2 depicts in greater detail Segment 200 of Figure 1 and, in particular, Figure 2 details the server component (20), which includes infrastructure upon which the platform of the present invention operates. The infrastructure may be local or cloud-based.
[0046] The central server (20) may operate one or more computer processors and maintain one or more databases to enable the following functionality and/or storage:
• User account register (100) storing user information and details uploaded by users (30) through interface (160) including, but not limited to, name, age, address, contact details, and any additional data which may be relevant for the purpose of identifying each user (30), as well as details relating to user preferences selected by individual users (30);
• Data processing functionality (105) for processing user input commands and received data to generate relevant outputs for display. For example, data processing functionality (105) may be responsible for administering a range of applications including, but not limited to, verifying details uploaded by users (30), analysing (using artificial intelligence techniques, including with respect to image and character recognition, implemented by the online analytical processing tool) proposed content items (60) to determine whether such items include material containing illegal content according to definitions that are pre-defined in the online analytical processing tool, as well as conducting automated analysis of proposed content items (60) to ensure they have not been edited using third-party software, as described in greater detail below;
• Content library (110) for storing created content in addition to additional tools and graphical representations that may be required to provide users (30A) with the ability to create and/or edit their content for publication (70), including additional graphical representations (65) such as subtitles, emojis, music, etc;
• Application programming interface (API) (120) to allow access to external resources to effect a range of functions, including enabling comparisons to be made between proposed content items (60) and other published material to further improve the assessment regarding whether proposed content includes illegal material by the online analytical processing tool; and
• Search functionality (130) enabling users (30B) to conduct searches regarding posts (80) by users (30A) including by keyword, category, etc, and for generating and presenting relevant search results.
[0047] Figure 2 also depicts server (20) configured to enable communication (140) with the user devices (50) and, in particular, the software application (40) operating on each user device (50). Such communications may occur via the internet or other similar data communications network.
[0048] Figure 3 illustrates in greater detail Segment 300 of Figure 1 and in particular, the steps associated with a user (30) installing the application (40) which may be achieved by downloading the application (40) from an application store. Each user (30) may create an account using the application (40) and the account information may be stored in the user account register (100). As described above, the user account register (100) may capture information sufficient to enable each user (30) to be correctly identified, and such details may be subsequently verified using the data processing functionality (105) associated with the central server (20), which may also access one or more external resources using the API functionality (120) for the purpose of verifying particular data received from users (30) including identification details and the like.
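The account-verification step above can be sketched as a two-stage check: required details must be present, and an external resource (reached through the API functionality (120)) must confirm them. The required-field list and the pluggable external_check callable are illustrative assumptions.

```python
# Details the user account register (100) is described as capturing.
REQUIRED_DETAILS = ("name", "age", "address", "contact")

def verify_details(details: dict, external_check=lambda d: True) -> bool:
    """An account is established only when all required details are present
    and the (assumed) external verification resource confirms them."""
    if not all(details.get(field) for field in REQUIRED_DETAILS):
        return False
    return external_check(details)
```

In practice external_check would wrap a call to an identity-verification service; here it defaults to a no-op so the sketch stays self-contained.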
[0049] Verification of user details may also be assisted using one or more artificial intelligence techniques giving rise to stronger security and user legitimacy. To secure user data, the network may incorporate strong encryption and anonymization methods into the artificial intelligence processing levels such that strict audit trails and access controls may be used to prevent and detect data breaches.
[0050] Based upon registering with the software application (40), users (30) may be requested to specify whether they are content creators (30A) or users (30B) who have an interest in accessing and viewing published content (80), including content items (60) that have previously been prepared by users (30A) and authorized for publication by the platform according to a successful assessment described herein. In this regard, where users (30A) acknowledge that they are content creators, such users (30A) may be directed to interface (170) shown in Figure 4, and where a user (30B) acknowledges they are using the platform for content viewing, such users (30B) may be directed to a search interface (180), as shown in Figure 5.
[0051] The process of installing the application (40) is indicated by arrow (150), and interface (160) allows each user to download and install the application (40) to access the functionality thereof, including creating and maintaining a user account and profile. In other words, once the application (40) has been accessed by a user (30), the user (30) may be presented with an interface, identical or similar to interface (160), to allow the user (30) to create their profile, including providing the user (30) with the ability to add/edit details and access functionality of the application (40) once their details have been verified. When establishing a user profile, each user (30) may also be requested to select preferences relating to their future interaction with the software application (40) including preferences with respect to privacy.
[0052] Figure 4 shows in greater detail Segment 400 of Figure 1 and, in particular, a content item creation/editing interface (170) that provides users (30A) with the ability to create and edit content items (60), including text, video, still or moving images, audio recordings, etc, and to submit requests to publish such items (60). The interface (170) is also useful since it allows users (30A) to review the results of their requests in substantially real-time, including whether proposed items (60) are eligible or ineligible for publication.
[0053] The editing facility may further provide users (30A) with the ability to control playback of video images with functions including fast-forward, queue, review, rewind and pause, and additional editing facilities may include tools to adjust and/or improve the viewing experience regarding posted video items (60). The editing facility also enables users (30A) to include additional graphical representations within their content, including the addition of subtitles, emojis, music, etc. Such additional graphical representations may be selected from content library (110).
[0054] Editing may also be improved by implementing artificial intelligence techniques that can be used to generate recommendations for improvements or alterations to user-generated content, such as appropriate music or emojis for users (30A) to select, or subtitle optimization, thereby improving the effectiveness and appeal of user content and increasing user engagement. In addition, regular feedback may be generated and provided to the user during content creation where elements introduced into the content fail to satisfy the content eligibility requirements.
[0055] Modifications to content may be recommended in order to ensure the content adheres to platform guidelines and legal requirements. In this way, content creators (30A) may be provided with warning notifications or the like stating that their content is likely to be rejected, which may cause the user (30A) to cease further work on the content or remove a particular problematic feature from the content, prior to committing additional time and effort only to have their content ultimately rejected. The recommendations may also be accompanied by personalized notifications including justification for the recommendation provided. This may include, for example, justification along the lines that the use of a particular word or image included in the content is likely to cause the content to be rejected in view of a particular law, regulation or guideline.
[0056] Where a platform administrator prefers that content items (60) submitted for publication are original and have not been edited using third-party software, the data processing functionality (105) may cause an automated analysis of proposed content items (60) to be conducted in order to determine whether the content has been edited using third-party software, and in the event of determining such editing, preventing such content items (60) from being published. Once again, the use of artificial intelligence may improve this process by identifying minute indicators of third-party modification with more accuracy and thereby preserving content integrity by preventing the publication of, for example, content that has been altered or published without authorization.
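One way the automated originality check described in paragraph [0056] could be approximated is by inspecting a content item's metadata for signatures left by editing software. The marker list and metadata schema below are placeholder assumptions for illustration; a production system would rely on far subtler indicators:

```python
# Hypothetical markers that third-party editors might leave in metadata.
THIRD_PARTY_MARKERS = {"Adobe Premiere", "Final Cut", "CapCut"}


def edited_by_third_party(metadata: dict) -> bool:
    """Return True if the item's metadata names a known third-party editor."""
    software = metadata.get("editing_software", "")
    return any(marker in software for marker in THIRD_PARTY_MARKERS)


def may_publish(metadata: dict) -> bool:
    # A platform preferring original content blocks third-party-edited items.
    return not edited_by_third_party(metadata)
```

Artificial intelligence techniques, as noted above, could replace this simple signature check with models trained to recognise minute visual or structural indicators of modification.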
[0057] In the event the online analytical processing tool determines that a content item (60) proposed for publication includes illegal material, such items (60) will be prevented from publication in the interface (170). As mentioned above, a notification may be transmitted to the user (30A) regarding the ineligibility of the proposed content item (60) for publication. However, in the event the content item (60) is determined to be eligible for publication, the interface (170) may provide users (30A) with confirmation that the item satisfies publication requirements and is eligible for publication.
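The publish-or-reject decision in paragraph [0057] can be sketched as a simple routing function. The `classifier`, `notify` and `publish` callables are illustrative placeholders for the online analytical processing tool, the user notification channel, and the publication step respectively:

```python
from enum import Enum


class Verdict(Enum):
    PUBLISHED = "published"
    REJECTED = "rejected"


def handle_submission(content_item: str, classifier, notify, publish) -> Verdict:
    """Route a proposed content item (60) through the assessment step.

    `classifier` stands in for the online analytical processing tool and
    returns True when illegal material is detected in the item.
    """
    if classifier(content_item):
        # Ineligible: block publication and notify the submitting user (30A).
        notify("Your proposed content item is ineligible for publication.")
        return Verdict.REJECTED
    # Eligible: publish automatically so other users (30B) can view it.
    publish(content_item)
    return Verdict.PUBLISHED
```

The same function shape accommodates a confirmation message to the user on success, as the interface (170) provides.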
[0058] In the event a pre-defined number of requests from a user (30A) to publish content items (60) are assessed as ineligible for publication, there may be adverse consequences for the user (30A) including, for example, restricted access to, or deactivation of, the user's account. Using artificial intelligence techniques to analyse user behaviour patterns may make it easier to detect repeat offenders or malevolent actors, thereby enhancing platform safety and reducing the prospects of unlawful content being disseminated.

[0059] It is to be understood that illegal material may include one or more of potential copyright or trade mark infringement, inappropriate visual images depicting illegal activities, statements that are considered defamatory and/or are inciting violence, etc. When a content item (60) is created by a user (30A), which may be achieved by uploading content or by using hardware associated with their data communications device (e.g. using an image/video capture device associated with their data communications device (50)), the data processing functionality (105) associated with server (20) may utilise the online analytical processing tool to assess the proposed content using pre-emptive technology for scanning the items (60) and enabling an assessment thereof (i.e. utilising visual and character recognition techniques).
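The pre-defined rejection threshold described in paragraph [0058] can be sketched as a per-user strike counter. The threshold value and class name here are illustrative assumptions; the specification leaves the number pre-defined by the platform:

```python
from collections import defaultdict


class StrikePolicy:
    """Track ineligible publication requests per user (30A)."""

    def __init__(self, limit: int = 3):  # illustrative default threshold
        self.limit = limit
        self.rejections = defaultdict(int)

    def record_rejection(self, user_id: str) -> bool:
        """Record one ineligible request; return True when the account
        should now be restricted or deactivated."""
        self.rejections[user_id] += 1
        return self.rejections[user_id] >= self.limit
```

A behaviour-analysis model could feed into the same decision point, e.g. by lowering the effective limit for accounts exhibiting suspicious patterns.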
[0060] In this regard, artificial intelligence techniques, such as ChatGPT and Google DeepMind, may be utilised in order to process enormous volumes of data (in any language) and better interpret sentiment, context, linguistic nuances, and subtle language that may exist in created content, which will decrease false positives and negatives during the process of content filtering. In this way, the accuracy of detecting unlawful or unsuitable content will improve. By comprehending intricate patterns and subtleties in text, video, still/moving images, and audio recordings, AI technologies can improve the process of content analysis and classification, including by spotting minute clues or suggestions of illicit content.
[0061] Further, by automating and streamlining the decision-making process for content release to provide near real-time analysis and response, publication delays are reduced, and platform effectiveness and user satisfaction are improved.
[0062] Where necessary, such an assessment may be facilitated by accessing API functionality (120) to engage with one or more external resources to enable relevant comparisons to be made with other published material, thereby improving the reliability of the assessment provided by the online analytical processing tool responsible for making a determination regarding the presence, or otherwise, of illegal material in content.
[0063] Figure 5 shows in greater detail Segment 500 and, in particular, a search interface (180) providing a searching facility to users (30B), thereby enabling users (30B) to conduct searches regarding content posts (80) by users (30A) including by keyword searching. Users (30B) may select from a range of filtering criteria for displaying search results, including one or more of exact matches, similar matches, date of posted content, author/influencer, location (75) of the content and/or user (30), type of item posted (e.g. video only, text/image only), etc. It will be appreciated that based upon such searching, users (30B) may be presented with search results in a search results interface (190) that provides details of every post (80) that is relevant to the search query which has been published (shared) by a plurality of different users (30A), in substantially real-time.
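The filtering criteria listed in paragraph [0063] can be sketched as a simple composable filter over published posts (80). The post schema (dictionary keys) below is an illustrative assumption, not one fixed by the specification:

```python
def search_posts(posts, keyword=None, author=None, location=None, item_type=None):
    """Filter published posts (80) by the criteria of the search interface (180).

    Each post is assumed to be a dict with 'text', 'author', 'location'
    and 'type' keys. Any criterion left as None is not applied.
    """
    results = posts
    if keyword is not None:
        results = [p for p in results if keyword.lower() in p["text"].lower()]
    if author is not None:
        results = [p for p in results if p["author"] == author]
    if location is not None:
        results = [p for p in results if p["location"] == location]
    if item_type is not None:
        results = [p for p in results if p["type"] == item_type]
    return results
```

An AI-powered implementation, as discussed below in the description, would replace the exact keyword match with semantic relevance ranking.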
[0064] The searching facility may be improved by the use of AI-powered search algorithms to improve search relevancy and accuracy, particularly when filtering and seeking to comprehend new search queries. Similar techniques may also be used to improve the searching for and location of relevant results, improving content discoverability.
[0065] The selection of particular posts by users (30B) may also give rise to interactions between users (30B) who view the same content, or between the viewing users (30B) and content creators (30A). In this regard, the search facility may also include a live chat functionality that enables users (30) to engage in substantially real-time communications with other users, including utilising any available form of communications such as text, emojis, GIF functions, and live video links. Such chats may be moderated using artificial intelligence techniques that ensure that platform policies with respect to user interactions are satisfied, thereby preserving a polite and secure online atmosphere for users.
[0066] Figure 6 depicts in greater detail Segment 600 of Figure 1 and, in particular, an example tagging/categorization interface (210) which enables users (30A) who create and publish content to tag (85) their content using existing search categories based upon one or more attributes of the content item (60), including but not limited to location, subject, post, photo, video, and date of posting. Such users (30A) may also be prompted to create new categories (90) where the existing categories may be insufficient to enable the user (30A) to appropriately tag (85) a particular content item (60). Alternatively, proposed content items (60) may be automatically categorised according to detected attributes that are automatically identifiable in association with the proposed content item (60). In this regard, artificial intelligence may be utilized to classify content automatically according to attributes and context, thereby simplifying and improving the efficiency of the categorization and enhancing the user experience. New content may be more accurately classified for improved organization.
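The automatic categorisation with a manual fallback described in paragraph [0066] can be sketched as follows; the attribute keys are illustrative assumptions, and an empty result signals that the user should be prompted to tag the item or create a new category (90):

```python
def categorise(item: dict, known_categories: set) -> list:
    """Derive categories for a content item (60) from detected attributes.

    Returns the matched categories, sorted; an empty list means automatic
    categorisation failed and the user (30A) should be prompted to tag
    the item manually via the tagging interface (210).
    """
    detected = {item.get("location"), item.get("subject"), item.get("media_type")}
    return sorted(c for c in detected if c in known_categories)
```

A model-based classifier would widen `detected` beyond explicit attributes to context inferred from the content itself.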
[0067] A scheduling interface (220) is also shown in Figure 6 which provides a scheduling facility (95) that enables users (30A) to schedule content item posting according to a preferred date and/or time, which also enables users (30A) to pre-schedule individual posts or multiple posts according to a “bulk schedule” function. The scheduling facility may also be made viewable to users (30B) seeking to view content, and the preference as to whether, and to what extent, to share schedules regarding publication of content may be based on preferences selected by the content creators (30A) via the software application (40). The facility (95) may also utilise artificial intelligence to enhance scheduling and the provision of recommendations to users (30A), including to identify user engagement peaks and recommend the best publishing times for particular content, thereby increasing content exposure and interaction.
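The scheduling facility (95), including the "bulk schedule" function, can be sketched as a time-ordered queue from which posts are released once their preferred publication time has passed. The class and method names are illustrative assumptions:

```python
import heapq
from datetime import datetime


class Scheduler:
    """Sketch of the scheduling facility (95): posts queued by preferred time."""

    def __init__(self):
        self._queue = []  # min-heap of (publish_time, post_id)

    def schedule(self, when: datetime, post_id: str) -> None:
        heapq.heappush(self._queue, (when, post_id))

    def schedule_bulk(self, entries) -> None:
        # The "bulk schedule" function: many (when, post_id) pairs at once.
        for when, post_id in entries:
            self.schedule(when, post_id)

    def due(self, now: datetime) -> list:
        """Release every post whose scheduled time has arrived."""
        released = []
        while self._queue and self._queue[0][0] <= now:
            released.append(heapq.heappop(self._queue)[1])
        return released
```

An engagement-aware variant could adjust each `when` toward recommended peak-usage times before queuing.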
[0068] The incorporation of artificial intelligence techniques as described herein will significantly improve the overall efficiency of the data communications network, the user experience, and moderation and content analysis capabilities, freeing up human resources for more complicated decisions requiring nuanced judgement. In this way, the data communications network will handle growth based on an increasing number of registered users without requiring a corresponding increase in the amount of resources allocated. Such techniques will also prepare the platform for new difficulties and problems that may arise in relation to internet content sharing and censorship. Furthermore, the scalability and adaptability of the platform is improved through the use of such techniques, such that the platform is engaged in learning and automatic change based on changes in user behaviour, user expectations, and content trends.
[0069] In order to ensure that content moderation complies with current laws and rules, the network may include a means of interpreting and implementing ongoing legal and regulatory changes, which may be automatically incorporated into the decision-making algorithms described herein. Changes to content policies may be automatically generated and implemented, or recommendations along these lines may be automatically issued to platform administrators.

[0070] Real-time usage statistics may be used to optimize resource allocation and system performance to ensure optimal performance even during peak usage times, e.g. by leveraging Google DeepMind's sophisticated analytics to predict system load and dynamically assign resources. Insights into user behaviour, content trends, and system performance may also be generated such that information on user activities, content submissions, and moderation results may be provided for visual display, including via dashboards and reports, for the purpose of strategic planning and platform enhancement.
[0071] As used herein, the term “server”, “computer”, “computing system” or the like may include any processor-based or microprocessor-based system including those utilizing microcontrollers, reduced instruction set computers (RISC), application specific integrated circuits (ASICs), logic circuits, and any other circuit or processor including hardware, software, or a combination thereof capable of executing the functions described herein. Such are exemplary only, and are thus not intended to limit in any way the definition and/or meaning of such terms.
[0072] The one or more processors as described herein are configured to execute a set of instructions that are stored in one or more data storage units or elements (such as one or more memories), in order to process data. For example, the one or more processors may include or be coupled to one or more memories. The data storage units may also store data or other information as desired or needed. The data storage units may be in the form of an information source or a physical memory element within a processing machine.
[0073] The set of instructions may include various commands that instruct the one or more processors to perform specific operations such as the methods and processes of the various embodiments of the subject matter described herein. The set of instructions may be in the form of a software program. The software may be in various forms such as system software or application software. Further, the software may be in the form of a collection of separate programs, a program subset within a larger program or a portion of a program. The software may also include modular programming in the form of object-oriented programming. The processing of input data by the processing machine may be in response to user commands, or in response to results of previous processing, or in response to a request made by another processing machine.

[0074] The diagrams of embodiments herein illustrate one or more control or processing units. It is to be understood that the processing or control units may represent circuits, circuitry, or portions thereof that may be implemented as hardware with associated instructions (e.g., software stored on a tangible and non-transitory computer readable storage medium, such as a computer hard drive, ROM, RAM, or the like) that perform the operations described herein. The hardware may include state machine circuitry hardwired to perform the functions described herein. Optionally, the hardware may include electronic circuits that include and/or are connected to one or more logic-based devices, such as microprocessors, processors, controllers, or the like.
[0075] Optionally, the one or more processors may represent processing circuitry such as one or more of a field programmable gate array (FPGA), application specific integrated circuit (ASIC), microprocessor(s), and/or the like. The circuits in various embodiments may be configured to execute one or more algorithms to perform functions described herein. The one or more algorithms may include aspects of embodiments disclosed herein, whether or not expressly identified in the figures or a described method.
[0076] It will be appreciated by persons skilled in the relevant field of technology that numerous variations and/or modifications may be made to the invention as detailed in the embodiments without departing from the spirit or scope of the invention as broadly described. The present embodiments are, therefore, to be considered in all aspects as illustrative and not restrictive.
[0077] Throughout this specification and claims which follow, unless the context requires otherwise, the word “comprise”, and variations such as “comprises” and “comprising”, will be understood to imply the inclusion of a stated feature or step, or group of features or steps, but not the exclusion of any other feature or step or group of features or steps.

Claims

The claims defining the invention are as follows:
1. A data communications network including connected data communications devices and method of operating same, the method including:
receiving, by one or more processors operably connected to the data communications network, details regarding a user seeking to establish a user account to access an interface for sharing online content, the request from a data communications device of the connected data communications devices;
establishing, by the one or more processors, an account for the user based upon verification of said details;
receiving, by the one or more processors, a request from the data communications device to publish one or more proposed content items in said interface that is viewable by one or more other users, to enable the one or more other users to view the published content item(s), wherein each proposed content item contains one or more of: text, video, still or moving images, and audio recording(s);
analysing, by the one or more processors, using an online analytical processing tool that implements one or more artificial intelligence techniques, the proposed content item(s) to determine whether each proposed content item includes content containing illegal material according to pre-defined definitions; and
based upon the online analytical processing tool assessing a content item proposed for publication as including any illegal material, preventing, by the one or more processors, the item from being published in the interface, and transmitting a notification to the data communications device associated with the user regarding the ineligibility of the proposed content item for publication, or
based upon the online analytical processing tool assessing the content item proposed for publication as absent any illegal material, automatically publishing the content item based upon the eligibility of the content item for publication, such that the item is viewable by the one or more other users in the interface.
2. A data communications network according to claim 1, wherein the interface further enables the user to submit a request to publish a proposed content item, and to review the result of their request in substantially real-time, including whether the proposed content item is eligible or ineligible for publication.
3. A data communications network according to claim 2, wherein the proposed content item is partially complete such that the result provided to the user in substantially real-time indicates whether the partially completed content item is likely to be eligible or ineligible for publication upon completion.
4. A data communications network according to either claim 2 or claim 3, wherein according to the result indicating that the proposed content item is, or is likely to be, ineligible for publication, providing, by the one or more processors for display in the interface, reasons justifying the result.
5. A data communications network according to any one of claims 2 to 4, wherein according to the result indicating that the proposed content item is, or is likely to be, ineligible for publication, providing, by the one or more processors for display in the interface, one or more recommendations for editing the content item such that the content item edited in accordance with the one or more recommendations will be eligible for publication.
6. A data communications network according to any one of the preceding claims, further including: based upon receiving a user request to publish one or more proposed content items, analysing, by the one or more processors, the one or more proposed content items to automatically determine one or more categories for each proposed content item, the categories based on one or more attributes including one or more of: author, location, subject, post, photo, video, and date of posting.
7. A data communications network according to claim 6, wherein based on a failure to automatically determine one or more categories for a proposed content item, prompting, by the one or more processors, the user to categorise the proposed content item, the prompt requiring the user to categorise according to the one or more attributes.
8. A data communications network according to either claim 6 or claim 7, further including: providing, by the one or more processors, a search facility that enables the one or more other users to conduct searches regarding posted content items of interest, including based on a keyword search.
9. A data communications network according to claim 8, wherein the search facility provides the one or more other users with the ability to select from a range of search filters, including filters according to the content item categories.
10. A data communications network according to any one of the preceding claims, further including: providing, by the one or more processors, a content item creation and editing facility that enables the user to create and edit content items.
11. A data communications network according to claim 10, wherein editing a content item includes one or more of: adding content features; removing content features; controlling playback of video images utilizing functions including any one or more of fast-forward, queue, review, rewind, and pause; and adjusting and/or improving a feature of the content item that affects a user’s viewing experience.
12. A data communications network according to claim 11, wherein adding a content feature includes adding one or more graphical representations within the content item, including the addition of any one or more subtitles, emojis and/or music automatically recommended for inclusion in the content item or selected by the user from a library of graphical representations.
13. A data communications network according to any one of the preceding claims, further including: conducting, by the one or more processors, an automated analysis of proposed content items for evidence that the content has previously been edited using third party software, and based on the evidence indicating that the content has previously been edited using third party software, preventing, by the one or more processors, such content items from being published.
14. A data communications network according to any one of the preceding claims, further including: based on receiving a pre-defined number of requests from a user to publish content items that have been assessed as ineligible for publication, restricting or preventing the user from requesting publication of additional content items.
15. A data communications network according to any one of the preceding claims, wherein the one or more artificial intelligence techniques implemented by the online analytical processing tool includes determining the presence of illegal content using one or more of: ChatGPT, Google DeepMind, and image and/or character recognition.
16. A data communications network according to claim 15, wherein the assessment conducted by the online analytical processing tool regarding whether a proposed content item includes illegal content further utilizes one or more external resources to enable comparisons between the proposed content item and other published material.
17. A data communications network according to any one of the preceding claims, wherein illegal content includes one or more of: potential intellectual property infringements including copyright and trade mark infringements, inappropriate visual images depicting illegal activities, and statements considered defamatory and/or inciting violence.
18. A data communications network according to any one of the preceding claims, further including: providing, by the one or more processors, a scheduling facility that enables the user to schedule content item postings according to a preferred date and/or time, thereby enabling the user to pre-schedule individual or multiple posts.
19. A data communications network including one or more connected data communication devices including one or more computer processors configured to administer automated censorship of shared online content, including:
receiving details regarding a user seeking to establish a user account to access an interface for sharing online content, the request from a data communications device of the connected data communications devices;
establishing an account for the user based upon verification of said details;
receiving a request from the data communications device to publish one or more proposed content items in said interface viewable by one or more other users, to enable the one or more other users to view the published content item(s), wherein each proposed content item contains one or more of: text, video, still or moving images, and audio recording(s);
analysing, using an online analytical processing tool that implements one or more artificial intelligence techniques, the proposed content item(s) to determine whether each proposed content item includes content containing illegal material according to pre-defined definitions; and
based upon the online analytical processing tool assessing a content item proposed for publication as including any illegal material, preventing the item from publication in the interface, and transmitting a notification to the data communications device associated with the user regarding the ineligibility of the proposed content item for publication, or
based upon the online analytical processing tool assessing the content item proposed for publication as absent any illegal material, automatically publishing the content item in view of the eligibility of the content item for publication, such that the item is viewable by the one or more other users in the interface.
20. A non-transitory computer-readable medium including computer instruction code stored thereon that, when executed on a computer, causes one or more processors of the computer to perform the steps of:
receiving details regarding a user seeking to establish a user account to access an interface for sharing online content, the request from a data communications device of the connected data communications devices;
establishing an account for the user based upon verification of said details;
receiving a request from the data communications device to publish one or more proposed content items in said interface viewable by one or more other users, thereby enabling the one or more other users to view the published content item(s), wherein each proposed content item contains one or more of: text, video, still or moving images, and audio recording(s);
analysing, using an online analytical processing tool that implements one or more artificial intelligence techniques, the proposed content item(s) to determine whether each proposed content item includes content containing illegal material according to pre-defined definitions; and
based upon the online analytical processing tool assessing a content item proposed for publication as including any illegal material, preventing the item from publication in the interface, and transmitting a notification to the data communications device associated with the user regarding the ineligibility of the proposed content item for publication, or
based upon the online analytical processing tool assessing the content item proposed for publication as absent any illegal material, automatically publishing the content item based upon the eligibility of the content item for publication, such that the item is viewable by the one or more other users in the interface.
PCT/AU2024/050055 | Priority date: 2023-01-31 | Filing date: 2024-01-31 | Data communications network and method for administering automated censorship of shared online content | Status: Pending | WO2024159269A1

Applications Claiming Priority (2)

Application Number | Priority Date | Title
AU2023900217A0 | 2023-01-31 | System and method of administering automated censorship of shared online content
AU2023900217 | 2023-01-31 |

Publications (1)

Publication Number | Publication Date
WO2024159269A1 | 2024-08-08

Family ID: 92145530

Citations (7)

* Cited by examiner, † Cited by third party

Publication Number | Priority Date | Publication Date | Assignee | Title
US20150295870A1* | 2012-12-27 | 2015-10-15 | Tencent Technology (Shenzhen) Co., Ltd. | Method, apparatus, and system for shielding harassment by mention in user generated content
US20190160382A1* | 2017-11-27 | 2019-05-30 | Sony Interactive Entertainment America LLC | Shadow banning in social VR setting
US10599774B1* | 2018-02-26 | 2020-03-24 | Facebook, Inc. | Evaluating content items based upon semantic similarity of text
US20200364727A1* | 2019-05-13 | 2020-11-19 | Google LLC | Methods, systems, and media for identifying abusive content items
US20210058352A1* | 2019-08-22 | 2021-02-25 | Facebook, Inc. | Notifying users of offensive content
US11019015B1* | 2019-08-22 | 2021-05-25 | Facebook, Inc. | Notifying users of offensive content
WO2022043675A2* | 2020-08-24 | 2022-03-03 | Unlikely Artificial Intelligence Limited | A computer implemented method for the automated analysis or use of data


Similar Documents

Publication | Title
Sanchez et al. | The ethical concerns of artificial intelligence in urban planning
Morley et al. | From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices
Ma et al. | "How advertiser-friendly is my video?": YouTuber's Socioeconomic Interactions with Algorithmic Content Moderation
Diakopoulos | Algorithmic accountability: Journalistic investigation of computational power structures
Li | Algorithmic destruction
US20240249023A1 (en) | System and method of automated determination of use of sensitive information and corrective action for improper use
US10528611B2 (en) | Detecting, classifying, and enforcing policies on social networking activity
US20160321582A1 (en) | Device, process and system for risk mitigation
US11379424B2 (en) | Edit interface in an online document system
US8881307B2 (en) | Electronic file security management platform
US11853701B2 (en) | Method for recommending and implementing communication optimizations
Bradford et al. | Report of the Facebook data transparency advisory group
Chen | The making of a neo-propaganda state: China's social media under Xi Jinping
US20250200219A1 (en) | Delegated signing using sensitivity classification
Kaushik et al. | "How I Know For Sure": People's Perspectives on Solely Automated Decision-Making (SADM)
Matias | Influencing recommendation algorithms to reduce the spread of unreliable news by encouraging humans to fact-check articles, in a field experiment
Contini | Unboxing generative AI for the legal professions: functions, impacts and governance
Rajput et al. | Content moderation framework for the LLM-based recommendation systems
Espinosa | Unveiling the features of a regulatory system: The institutional grammar of tobacco legislation in Mexico
Sullivan-Paul | How would ChatGPT vote in a federal election? A study exploring algorithmic political bias in artificial intelligence
Nag | Navigating ethical dilemmas in generative AI: Case studies and insights
WO2024159269A1 (en) | Data communications network and method for administering automated censorship of shared online content
Li et al. | Reducing organizational inequalities associated with algorithmic controls
Giacalone | AI and the Future of Private Dispute Resolution Mechanisms
Sullivan | Algorithmic governance of 'terrorism' and 'violent extremism' online

Legal Events

Date | Code | Title | Description
 | 121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 24749425; Country of ref document: EP; Kind code of ref document: A1
 | NENP | Non-entry into the national phase | Ref country code: DE

