FIELD
The present invention relates to apparatus, systems and methods for big data analysis that score the reliability of online information with high efficiency and real-time availability of the processed information.
BACKGROUND
Consumers regularly consult information relating to almost any topic on the World Wide Web. However, a large volume of information of unpredictable quality is returned to the consumer. In order to qualify this information, different technologies have been developed.
For example, U.S. Patent Application Publication No. 2009/0125382 provides an indication of a data source's accuracy with respect to past expressed opinions. The data source is assigned prediction scores based on the verified credibility of historical documents. A reputation score is assigned to a new document as a function of the prediction scores of the historical documents, data source affiliations, document topics and other parameters.
Another example is U.S. Pat. No. 7,249,380, which provides a model to evaluate trust, and the transitivity of trust, of online services. The trust attributes are grouped into three categories, relating to the content, the owner of the web document, and the relationships between the web document and certificate authorities.
U.S. Pat. No. 7,809,721 describes a system for ranking data that involves three calculations: first, a quantitative semantic similarity score reflecting the relevancy of a particular location to the query; second, a general quantitative score comprising a semantic similarity score, a distance score and a rating score; third, the addition of the quantitative semantic similarity score and the general quantitative score to obtain a vector score.
However, all the existing methods work in a passive mode: the calculation is carried out only when a query is launched. These technologies take a long time to calculate and are not optimized for the dynamic updating of information on the World Wide Web. With the rapid development of social networks in particular, all users can constantly update information by broadcasting comments in all types of multimedia.
Technical difficulties arise from the huge number of data and information sources that have to be taken into account in order to calculate a relevant score, with the additional difficulty that the scope of information changes continuously. Calculating the score on the fly for a document requested by a user would require too many resources and too much calculation time. Conversely, attributing a score to every document that might be requested by a user, and refreshing all these scores every time a new document or piece of information becomes available, is also too complicated.
SUMMARY
The present invention dynamically provides the reliability of multimedia documents by applying a series of intrinsic criteria and extrinsic criteria. The pre-calculated reliability scores of a subset of the existing multimedia documents are stored in a database, so customers simply retrieve scores that have already been pre-calculated; this is less time consuming than triggering the calculation process over the full set of existing documents. These scores can be updated as various sources publish new content, including comments from social networks and communities. Additionally, the subsets of multimedia documents are cross-checked among different sources.
The inventive subject matter provides apparatus, systems and methods in which multimedia documents are attributed a reliability score. One aspect of the inventive subject matter includes a method to provide a customer with at least one multimedia document associated with a reliability score calculated by applying a first category of intrinsic criteria and a second category of extrinsic criteria. The method comprises first steps of pre-calculating the reliability score for at least a set of multimedia documents of at least one pre-selected source of documents, second steps of updating this reliability score by applying the extrinsic criteria, and a last step of providing, in response to a customer's request, the multimedia documents from the pre-selected sources associated with the updated score, together with the multimedia documents from other sources associated with a conditionally calculated score.
A calculation of the score for the multimedia documents from the other sources is activated by an action from the customer. It can also be activated by the detection of at least one request coming from the customer device for multimedia documents that do not yet have a pre-calculated score. In addition, the scores of multimedia documents coming from the other sources are pre-calculated when a threshold of interest is reached. The pre-calculation of the scores of the multimedia documents within a pre-selected source is prioritized according to an interest indicator, weighted by the number of requests and a measure of engagement.
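As an illustration of this prioritization, the following Python sketch computes an interest indicator from the number of requests and a measure of engagement, and queues documents for pre-calculation in priority order once a threshold is reached. The weights, the threshold value and all names used here are assumptions for illustration; the method does not prescribe concrete values.

```python
import heapq

# Illustrative weights and threshold; the method does not fix concrete values.
REQUEST_WEIGHT = 1.0
ENGAGEMENT_WEIGHT = 2.0
INTEREST_THRESHOLD = 50.0

def interest_indicator(num_requests: int, engagement: float) -> float:
    """Interest indicator weighted by the number of requests and the measure of engagement."""
    return REQUEST_WEIGHT * num_requests + ENGAGEMENT_WEIGHT * engagement

# Priority queue of documents awaiting score pre-calculation (max-heap via negation).
queue: list[tuple[float, str]] = []

def enqueue_for_scoring(doc_id: str, num_requests: int, engagement: float) -> None:
    indicator = interest_indicator(num_requests, engagement)
    if indicator >= INTEREST_THRESHOLD:  # threshold of interest reached
        heapq.heappush(queue, (-indicator, doc_id))

enqueue_for_scoring("article-42", num_requests=120, engagement=30.5)  # queued
enqueue_for_scoring("article-7", num_requests=3, engagement=0.5)      # below threshold, skipped
while queue:
    _, doc_id = heapq.heappop(queue)
    print("pre-calculate score for", doc_id)  # highest interest first
```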
The reliability scores are time-stamped and associated with the related time-stamped versions of the documents. When a discrepancy is detected between the time-stamp of the reliability score and the time-stamp of the related document, the pre-processing of the document is updated. This detection of discrepancies can be performed by the customer's device.
The representation of the pre-calculated scored documents is computed by the customer device through an aggregation of the multimedia documents coming from the sources' servers and the related scores coming from the score processing server. The document acquisition can be performed by both the score processing server and the customer device.
The method of scoring the reliability of online information comprises an additional step of computing, for at least one source, a global reliability score of the source, based on the reliability scores of its multimedia documents and the number of its visionary documents. The method also comprises a step of filtering according to a white list, so as to exclude the checking of an existing reliability score for multimedia documents coming from sources not belonging to the white list, or according to a black list, so as to exclude the checking of an existing reliability score for multimedia documents coming from sources belonging to the black list.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flow chart that describes the general steps for scoring the reliability of online information.
FIG. 2 is a flow chart that describes in detail the various steps for creating, updating and distributing the reliability score.
FIG. 3 is a diagram of the hardware implementing the document retrieval function.
FIG. 4a is a flow chart that describes in detail the identification, consolidation and weighting of relevant words.
FIG. 4b is an example of document partitioning, showing the different parts of a typical news document on the World Wide Web.
FIGS. 5a and 5b are flow charts that describe document association using classification and clustering, respectively.
FIG. 6 is a diagram of the hardware for distributing the reliability score to different clients.
FIG. 7 illustrates a display of a reliability score to customers.
FIG. 8 illustrates an exemplary computer (electronic circuit) hardware diagram.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
A detailed example is given hereafter, but the realization of this invention is not limited to the illustrated example.
FIG. 1 provides an overview of the general computer-implemented process for creating and distributing a reliability score. At step 100, the document is retrieved by computer from a medium (from a web site, or collected via a proprietary API, for example). The document can be any multimedia information on the World Wide Web, e.g. a text, an image, an audio recording and/or a video recording. Then, at step 200, this retrieved document is computer-analyzed according to its type, e.g. text, audio, video, etc. Afterwards, at step 300, a reliability score is calculated by a computer according to the results of analyzing the document. In this step, a series of intrinsic criteria and extrinsic criteria stored in the memory of the computer are applied successively to calculate the reliability score that qualifies the information. Finally, at step 400, this reliability score is electronically distributed to different clients.
FIG. 2 describes this multimedia document scoring process in more detail. Steps 110, 120 and 130, each computer-implemented, constitute the computer-implemented step 100 in FIG. 1. According to the interests of customers, a document selection step 110 is performed. The document can be any kind of information, for example a news article.
Step 120 shows document retrieval, the process of automatically collecting information from the World Wide Web or other sources. This can be achieved by connecting to proprietary APIs, by using existing RSS news feeds or by crawling the World Wide Web.
A multimedia document (for example, a news article) can be updated by its source after the reliability analysis has been performed. As the pre-calculated scores are time-stamped, customers are notified that the multimedia document has been updated but its score has not yet been re-calculated. The processing server is informed of this update in order to trigger a new cycle of multimedia document retrieval, analysis and reliability score calculation.
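A minimal sketch of this time-stamp comparison, assuming a simple record holding both time stamps (the field names are hypothetical):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ScoredDocument:
    url: str
    doc_updated_at: datetime     # time-stamp of the document version
    score_computed_at: datetime  # time-stamp of the pre-calculated score

def needs_rescoring(doc: ScoredDocument) -> bool:
    """A discrepancy between time-stamps means the source updated the document
    after its score was calculated; a new retrieval/analysis/scoring cycle
    must then be triggered."""
    return doc.doc_updated_at > doc.score_computed_at

doc = ScoredDocument(
    url="https://example.com/news/1",
    doc_updated_at=datetime(2014, 5, 2, 9, 30),
    score_computed_at=datetime(2014, 5, 1, 18, 0),
)
if needs_rescoring(doc):
    print("Notify customer: score is stale; trigger re-calculation")
```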
Step 130 illustrates the process of cleaning, formatting and classifying the multimedia document according to its type (text, image, audio or video) before the next step of multimedia document analysis.
FIG. 3 presents a diagram of the hardware implementing the document retrieval function. 110a, 110b, 110c and 110d represent different document source servers, e.g. the server of the “Washington Post” or the server of “Fox News”. Different applications, such as RSS feeds, crawling and APIs, collect the information from these servers via the network, as indicated by step 120a. The collected information goes through the multimedia document retrieval and dispatcher server, as indicated by step 140, and is then placed either in the normal documents queue, as shown in step 150a, or in the priority documents queue, as shown in step 150b. This information is used to calculate the reliability score in the processing server, as indicated in step 160. Finally, this reliability score is saved in the databases, as indicated in step 170. For additional discussion of exemplary computer-implemented hardware configurations, see the discussion with reference to FIG. 8 below.
The document retrieval process is configured according to the frequency with which, and the number of times, a source can be solicited. Information that has already been processed must be updated frequently, over an undefined period of time. For example, the multimedia document retrieval tool is adapted to collect breaking news emails, in order to quickly retrieve urgent information from email alerts sent by newspapers. This allows a quick response to frequent changes in that information. Documents collected via breaking news are automatically dispatched into the priority queue displayed in FIG. 3. The information is then assigned a new pre-calculated reliability score with every update.
The computer-implemented step 200 of multimedia document content analysis contains steps 210, 220, 230 and 240, each being computer-implemented. This analysis is performed in different ways depending on the type of document. If the content is a text, morphology, syntax and semantic analyses are performed, as indicated in step 220. If the content is an image, the analysis looks for manipulations such as cloning and masking, as indicated in step 210. For audio documents, the content is transformed into text via speech-to-text technologies before applying the text analysis, as indicated in step 240. For video documents, the analysis is a combination of the image and audio analyses previously explained, as indicated by steps 230 and 231.
Morphology and syntax analyses are used to determine the category each word belongs to: verb, adjective, noun, preposition, etc. To do this it is often necessary to disambiguate between several possibilities. For example, the word “general” can be either a noun or an adjective; the context helps disambiguate between the two meanings. During the relevant word identification and consolidation step 221, named entity recognition is performed to locate and classify atomic elements in the text into predefined categories such as names of persons, organizations, locations, expressions of time, quantities, monetary values, percentages, etc., as indicated in FIG. 4a.
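By way of a hedged example, the sketch below uses spaCy, one publicly available NLP library (the description does not prescribe any particular tool), to perform the part-of-speech disambiguation and named entity recognition just described:

```python
import spacy  # requires the model: python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")
text = "General Smith said the general strike in Montpellier would end on Monday."
doc = nlp(text)

# Morphology/syntax: part-of-speech tags disambiguate e.g. 'general' used as
# part of a proper noun ("General Smith") versus as an adjective ("general strike").
for token in doc:
    print(token.text, token.pos_)

# Named entity recognition: persons, organizations, locations, dates, etc.
for ent in doc.ents:
    print(ent.text, ent.label_)
```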
Here, text analysis is taken as an example. FIG. 4a illustrates how to identify the weight of the words found in the document to be scored. Steps 221a, 221b, 221c and 221d select the most relevant named entities and events for a given news document. The selection method relies on the contrast between two types of textual units: on the one hand, named entities that denote referential entities well identified in the specific document (e.g. organization, location, person); on the other hand, terms that represent events.
Two levels of relevant named entities and events are extracted:
- The first level relies on the relevant named entities and events that appear in the title and in the first paragraph of the document.
- The second level relies on words co-occurring with the identified relevant named entities and events. Co-occurrence is the appearance of two words together in the same sentence.
Different named entities and events receive different relevance weightings according to their relevance to the content, as represented by step 221e. The weighting takes into account the number of occurrences in the text: the more often a word appears, the higher its weighting. For documents comprising a title and a header, the weighting is heavier for the following (a minimal sketch of this weighting appears after the list):
Words that are found in the title.
Words that are found in the header.
Words that are semantically close to the title (for example, the concept “prison” is close to “justice”).
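A minimal sketch of such a weighting scheme; the boost factors are assumptions, since the description only states that title words, header words, semantically close words and frequently occurring words weigh more:

```python
# Illustrative boost factors; hypothetical values for this sketch.
TITLE_BOOST = 3.0
HEADER_BOOST = 2.0
SEMANTIC_BOOST = 1.5  # words semantically close to the title, e.g. "prison"/"justice"

def relevance_weight(word: str, title: str, header: str, body: str,
                     close_to_title: set[str]) -> float:
    w = word.lower()
    weight = float(body.lower().split().count(w))  # occurrence count in the text
    if w in title.lower().split():
        weight *= TITLE_BOOST
    elif w in header.lower().split():
        weight *= HEADER_BOOST
    elif w in close_to_title:
        weight *= SEMANTIC_BOOST
    return weight

print(relevance_weight(
    "looting",
    title="Looting in Montpellier supermarket",
    header="A supermarket was looted overnight.",
    body="The looting began late at night. Police linked the looting to earlier unrest.",
    close_to_title=set(),
))  # 2 occurrences x title boost = 6.0
```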
Finally, step 221f shows the multimedia document's detailed definition, based on the relevance weighting calculated in step 221e.
FIG. 4b represents a typical online news article. Different parts of this article, represented in different colors, have different degrees of importance. For the document cleaning and formatting step 130, only the grey parts, e.g. title, date, author and text, are used to calculate the reliability score of the document. To carry out the named entity and event relevance weighting step 221e, the words found in the title and in the first paragraph are considered more important than those in other paragraphs.
In the reliability score calculation, step 300, the computer receives as input the original document and the metadata of the detailed document definition obtained in the process of FIG. 4a. This step calculates the reliability score of a document by applying a first category of intrinsic criteria and a second category of extrinsic criteria. The intrinsic criteria relate to the document itself: the content, the context in which the information is published, its date of publication/update, the use of the conditional tense, a lack of precision in citing sources, etc. The extrinsic criteria include user-related criteria, such as comments on related social networks, and cross-checking with other sources dealing with the same information so as to detect inconsistencies.
Step 310, the computer calculation of the intrinsic criteria, depends on criteria linked to the document itself, such as the detection of conditional tense use, inconsistencies between the title and the body of the text, spelling mistakes and so on, which are described in steps 311-314.
Step 311, fact checking, is an extrinsic criterion that can be calculated by the computer without document classification or clustering. All facts identified inside the document are verified against a knowledge database, e.g. created from corporate reporting, Wikipedia's infoboxes and governmental figures, amongst others. Here is a non-exhaustive list:
Geographical data: Town T belongs to this or that country, River R goes through this or that continent, there are Y inhabitants in this or that country and so on.
Dates: when a public figure was born, when a company was founded, beginning and end of a political phase, when a place/invention was discovered, when an album was released and so on.
Corporate figures: stock values, forecasts and so on.
Public figures: unemployment figures, taxes and so on.
Characters/directors in films, authors, musicians.
People belonging to a political party, government, company, organization and so on.
If a multimedia document mentions a fact that contradicts the knowledge base, the computer lowers its reliability score.
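A minimal fact-checking sketch along these lines, representing the knowledge database as subject/predicate triples; the penalty value and the data are illustrative assumptions:

```python
# Hypothetical knowledge base as (subject, predicate) -> expected value.
KNOWLEDGE_BASE = {
    ("Montpellier", "located_in"): "France",
    ("Wikipedia", "founded_in"): "2001",
}
PENALTY = 0.1  # assumed score reduction per contradicted fact

def check_facts(extracted_facts: list[tuple[str, str, str]], score: float) -> float:
    """Lower the reliability score for each fact contradicting the knowledge base."""
    for subject, predicate, value in extracted_facts:
        known = KNOWLEDGE_BASE.get((subject, predicate))
        if known is not None and known != value:
            score -= PENALTY  # document contradicts the knowledge base
    return max(score, 0.0)

score = check_facts([("Montpellier", "located_in", "Spain")], score=0.8)
print(round(score, 2))  # 0.7: a contradicted geographical fact lowered the score
```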
Step 312 illustrates the detection of the conditional tense, an intrinsic criterion related to the document itself. When an author is unsure of the reliability of a piece of information, he may use the conditional tense to protect himself. Once the morphological, syntactic and semantic analyses have been carried out, it is necessary to analyze the conjugation of the verbs linked to the most relevant words. Identifying the verb carrying the meaning is crucial. It is also necessary to look for textual clues in the document (such as “it seems that”) bearing the same conditional function. If the verb tenses of the principal information differ between the title, header and text, and especially if the title is in the present or present perfect while the header and text are in the conditional, the document's reliability score will be lowered. Though titles may not contain a verb, some nouns can stand in for one. For example, in the title “looting in Montpellier supermarket”, the computer replaces the word “looting” with the expression “was looted”. Similarly, in a document with the title “Mr. X, possible candidate for town hall”, the word “possible” implies that the information is conditional.
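The following sketch illustrates one way such conditional clues could be detected; the clue list is an illustrative assumption, and a real implementation would rely on the full morphological and syntactic analysis rather than keyword matching:

```python
import re

# Illustrative textual clues bearing a conditional function (assumed list).
CONDITIONAL_CLUES = [
    r"\bit seems that\b", r"\ballegedly\b", r"\breportedly\b",
    r"\bwould\b", r"\bcould\b", r"\bpossible\b",
]

def conditional_penalty(title: str, header: str, body: str) -> bool:
    """Flag a document whose title states a fact while the header or body
    hedges it with conditional clues (the score should then be lowered)."""
    def hedged(text: str) -> bool:
        return any(re.search(p, text.lower()) for p in CONDITIONAL_CLUES)
    return not hedged(title) and (hedged(header) or hedged(body))

print(conditional_penalty(
    title="Looting in Montpellier supermarket",
    header="The store would have been looted overnight, police say.",
    body="It seems that several windows were broken.",
))  # True: the title is assertive while the header and body are conditional
```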
Step 313 shows other criteria, such as inconsistencies between the title and the content of a text document.
If the document is not a text, other criteria are applied, such as the detection of digitally modified images, as illustrated in step 314. When a news article contains a photo, it is necessary to check that the photo has not undergone substantial alterations such as cloning, masking, or the addition or deletion of fictional elements. In the case of event-related news (for example, a terrorist act, strike or accident), it is also necessary to verify when the photo was taken. The computer lowers the reliability score of the article if the photo was taken long before the events or comments it illustrates. This may be relevant, for example, when photos are used to illustrate a strike.
Document thoroughness is another intrinsic criterion. A reliable document should contain the five “W”s relating to its principal piece of information: Who, What, Where, When and Why about an event. If several “W”s are missing, the document's reliability score will be lowered.
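A sketch of how the five-“W” check could be approximated from named-entity categories; the mapping and the example labels are assumptions (in particular, detecting “why” generally requires deeper analysis than entity recognition):

```python
# Hypothetical mapping from entity labels to the "W" they can answer.
W_FROM_ENTITY = {"PERSON": "who", "ORG": "who", "GPE": "where",
                 "LOC": "where", "DATE": "when", "EVENT": "what"}

def missing_ws(entity_labels: list[str]) -> set[str]:
    """Return the Ws that no recognized entity in the document answers."""
    answered = {W_FROM_ENTITY[l] for l in entity_labels if l in W_FROM_ENTITY}
    return {"who", "what", "where", "when", "why"} - answered

# A document with a person, a city and a date still lacks what/why.
print(missing_ws(["PERSON", "GPE", "DATE"]))  # {'what', 'why'} (order may vary)
```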
After step 310, the individual analysis of the multimedia document is finished. Step 320 represents multimedia document classification and clustering. A cluster contains all the multimedia documents, coming from one or several sources, that deal with the same information. For example, two multimedia documents created by different sources and describing the arrest of the same person will be integrated into the same cluster.
The aim of this task is to identify multimedia documents dealing with the same topic in order to group them together. Various elements are used to achieve this. For example, a multimedia document announcing the launch of a new product by a company must not be associated with another multimedia document describing the acquisition of that company by another one. In order to identify the principal topic of a multimedia document, the detailed document definition described in FIG. 4a is needed. In some cases, for two documents to be grouped together they must have close creation dates. For some topics, such as sports, a 12-hour window can be adopted, while for other “long trend” topics the grouping window can reach several days.
There is no limit to the number of multimedia documents that can be associated in the same cluster. When the identification and consolidation of the relevant named entities (persons, organizations, locations, etc.) and events of a multimedia document are finished, the computer system must classify the document among the existing clusters:
- If the multimedia document contains the same relevant named entities and events found in an existing cluster, it will be classified into that cluster. If there are inconsistencies between the new multimedia document and the already existing multimedia documents in the cluster, the processing of some extrinsic criteria will be triggered and thus the reliability score will be updated. If no inconsistencies are detected between the new multimedia document and the already existing multimedia documents in the cluster, the reliability score of the existing multimedia documents will not be affected.
- If no existing cluster is identified as being relevant, the multimedia document will constitute a new cluster.
To avoid leaving multimedia documents isolated because they were processed simultaneously and therefore could not be associated in real time, a verification and consolidation process over the existing clusters is necessary. A background consolidation process (e.g. to find and unify multimedia documents that are isolated in different clusters but deal with the same topic) is triggered to improve the precision of the results. When this process succeeds in unifying several clusters, some of the extrinsic criteria are recalculated and the reliability score is updated accordingly.
Each cluster contains a representative multimedia document that most adequately defines the association. This avoids having to compare new entries with all the multimedia documents already present in a cluster. Of course, when a cluster contains a single multimedia document, that document is the representative. Multimedia documents in different languages are also clustered together. Inside a cluster, different sub-clusters can be created depending on language, country of origin, sources and so on.
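The following sketch illustrates cluster assignment using the representative document, an entity-overlap test and a creation-date window; the overlap threshold and the data structures are assumptions:

```python
from datetime import datetime, timedelta

# Assumed Jaccard-overlap threshold; the 12-hour default window matches the
# fast-topic example above, and can be widened for "long trend" topics.
OVERLAP_THRESHOLD = 0.5

def entity_overlap(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def assign_cluster(doc_entities: set[str], doc_date: datetime,
                   clusters: list[dict],
                   window: timedelta = timedelta(hours=12)) -> dict:
    for cluster in clusters:
        rep = cluster["representative"]  # avoids comparing with every member
        if (entity_overlap(doc_entities, rep["entities"]) >= OVERLAP_THRESHOLD
                and abs(doc_date - rep["date"]) <= window):
            cluster["members"].append({"entities": doc_entities, "date": doc_date})
            return cluster
    # No relevant cluster: the document constitutes a new cluster.
    new = {"representative": {"entities": doc_entities, "date": doc_date},
           "members": []}
    clusters.append(new)
    return new
```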
The steps mentioned above are presented in FIGS. 5a and 5b, with flow charts for the multimedia document classification process and the clustering consolidation process, respectively. Step 341 describes the end of the document's intrinsic analysis. The document's intrinsic score, represented by the reliability score, is then stored in step 342. Subsequently, an assessment is made in step 343 to classify the document into the relevant cluster. When such a cluster already exists, the flow goes to “Yes” and the new multimedia document is added to this cluster, as indicated in step 344. If there is no relevant cluster, the flow goes to “No” and a new cluster is created, as indicated in step 345.
After step 344, the newly added multimedia document is compared with the other multimedia documents already in the cluster. Extrinsic document criteria, such as omission and inconsistency, are calculated, as indicated by step 346. Finally, a decision is made in step 347: whether to change the reliability score of the multimedia documents already in the cluster. If “Yes”, the updated score is stored for all the multimedia documents in this cluster, as indicated in step 348; if “No”, only the score of the new multimedia document is saved, as shown in step 349.
FIG. 5b represents an asynchronous cluster consolidation process that detects and merges equivalent clusters. It starts with the comparison of step 350: if there are equivalent clusters, the answer is “Yes” and the flow goes to step 351 to perform cluster consolidation and unification. If “No”, the flow goes to step 352 and the process ends.
After step 351, the reliability scores of all the multimedia documents in the newly unified cluster are re-calculated, based on the inconsistencies between the documents. Finally, the decision whether to change the scores of the documents in the unified cluster is made in step 354: if “Yes”, the concerned multimedia document ratings are updated by storing the updated reliability scores in step 355; if “No”, only the score of the new multimedia document is stored in step 356.
For steps 346 and 353, the re-calculation of the reliability scores of the multimedia documents in a cluster, there are several possible triggers, such as the omission of some information, inconsistencies, etc. Step 330, the extrinsic criteria calculation, depends on criteria linked to other documents, such as the reliability score of the sources the news comes from, inconsistency relative to other documents in the same cluster, comments on social media and networks and so on, which are represented by steps 331-334.
As shown in step 331, the source score: a source can be the publisher of a newspaper, and here also refers to a recognized author. In general, the score of a source depends on the scores obtained by the multimedia documents created by that source. For example, the score of a news website is constituted by the weighted average of the scores of the newspaper's various sections, and the score of a section is constituted by the weighted average of the scores of the section's various multimedia documents over a period of time.
Each source has a reliability score that evolves according to the weighted scores obtained by its various multimedia documents. The documents that were right from the beginning are called visionary documents and are given a better reliability weight. The reliability scores of sources and authors thus evolve over time and are not based solely on the values manually assigned to them during the launch period.
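A sketch of this two-level weighted average, with an assumed bonus weight for visionary documents:

```python
# Assumed extra weight for "visionary" documents; the description only states
# that documents that were right from the beginning weigh more.
VISIONARY_BONUS = 1.5

def weighted_average(pairs: list[tuple[float, float]]) -> float:
    """Weighted average over (score, weight) pairs."""
    total = sum(w for _, w in pairs)
    return sum(s * w for s, w in pairs) / total if total else 0.0

def section_score(docs: list[dict]) -> float:
    return weighted_average([
        (d["score"], VISIONARY_BONUS if d.get("visionary") else 1.0) for d in docs
    ])

def source_score(sections: list[dict]) -> float:
    # The source score is the weighted average of its sections' scores.
    return weighted_average([(section_score(s["docs"]), s["weight"]) for s in sections])

print(round(source_score([
    {"weight": 2.0, "docs": [{"score": 0.9, "visionary": True}, {"score": 0.6}]},
    {"weight": 1.0, "docs": [{"score": 0.7}]},
]), 3))  # 0.753
```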
As represented in step 332, regarding inconsistencies between multimedia documents, the aim of this criterion is to detect factual information that varies from one document to another. If an inconsistency is detected, a warning is triggered on the reliability score.
Different types of inconsistencies can be detected. The first verifies the information's meaning in order to detect, for example, whether the tone of one document is positive about an event while another is negative. Another type of inconsistency concerns the facts relating to the words that have been identified in the text (non-exhaustive list):
- Different figures (company forecasts, unemployment rate, and number of people on strike, etc.). The difference must be sufficiently large to be relevant.
- Different locations.
- Different names of people.
- Different dates.
- Different brand names.
- Different genders (male/female).
The application of this inconsistency criterion requires that at least two documents be associated in the same cluster.
The following is an example of inconsistency detection through the comparison of two different news articles. Say that in the first article we have the sentence “23 Egyptian policemen killed in Sinai Peninsula by suspected militants” and in the second article we have “Militants kill at least 24 police officers in Egypt”. The tool associates the two articles, as it understands that “Sinai Peninsula” is compatible with “Egypt”, and it detects an inconsistency between the “23 Egyptian policemen” and “at least 24 police officers” properties extracted from the two sentences.
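A minimal sketch of the numeric part of this comparison; the tolerance for plain figures is an assumption, since the description only requires the difference to be “sufficiently large to be relevant”:

```python
# Assumed relative-difference tolerance for plain figures.
TOLERANCE = 0.10  # 10 %

def figures_inconsistent(value_a: float, value_b: float,
                         b_is_lower_bound: bool = False) -> bool:
    if b_is_lower_bound:
        # "at least 24" contradicts any smaller figure, e.g. 23.
        return value_a < value_b
    # For plain figures the difference must be large enough to be relevant.
    return abs(value_a - value_b) / max(value_a, value_b) > TOLERANCE

print(figures_inconsistent(23, 24, b_is_lower_bound=True))  # True, as in the example
print(figures_inconsistent(100, 101))                        # False: within tolerance
```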
There are many other criteria, represented by step 333; one of them is rumor detection, another important criterion. Although rumors sometimes turn out to be true, the fact that a piece of information is a rumor necessarily raises doubts. The idea is not just to look for the word “rumor” or “hoax” in the text, but to detect whether the principal piece of information is centered on a rumor, and whether the author is sure of its reliability or not.
The calculation of user-related criteria is part of the extrinsic criteria. Users here are understood in the general sense, meaning social networks, comments written on information websites or the opinion of a community. The idea behind these criteria is to gauge the temperature of a community regarding the reliability of a multimedia document, as represented by step 334. In conclusion, and returning to FIG. 2, step 340 represents the final multimedia document scoring, the process that takes into account and weights all the editorial criteria (both intrinsic and extrinsic) analyzed during the process.
Returning to FIG. 1, step 400 illustrates the distribution of the reliability score. The main goal is to display a pre-calculated reliability score associated with a multimedia document. A tool to distribute and display the reliability score must retrieve this score and the associated metadata from a distant database and display them on a device. This tool can be a web browser extension or any other multimedia application compatible with devices such as a PC, mobile phone, tablet, TV, radio, etc. These tools can also integrate the update, retrieval and aggregation of multimedia documents.
FIG. 7 illustrates an example of such a web browser extension or “add-on” 410 rating the content of a webpage 420. The news content is chosen arbitrarily as an example and is of no importance for understanding this invention.
The add-on 410 is composed of several parts: the header 411, the overall reliability score 412 and the set of intrinsic and extrinsic criteria 413. At 415, the user is able to quickly give his opinion regarding the reliability of the document. Additionally, 416 indicates related multimedia documents as well as related social network messages, such as tweets.
Every time a customer browses a web page that contains a multimedia document with a pre-calculated score in the database, an icon is automatically positioned in the document to notify the customer that the pre-calculated score is available. When the customer clicks on this icon, a pop-up window appears and provides a first level of information on the reliability of the multimedia document. This score is always time-stamped. A “more information” link redirects the customer to a website where extra information, links to related documents and the community's comments are available.
Once the add-on is installed on the customer's browser, it is easily accessible thanks to an icon dynamically positioned in the multimedia document, as well as a button in one of the browser's menus. When a pre-calculated score is not available for a multimedia document, customers can manually request its processing via the add-on.
Every time a customer browses a web page, a request is made to the reliability score database to check whether a pre-calculated score is available. In order to avoid querying the database for multimedia documents that are of no interest for analysis (web mail pages, e-commerce websites, etc.), the add-on filters up front, using both a whitelist and a blacklist, the document sources that have already been rated by the system. In addition, customers can optionally configure the sources they want to be processed.
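A sketch of this up-front filtering; the list contents and function names are hypothetical placeholders for the white/blacklist databases:

```python
from urllib.parse import urlparse

# Placeholder lists; in the system these live in the white/blacklist databases.
WHITELIST = {"washingtonpost.com", "foxnews.com"}     # sources rated by the system
BLACKLIST = {"mail.example.com", "shop.example.com"}  # never worth querying

def should_query_score_database(url: str, user_sources: set[str] | None = None) -> bool:
    host = urlparse(url).netloc.removeprefix("www.")
    if host in BLACKLIST:
        return False
    # Customers can optionally configure extra sources to be processed.
    allowed = WHITELIST | (user_sources or set())
    return host in allowed

print(should_query_score_database("https://www.washingtonpost.com/news/x"))  # True
print(should_query_score_database("https://shop.example.com/cart"))          # False
```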
The score can also be distributed in the form of an interactive widget, included in websites, that displays the reliability scores of multimedia documents and sources. The display consists of a listing of the reliability scores of the different multimedia documents, offering the possibility to see progressions and regressions in the scores or rankings over time. The score can further be distributed as a daily, weekly or monthly newsletter showing the reliability scores and rankings of multimedia documents and sources, or through a real-time alerting service informing of the reliability scores and rankings of multimedia documents and sources.
The diagram of the hardware for the distribution of the reliability score to different customers is represented in FIG. 6. On the one hand, the documents are retrieved from document source servers 1, 2 . . . N; on the other hand, the reliability scores are read from the processing server. In the different client applications, e.g. PC, smartphone, tablet or TV, the documents are displayed with their corresponding reliability scores. Additionally, the reliability score processing server contains several sub-databases: user databases, white/blacklist databases, reliability score databases and knowledge databases.
The methods and apparatus or system to provide a customer with multimedia documents tagged with a reliability score are computer-implemented, as discussed above. While a variety of different computer hardware (electronic circuit) embodiments are envisioned, FIG. 8 illustrates some of them.
As shown in FIG. 8, the processor 500 of a multimedia document retrieval computer or server is coupled to communicate with a network, such as the Internet 502. Attached to processor 500 is a memory circuit 504 (e.g., RAM memory) in which web crawler engine code 506 has been stored. Such code may be written in a variety of different computer languages, such as Python, C++, Java and the like. Alternatively, a publicly available web crawler system such as PolyBot, UbiCrawler, C-proc, Dominos or the like may be used.
Processor 500 is thus programmed to crawl the network (Internet 502) to retrieve multimedia documents 507, which processor 500 stores in an attached storage device 508. Storage device 508 may be configured as a database, as discussed above.
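A minimal retrieve-and-store sketch using only the Python standard library; a real deployment would use a crawler framework such as those named above, and the URL and database schema here are placeholders:

```python
import sqlite3
import urllib.request

# Placeholder document store; storage device 508 may be configured as a database.
db = sqlite3.connect("documents.db")
db.execute("CREATE TABLE IF NOT EXISTS documents (url TEXT PRIMARY KEY, body BLOB)")

def retrieve_and_store(url: str) -> None:
    """Fetch a multimedia document and store it for later scoring."""
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
    db.execute("INSERT OR REPLACE INTO documents VALUES (?, ?)", (url, body))
    db.commit()

retrieve_and_store("https://example.com/news/1")
```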
Once a relevant corpus of multimedia documents 507 has been collected and stored, each multimedia document is scored as discussed above. To perform such scoring, a processor 500a (which could be the same physical processor as processor 500, or a different processor) executes score calculation code 514 stored in the memory 504a attached to processor 500a. If processor 500a and processor 500 are the same device, memory 504a may be an allocated portion of memory 504.
The processor 500a is configured to access the database of multimedia documents 507. Thus processor 500a may either access the same storage device 508 attached to processor 500, or it may have its own attached storage device 508a. In FIG. 8, one graphical representation has been provided in association with both reference numerals 508 and 508a to illustrate that the storage device functionality may be implemented using the same physical device or using separate physical devices.
In executing the score calculation code, based on the score calculation discussion above, the processor 500a uses intrinsic criteria and extrinsic criteria. These criteria are both stored in memory 504a, at 510 and 512, respectively. As each multimedia document is scored, its calculated reliability score 516 is associated with that multimedia document and stored as part of the database record for that document within a storage device 508b. Similarly to the explanation above, if desired, the functionality of storage device 508b can be implemented using the same physical storage device as the one connected to processor 500a (and/or processor 500).
With each document in the corpus of multimedia documents 507 now having an associated reliability score 516, the documents are ready to be accessed by a user or customer. This may be effected by providing access via a web server. To implement this, processor 500b is coupled to the network (e.g. Internet 502). Processor 500b may be physically separate from processors 500a and 500, or it may be the same physical device as processors 500 and/or 500a. Attached to processor 500b is memory 504b, in which executable web server code 518 is stored. Suitable web server code may be implemented using publicly available Apache HTTP web server code, for example.
Processor 500b is attached to storage device 508b, which may be the same physical storage device as devices 508a and/or 508, or a separate device storing a copy of the data transferred from device 508a. The user or customer accesses the web site established on the network by processor 500b and, through this connection, enters his or her requests for data, specifying any special criteria as discussed above. The processor 500b delivers selected multimedia content from the corpus of multimedia documents 507 that meets the user or customer's requirements, as more fully explained above. If desired, the executable instructions for some or all of the functions described above (e.g., the executable code stored at 506, 514 and 518), as well as the data structures and schema of the database configured within storage devices 508, 508a, 508b, and the data structure definitions in which the intrinsic criteria 510 and extrinsic criteria 512 are stored, may be stored in non-transitory computer readable media.
The foregoing description of the embodiments has been provided for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure. Individual elements or features of a particular embodiment are generally not limited to that particular embodiment, but, where applicable, are interchangeable and can be used in a selected embodiment, even if not specifically shown or described. The same may also be varied in many ways. Such variations are not to be regarded as a departure from the disclosure, and all such modifications are intended to be included within the scope of the disclosure.