CROSS REFERENCE TO RELATED APPLICATIONS
This application is:
a continuation-in-part of U.S. application Ser. No. 14/286,561 filed on May 23, 2014, which is a continuation of U.S. application Ser. No. 12/105,143, filed Apr. 17, 2008 and issued as U.S. Pat. No. 8,751,918 on Jun. 10, 2014, which claims the benefit of U.S. provisional application No. 60/912,243, filed Apr. 17, 2007;
a continuation-in-part of U.S. application Ser. No. 14/834,011 filed on Aug. 24, 2015, which is a continuation of U.S. application Ser. No. 13/290,439, filed on Nov. 7, 2011 and issued as U.S. Pat. No. 9,117,167 on Aug. 25, 2015, which claims the benefit of U.S. provisional application No. 61/410,395 filed Nov. 5, 2010;
a continuation-in-part of U.S. application Ser. No. 13/919,751 filed on Jun. 17, 2013, which is a continuation-in-part of U.S. application Ser. No. 12/798,487, filed on Apr. 5, 2010, which claims the benefit of U.S. provisional application No. 61/166,579 and is a continuation-in-part of U.S. application Ser. No. 12/105,143, filed Apr. 17, 2008 and issued as U.S. Pat. No. 8,751,918 on Jun. 10, 2014, which claims the benefit of U.S. provisional application No. 60/912,243;
a continuation-in-part of application Ser. No. 14/093,229 filed on Nov. 29, 2013, which claims the benefit of provisional application No. 61/732,326; and
a continuation-in-part of application Ser. No. 14/740,528, filed on Jun. 16, 2015, which is a continuation-in-part of application Ser. No. 14/093,229, filed on Nov. 29, 2013, which claims the benefit of provisional application No. 61/732,326 filed on Dec. 1, 2012, a continuation-in-part of U.S. patent application Ser. No. 13/919,751, filed Jun. 17, 2013, which is a continuation-in-part of U.S. patent application Ser. No. 12/798,487, filed Apr. 5, 2010, which is a continuation-in-part of U.S. patent application Ser. No. 12/105,143, filed on Apr. 17, 2008 and issued as U.S. Pat. No. 8,751,918 on Jun. 10, 2014 and is a continuation of provisional application No. 61/166,579, filed Apr. 3, 2009, U.S. patent application Ser. No. 12/105,143 being a continuation of provisional application No. 60/912,243, filed Apr. 17, 2007, a continuation-in-part of U.S. patent application Ser. No. 14/286,561, filed May 23, 2014, which is a continuation of U.S. patent application Ser. No. 12/105,143, filed on Apr. 17, 2008, issued as U.S. Pat. No. 8,751,918 on Jun. 10, 2014, which is a continuation of provisional application No. 60/912,243, filed Apr. 17, 2007, and a continuation-in-part of U.S. patent application Ser. No. 13/290,439, filed on Nov. 7, 2011, issued as U.S. Pat. No. 9,117,167 on Aug. 25, 2015, which claims the benefit of Provisional Application No. 61/410,395, filed Nov. 5, 2010.
The entireties of all related applications listed above are incorporated by reference herein.
TECHNICAL FIELD
Embodiments described herein relate to modeling of complex systems, for example, those with two or more levels of structure. Such embodiments may include recognizing features on data capture, integrating those features in a distributed fashion, and displaying them in such a way that hidden system dynamics are revealed and can be manipulated.
BACKGROUND
Reasoning systems work with facts, building structures from them by logical and probabilistic methods to produce conclusions and insights. Typically, the facts are acquired by means separated from the analytical tools that will be used; in most cases, the ‘facts’ are extracted from data. The source data is simply collected from the world without close coupling with the later reasoning system.
Independently, expert systems as a class of reasoning systems depend on engineering a balance between limiting the ontological domain and limiting the logical scope. It is simply not possible to reason comprehensively over the ‘open world.’ An open world by definition includes entities and phenomena you know little or nothing about.
Therefore, a large class of probabilistic and neurally inspired systems has been devised to create likely connections. But because these are not based in semantics native to the problem, the results are correlative and cannot reliably indicate causal relationships.
A related set of technical limits prohibits distributed reasoning at the semantic level over vast networks of computerized systems, with vast amounts of data, media, facts and conclusions.
Yet another related problem is that the current art is incapable of understanding overarching systems in the world of interest using models that have distinct features and dynamics that are not simply composed from constituents. This applies in any domain but is acutely felt in the biological research domain where biological systems are poorly modeled.
An unrelated problem is the matter of defining model abstractions that are sufficient to address the concerns above while still being presented to users in a way that provides deep, intuitive insight into all stages and levels of the process, allowing the user to intervene in, control, and change all elements of the system.
Another problem is that we currently have only immature support for streaming, dynamic information sources, whether data or semantically registered facts. In particular, we have no way to manage streams that deliver elements that retroactively change previously interpreted situations, sometimes radically changing selected conclusions.
A final problem is that many phenomena are composed of agents that organize as systems that themselves have agency. This system agency cannot be determined by examining the components. Such systems are supported by the logical framework of situation theory but not well implemented in computing systems.
Therefore, a need exists for a system and method with a consistent model formalism that spans all these concerns. The need further exists for a computing system and method which allows extraction of features from sources including streaming sources, where the sources can number in the millions or more, using millions or more of collaborative computing resources. Such a system will support a parallel, collaborating but not composed set of features that can be used to model systems of the world of interest and the distributed system's state. Such a system and model will be employed to reason over the ‘open world,’ forming inferences from unknown elements and dynamics.
As well, such a computing system and its model will present themselves to a user at all stages and levels through the same features in an intuitive way. This will include a display of unknowns and unknown effects, computational effect and causal relationships at both system and primitive levels, and rationale for why system dynamics emerge.
Some embodiments of the invention described herein are a novel synthesis of functional programming techniques and category theory as it applies to computer science and, independently, to modeling techniques. Some also use a novel application of situation theory, using recent innovations in cognitive narratology to structure situations as categories.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1a illustrates an embodiment of the system architecture;
FIG. 1b illustrates a computer network according to embodiments described herein;
FIG. 2 illustrates a related system architecture according to an embodiment described in U.S. Pat. No. 8,751,918;
FIG. 3 illustrates a related system architecture according to an embodiment described in U.S. Pat. No. 9,117,167;
FIG. 4 illustrates a related system architecture according to an embodiment described in application Ser. No. 13/919,751;
FIG. 5 illustrates a related system architecture according to an embodiment described in application Ser. No. 14/740,528;
FIG. 6 illustrates steps for specifying futures;
FIG. 7 illustrates an example of a user interface for specifying futures;
FIG. 8 illustrates examples of steps for modifying Ontology Graphs;
FIG. 9 illustrates an example of a user interface for modifying Ontology Graphs;
FIG. 10 illustrates examples of steps for relating ontologies;
FIG. 11 illustrates an example of a user interface for relating ontologies;
FIG. 12 illustrates examples of topology and functor operations;
FIG. 13 illustrates examples of overlapping Ontology Graphs;
FIG. 14 illustrates an example of a categoric cell;
FIG. 15 illustrates an example of modifying the Concept Lattice;
FIG. 16 illustrates an example of composition on a Space-Time view;
FIG. 17 illustrates examples of layers of a Space-Time view;
FIG. 18 illustrates an example of infon nesting;
FIG. 19 illustrates an example of a Concept Lattice and its Half-Dual;
FIG. 20 illustrates an example of a Concept Lattice on a Space-Time view;
FIG. 21 illustrates an example of an annotated Space-Time view;
FIG. 22 illustrates an example of an Ontology Graph on a Narrative Model;
FIG. 23 illustrates an example of a text outline;
FIG. 24 illustrates an example of an infon outline;
FIG. 25 illustrates examples of Concept Lattice nodes;
FIG. 26 illustrates an example of a Concept Lattice;
FIG. 27 illustrates an example of a Concept Lattice with Ontology Graphs;
FIG. 28 illustrates an example of a Concept Lattice with Governing Influence;
FIG. 29 illustrates an example of a Symmetric Representation Cell;
FIG. 30 illustrates an example of a Symmetric Representation Substrate;
FIG. 31 illustrates an example of an adjusted Concept Lattice;
FIG. 32 illustrates an example of an adjusted Concept Lattice with Governing Influence;
FIG. 33 illustrates an example of a Typed Link with Governing Influence;
FIG. 34 illustrates an example of an immersive Concept Lattice user interface; and
FIG. 35 illustrates an example of a working instance of an adjusted Concept Lattice.
DETAILED DESCRIPTION
In the following detailed description, reference is made to the accompanying drawings which form a part hereof and illustrate specific embodiments that may be practiced. In the drawings, like reference numerals describe substantially similar components throughout the several views. These embodiments are described in sufficient detail to enable those skilled in the art to practice them, and it is to be understood that structural and logical changes may be made.
Embodiments described herein include a computer system. The computer system may be any computer system, for example, a small wearable, a smartphone, a tablet, a personal computer, a minicomputer, or a mainframe computer. The computer system will typically include a processor, a display, at least one input device and random access memory (RAM), but may include more or fewer of these components. The processor can be directly connected to the display, or remotely over communication channels such as radio, sound or light waves, cable, telephone lines or local area networks. Embodiments may include both commercial off-the-shelf (COTS) configurations, and special purpose systems designed to work with the embodiments disclosed herein, so long as the hardware used is capable of performing the tasks required by specific embodiments.
Item numbers in the figures are keyed to the figure number; thus, item 1102 is part of FIG. 11. In some cases, a figure is derived from a figure in a previous filing, in which case the item number sequence is preserved. For example, items 1301, 1302, 1303 from FIG. 13 correlate to items 401, 402, 403 respectively in FIG. 4 of Ser. No. 13/919,751.
At least some embodiments described herein are an alternative system and method for principles shared with non-provisional application Ser. No. 14/093,229.
FIG. 1a illustrates an example system architecture. Information elements are ingested on the left, where four representative copies of Information Sources 101 are drawn. A great many of these can exist, on the order of millions or more. Four example types are illustrated: data, as in databases of differing degrees of structure; knowledge bases, implying some semantic structure, pre-parsed natural language or executable code; multimedia documents, which could be discrete items or continuous flows of documents as in news feeds or email; and streams, such as video and sensor streams or any synthetic stream composed by a stream processing system. These types are not exhaustive, and are intended to indicate the ability to ingest any information one would encounter.
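The four source types above can be sketched behind one uniform ingestion interface. The following Python sketch is purely illustrative: every class, field and method name is an assumption introduced here, not part of the disclosed system.

```python
# Hypothetical uniform ingestion interface over heterogeneous sources
# (databases, knowledge bases, documents, streams). All names are
# illustrative assumptions, not the patent's API.
from dataclasses import dataclass, field
from typing import Any, Iterator

@dataclass
class InformationElement:
    source_id: str          # which Information Source produced this element
    kind: str               # "data" | "knowledge" | "document" | "stream"
    payload: Any            # raw content handed on to feature extraction
    metadata: dict = field(default_factory=dict)

class InformationSource:
    """Base adapter: any concrete source yields InformationElements."""
    def __init__(self, source_id: str, kind: str):
        self.source_id = source_id
        self.kind = kind

    def elements(self) -> Iterator[InformationElement]:
        raise NotImplementedError

class ListBackedSource(InformationSource):
    """Toy adapter over an in-memory list, standing in for a real feed."""
    def __init__(self, source_id: str, kind: str, items: list):
        super().__init__(source_id, kind)
        self._items = items

    def elements(self) -> Iterator[InformationElement]:
        for item in self._items:
            yield InformationElement(self.source_id, self.kind, item)

# Millions of sources could be registered; two suffice to show the shape.
sources = [
    ListBackedSource("db-1", "data", [{"gene": "CD8A", "level": 3.2}]),
    ListBackedSource("news-1", "document", ["CD8+ T-cell infiltrate report"]),
]
ingested = [e for s in sources for e in s.elements()]
```

The point of the adapter shape is that downstream feature extraction sees one element type regardless of how structured the originating source was.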
These are fed to the Distributed Functional Processors 102 that support a Functional Fabric of instructions. Application Ser. No. 13/919,751 terms this a Functional Reactive Fabric. The Distributed Functional Processors 102 can be in a central computer or supported by distributed, connected processors. This Functional Fabric may be implemented using functional reactive programming techniques as described below, or implemented using a message-passing concurrent programming paradigm.
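A message-passing realization of the Functional Fabric can be sketched with actors that communicate only through mailboxes. This is a minimal sketch under assumed names; the processor names and handler functions are illustrative, not the disclosed implementation.

```python
# Minimal message-passing sketch of a Functional Fabric: each processor
# is an actor with a mailbox; work flows as immutable messages between
# actors rather than through shared state.
import queue
import threading

class Processor(threading.Thread):
    def __init__(self, name, handler):
        super().__init__(daemon=True)
        self.name = name
        self.mailbox = queue.Queue()
        self.handler = handler   # function: message -> (target, message) or None

    def run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:                 # shutdown sentinel
                break
            result = self.handler(msg)
            if result is not None:          # forward to the next processor
                target, out = result
                target.mailbox.put(out)

results = []
sink = Processor("display", lambda m: results.append(m))
extractor = Processor("feature-extraction",
                      lambda m: (sink, {"feature": m.upper()}))
for p in (sink, extractor):
    p.start()

extractor.mailbox.put("cd8+ t-cell")        # ingest one raw element
extractor.mailbox.put(None); extractor.join()
sink.mailbox.put(None); sink.join()
# results now holds [{"feature": "CD8+ T-CELL"}]
```

The same topology scales out because each Processor owns its state and could run on a separate machine with the mailbox replaced by a network channel.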
One task cluster within the Functional Fabric of the Distributed Functional Processors 102 performs the task of extracting and assigning features to elements of the ingested information. This is done with continuous awareness of the current and anticipated situations in the Functional Fabric of the Distributed Functional Processors 102 as a whole, referencing the more stable and larger Distributed Situation and Situation Dynamics Store 112. The identification of these features from the source information and/or their assignment based on global knowledge provides the ability to compose systems. Situations in this context can be systems as defined in the biological context. Details on this are illustrated in further diagrams.
Another task cluster, the Distributed Regular Reasoning Processors 106, performs reasoning on the information, in this case reasoning as ordinarily understood, using rules, logics of various kinds, algebraic operations and probabilistic analyses such as Bayesian analysis. This list is not exhaustive. The point is that any analytical method currently used in a domain can be incorporated here, either re-implemented in the Functional Fabric of the Distributed Functional Processors 102, or connected as a legacy system through an instance of Information Sources 101. The information flow between instances of Information Sources 101 and the Distributed Functional Processors 102 is typically two-way.
A significant novelty of embodiments of the system is the ability to reason over and about situations, using a dedicated cluster of Distributed Situation Reasoning Processors 104 within the Functional Fabric. In the example implementation, this is a category-theoretic reasoning system using functors and morphisms among categories as functions in the Functional Fabric of the Distributed Functional Processors 102. The purpose is to provide a second, integrated reasoning system that reasons at an abstract level about situations. Situations in this context inform the feature abstraction of the Distributed Feature Extraction Processors 103, so that the features can work with the Distributed Regular Reasoning Processors 106. They also modify the Ontology Graphs and networks managed by the Distributed Ontology Computation Processors 105. System models of the type previously described can emerge from situations. This may be a long-term system concept such as the innate immune system in a biomedical model, or a temporal system, for example a complex alarm system that judges the severity of an infection and signals an extensive response.
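The functor-and-morphism idea above can be made concrete with a toy sketch: situations as small categories, and a structure-preserving map between them. All data and names here are invented for illustration, assuming only the standard category-theoretic notion that a functor must send each morphism's source and target consistently.

```python
# Hedged sketch: situations modeled as small categories, with a functor
# mapping one situation into another. Purely illustrative; the classes,
# object names and morphism names are assumptions, not the patent's API.
class Category:
    def __init__(self, objects, morphisms):
        self.objects = set(objects)
        self.morphisms = dict(morphisms)   # name -> (source, target)

class Functor:
    """Maps objects to objects and morphisms to morphisms, checking that
    each mapped morphism keeps the mapped source and target."""
    def __init__(self, src, dst, obj_map, mor_map):
        self.src, self.dst = src, dst
        self.obj_map, self.mor_map = obj_map, mor_map
        for name, (a, b) in src.morphisms.items():
            fa, fb = dst.morphisms[mor_map[name]]
            assert (obj_map[a], obj_map[b]) == (fa, fb), \
                f"functor does not preserve structure on {name}"

# A toy 'infection' situation mapped into an 'immune response' situation.
infection = Category(
    {"virus", "tissue"},
    {"infects": ("virus", "tissue")})
response = Category(
    {"antigen", "inflamed_tissue"},
    {"triggers": ("antigen", "inflamed_tissue")})

F = Functor(infection, response,
            obj_map={"virus": "antigen", "tissue": "inflamed_tissue"},
            mor_map={"infects": "triggers"})
```

Reasoning "about situations" then amounts to asking which functors exist between situation-categories, rather than reasoning over individual facts inside any one of them.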
The Distributed Situation Reasoning Processors 104 support novel extensions of situation theory and, with the Distributed Regular Reasoning Processors 106, constitute a formally integrated two-sorted reasoning system. The Distributed Situation Reasoning Processors 104 draw from, and teach, a persistent store of categoric patterns in the Distributed Situation and Situation Dynamics Store 112.
The mechanism by which the two reasoning systems are integrated is a dynamic ontology network held in active memory as part of the Functional Fabric. The Distributed Ontology Computation Processors 105 interact with the regular reasoning system in a fashion current in that art. Logical reasoning, for example reasoning that may model and reason about semantically represented causal dynamics at the omics level, is supported in the Distributed Regular Reasoning Processors 106. Such systems require an ontological framework that is consulted to assess meaning. Such an ontological framework is maintained in an active state by the Distributed Ontology Computation Processors 105. Users can directly view and modify this ontology by a novel user interface managed by the Ontology Graph Display Processors 110.
The semantic networks, axioms, rules and description logic of the Distributed Ontology Computation Processors 105 are themselves information that is modified by the ‘second sort,’ the Distributed Situation Reasoning Processors 104. The ontology hosted by the Distributed Ontology Computation Processors 105 is effectively modified by the Distributed Situation Reasoning Processors 104 as different situations come to govern. Many such ontological changes will modify previous results of the Distributed Regular Reasoning Processors 106. All of the reasoning of the Distributed Situation Reasoning Processors 104 and the Distributed Regular Reasoning Processors 106 is maintained live in the fabric, so that shifting governance can modify inferences. In a circular fashion, changing insights managed by the Distributed Regular Reasoning Processors 106, for example coupled behavior of elements at the omics level, will modify feature assignments managed by the Distributed Feature Extraction Processors 103 and thereby adjust composition of situations in the Distributed Situation Reasoning Processors 104.
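The "maintained live" behavior described above, where a shift in governing situation retroactively changes previously drawn inferences, can be sketched as stored rules that are re-evaluated whenever the ontology is rewritten. The class and term names below are illustrative assumptions, not the disclosed mechanism.

```python
# Sketch of 'live' inferences re-derived when a governing situation
# rewrites part of the ontology. All names are illustrative assumptions.
class LiveFabric:
    def __init__(self):
        self.ontology = {}       # term -> current interpretation
        self.inferences = {}     # name -> rule evaluated against the ontology
        self.results = {}        # name -> current (live) conclusion

    def define(self, name, rule):
        """Register an inference and evaluate it against the current state."""
        self.inferences[name] = rule
        self.results[name] = rule(self.ontology)

    def govern(self, term, interpretation):
        """A newly governing situation rewrites part of the ontology;
        every stored inference is re-evaluated against the new state."""
        self.ontology[term] = interpretation
        for name, rule in self.inferences.items():
            self.results[name] = rule(self.ontology)

fabric = LiveFabric()
fabric.govern("CD8+", "cytotoxic marker")
fabric.define("infiltrate_expected",
              lambda ont: ont.get("CD8+") == "cytotoxic marker")
# A new governing situation shifts the interpretation, retroactively
# flipping the previously drawn inference.
fabric.govern("CD8+", "exhausted phenotype")
```

Here the conclusion `infiltrate_expected` is never a frozen fact: it is a function of the ontology, recomputed whenever governance changes.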
For example, an experimenter may be working with a concept of an innate immune system and a synthesized bodily system that balances inflammation. Such a system will overlap many others: the circulatory and adaptive immune systems, for instance, and also overlap with the situations of genetic profile and virus infection. This experimenter may be guided to test for a specific CD8+ T-cell infiltrate in inflamed tissue and fail to find it. Perhaps this line of investigation was informed by causal dynamics among the systems and situations suggested and managed by the Distributed Situation Reasoning Processors 104. The experimenter would enter that finding into the model (via an instance of Information Sources 101) and many things may adjust. A new definition of an inflammation management system within the body may appear.
The collection of user interface services is shown as Distributed Display Processors 107. These need not be functionally or reactively coded, though they can be integrated into the Functional Fabric. They are shown here separately because the described embodiment is coded on general purpose hardware, using common user interface frameworks. The processes that interact with the Distributed Display Processors 107 structure the view into the Functional Fabric for delivery to one of many Displays 111. These Displays 111 can be screens or immersive interfaces.
The Ontology Graphs maintained by the Distributed Ontology Computation Processors 105 are accessible to a user via presentations created by the Ontology Graph Display Processors 110. For example, when our experimenter enters a new result or related piece of information, he or she will want to assure that what the system understands is what the experimenter means. The new information is therefore registered in the ontology using the services of the Ontology Graph Display Processors 110. The system will already know of CD8+ T-cells and their behavior in certain circumstances. Very precise new behavior in this specific situation will extend that knowledge, and in our example modify features associated with it, changing the model of the biological inflammation management system.
Another user interface service is supported by the Outliner/Lattice Display Processors 108. They support tailored outliner and related lattice views that serve as a collection of created and machine-assembled notebooks. High levels of the outline are situations, states and systems. Lower, child entries are information related to omic behavior. The notebook integrates with the Ontology Graphs as described in later figures.
Among the most novel of the interface views is that supported by the services of the Eidetic Flow Display Processors 109. This presents a view of the Functional Fabric as a flow, the form of which depicts intersystem dynamics. Any element of this presentation can be zoomed into for inspection in an outline or Ontology Graph view.
FIG. 1b is similar to FIG. 2b from application Ser. No. 14/093,229. It illustrates an example network architecture for the combined system described in FIG. 1a.
Computing Device 127 supports the Distributed Regular Reasoning Processors 106. This Computing Device 127 has Storage 128, in which, among other information, progressive results of the Distributed Ontology Computation Processors 105 are stored. The Computing Device 127 is connected by Communicative Connection 129 to a Network 130 that supplies and stores external information while also providing additional computational services. Network 130 supports the interaction with Information Sources 101.
This system has a Client Computing Device 124, connected to the Computing Device 127 by a Communicative Connection 126 that supports a user directing or monitoring the reasoning. The Client Computing Device 124 supports the Distributed Display Processors 107, consisting of the Outliner/Lattice Display Processors 108, Eidetic Flow Display Processors 109 and Ontology Graph Display Processors 110. It has Storage 125 to support its functions, and a Display 123 among other interface devices that supports Displays 111.
The Computing Device 127 is connected by Communicative Connection 131 to a computing system which supports the Distributed Situation Reasoning Processors 104. That system consists of a Computing Device 120 and attached Storage 121, and is attached by Communicative Connection 122 to a Network 132 that supplies and stores external information while also providing additional computational services. Storage 121 supports the Distributed Situation and Situation Dynamics Store 112.
This system has a Client Computing Device 117, connected to the Computing Device 120 by a Communicative Connection 119 that supports a user directing or monitoring the reasoning. The Client Computing Device 117 supports the Distributed Feature Extraction Processors 103 and management of Information Sources 101. It has Storage 118 to support its functions, and a Display 116 among other interface devices.
Collectively, the computing systems including Computing Devices 120 and 127 with Client Computing Device 117 and associated components support the processes of the Distributed Functional Processors 102. Client Computing Device 124 and associated components support the processes of the Distributed Display Processors 107.
The Communicative Connection 131 need not be a direct connection as shown in FIG. 1b, and can be any known connection between two computers including, but not limited to, a connection through any computer or computers, routers, firewalls, public networks (e.g., the Internet) and/or private networks.
The system illustrated is one example of a hardware system, chosen for clarity. The Computing Devices 120, 127 and Client Computing Devices 124, 117 may be any devices capable of performing the programmed operations. They need not have local Storage 118, 121, 125, 128 as described, but may have information stored by other means known in the art, including distributed stores or hard drives residing inside or outside the Computing Device.
Each Computing Device 120, 127 and Client Computing Device 124, 117 need not be discrete, and may instead be a collection of connected computing devices acting in unison. Similarly, Computing Devices 120, 127 and Client Computing Devices 124, 117 need not be separate computing devices. Functions can be combined in any manner, including the functionality of one or more of Computing Devices 120, 127 and Client Computing Devices 124, 117 being combined in one machine. For example, the Client Computing Device 117, serving as a modeling system client, and the Computing Device 120, supporting other functions of the ontology derivation system, can be combined into one computing system.
The system as illustrated shows Displays 116, 123 to support human users. Either client can be directed by non-human agents controlling the process. The interface systems can be displayed in other parts of the system, for example on Display 123, or on other displays for other users not shown.
Both the Client Computing Device 117 (with Display 116 and Storage 118) and the Client Computing Device 124 (with Display 123 and Storage 125) may be multiple systems supporting multiple collaborating users. Some elements of the system are not shown; for example, Computing Devices 120, 127 may have user input devices similar to Displays 116, 123, and Client Computing Devices 117, 124 may have direct or indirect connections to outside resources similar to Communicative Connections 122, 129. Other connections may exist; for example, Client Computing Devices 117 and 124 may have direct or indirect connections similar to Communicative Connection 131.
FIG. 2 illustrates the system from FIG. 6 of U.S. Pat. No. 9,117,167 rearranged to show the equivalence of the hardware system of U.S. Pat. No. 9,117,167 to FIG. 1a here. U.S. Pat. No. 9,117,167 teaches in part a system for collaborative feature recognition and synthesis that employs a novel implementation of situation theory.
Streams and Other Information 200 (U.S. Pat. No. 9,117,167 terms these ‘Multiple Streams’) enter a Computing System 211. Massive numbers of instances of Streams and Other Information 200 are possible. Distributed Feature Extraction Processors 202 (U.S. Pat. No. 9,117,167 terms these ‘Recognition Units’) employ Internal Feature References 203 (U.S. Pat. No. 9,117,167 terms these ‘Recognition Unit References’) to identify and model features. These are used by Distributed Situated Reasoning Processors supporting a Wreathing Engine 204 to produce computed results in the form of related facts deduced from the universe of features from the universe of Streams and Other Information 200. These are delivered to a user interface presentation service 208, a component of a unified presentation processor environment 209, for presentation on a display.
The Wreathing Engine 204 of U.S. Pat. No. 9,117,167 employs a Distributed Situation and Situation Dynamics Store 211 within which situations 206 are stored. These are created on the fly from features presented by the Distributed Feature Extraction Processors 202 using reference situation templates 207.
In addition, the Wreathing Engine 204 employs a Situation Control Unit 206 for identified entities. The Ontology Store 205 (U.S. Pat. No. 9,117,167 terms this a ‘Storage Unit’) is also updated by the Wreathing Engine 204. Situation Control Units 206 employ a Situation Reference (U.S. Pat. No. 9,117,167 terms these ‘References’).
Routers 208 within the Distributed Display Processors 212 process and direct information to Displays 213.
By comparing FIGS. 1a and 2, an ordinarily skilled practitioner will recognize the system disclosed in U.S. Pat. No. 9,117,167 as representative of that described here in FIG. 1a.
In the context of U.S. Pat. No. 9,117,167, the computing system of the Computing Device 127, Storage 128, Communicative Connection 129 and Network 130 of FIG. 1b supports the ontological processing required for the identification of Semantic Features served by the Ontology Store 205. In the context of U.S. Pat. No. 9,117,167, the computing system of the Client Computing Device 124, Storage 125 and Display 123 of FIG. 1b supports the Computing System 208, Distributed Display Processors 212 and Displays 213 of FIG. 2. In the context of U.S. Pat. No. 9,117,167, the computing system of the Computing Device 120, attached Storage 121 and Communicative Connection 122 to a Network 132 of FIG. 1b supports the interface with Streams and Other Information 200, the Wreathing Engine 204, Situation References 207, Computing System 208 and Computing System 211 of FIG. 2. In the context of U.S. Pat. No. 9,117,167, the computing system of the Client Computing Device 117, Storage 118 and Display 116 of FIG. 1b supports the management of Streams and Other Information 200, the Distributed Feature Extraction Processors 202, Internal Feature References 203 and the display associated with the Wreathing Engine 204 of FIG. 2.
FIG. 3 illustrates the system from FIG. 1a of application Ser. No. 13/919,751 rearranged to show the equivalence of the hardware system of application Ser. No. 13/919,751 to FIG. 1a here. Application Ser. No. 13/919,751 teaches in part a Functional Fabric that is distributed among many processors using Information Servers 303 and Topoiesis Servers 305 to support the functions of Feature Extraction, Situated Reasoning, Ontology Computation and Display.
Information is stored in distributed instances in Information Stores 301, available to any Information Server 303 in any processing node. Similarly, Situations and Situation Dynamics are stored in Metainformation Stores 302 in distributed computing nodes that may be separate from or shared with Information Servers 303 by Channels 304 (application Ser. No. 13/919,751 terms these ‘Links’).
Topoiesis Servers 305 perform fractional, functional processing via communication with the Information Servers via Channels 306 (application Ser. No. 13/919,751 terms these ‘Links’) and deliver coherent results to distributed Clients 307 via Channels 308 (application Ser. No. 13/919,751 terms these ‘Links’).
By comparing FIGS. 1a and 3, an ordinarily skilled practitioner will recognize the system disclosed in application Ser. No. 13/919,751 as representative of that described here in FIG. 1a.
In the context of application Ser. No. 13/919,751, the computing system of the Computing Device 127, Storage 128, Communicative Connection 129 and Network 130 of FIG. 1b supports the Topoiesis Servers 305 of FIG. 3. In the context of application Ser. No. 13/919,751, the computing system of the Client Computing Device 124, Storage 125 and Display 123 of FIG. 1b supports Clients 307 and Channels 308 of FIG. 3. In the context of application Ser. No. 13/919,751, the computing system of the Computing Device 120, attached Storage 121 and Communicative Connection 122 to a Network 132 of FIG. 1b supports Information Stores 301, Metainformation Stores 302, Information Servers 303 and Channels 304 of FIG. 3. In the context of application Ser. No. 13/919,751, the computing system of the Client Computing Device 117, Storage 118 and Display 116 of FIG. 1b supports the Distributed Feature Extraction Processors 103, consisting of Information and Topoiesis Servers, Channels 306 and Channels 308 of FIG. 3.
FIG. 4 illustrates the system from FIG. 1 of application Ser. No. 14/740,528 rearranged to show the equivalence of the hardware system of application Ser. No. 14/740,528 toFIG. 1ahere. Application Ser. No. 14/740,528 teaches in part a means of creating, displaying, navigating and manipulating entity, spatial and temporal features within a situated context on a model of developing processes.
FIG. 4 shows Information Feeds 415 (application Ser. No. 14/740,528 terms these 'Videos') from external sources such as a Video Library 403. There can be many such feeds, possibly a great number. They will have previously been structured situationally. This structuring may be done by any number of means; the embodiment of application Ser. No. 14/740,528 shows commercial films assembled by creative teams. This process could be wholly or partially supported by systems such as those described in U.S. Pat. No. 8,751,918, U.S. Pat. No. 9,117,167, application Ser. No. 13/919,751 or application Ser. No. 14/093,229, separately or in combination. This information is delivered via Information Feeds 415 to a Feature Extraction Processor 405 (application Ser. No. 14/740,528 terms these 'Video Processors').
Using Information Feeds 417 (application Ser. No. 14/740,528 terms these 'Network Connections') from the Situation Reasoner 408, the Feature Extraction Processor 405 enriches the information in the Information Feed 415 with information about situated governance carried by the Information Feed 417. The enriched information from the Feature Extraction Processor 405 is delivered by Information Feeds 416 to the Knowledge Store 406 (application Ser. No. 14/740,528 terms this a 'Data Store'), whence combined information can be delivered via Information Feeds 418 to the Computing Device 410 (application Ser. No. 14/740,528 terms this a 'Server') that composes the display for delivery by Information Feeds 430 (application Ser. No. 14/740,528 terms this a 'Connection') to a Client Workstation 411 for display on a Display Device 427. Both the Computing Device 410 and Client Workstation 411 can be distributed systems.
The Situation Reasoner 408 performs the function of recognizing relevant situations, their relative governance and how components (here shown as 'annotations') are related. It receives information from three sources: an Ontology Store and Reasoning System 404 (application Ser. No. 14/740,528 terms this an 'Ontology Library') via Information Feeds 419; a Situation Store and Associated Reasoning System 407 (application Ser. No. 14/740,528 terms this a 'Situation Knowledge Base') via Information Feed 420; and the Computing Device 410 via Information Feed 422 (application Ser. No. 14/740,528 terms this a 'Network Connection'), which provides real-time updates.
It provides processed results to two destinations. One is the Annotation Library 409, here shown as an external store, via Information Feed 421. It need not be external, but is described so in the embodiment of application Ser. No. 14/740,528 for simplicity. The other result is delivered via Information Feed 417 to the Feature Extraction Processor 405 as previously described.
Similarly, information from the Annotation Library 409 enters the system characterized primarily as unsituated information. This information is delivered via Information Feed 423 (application Ser. No. 14/740,528 terms this a 'Network Connection') to a Computing Device 410 to perform distributed reasoning processing. It also references situated information from the Situation Reasoner 408 as previously described. Thus, a loop of continuously situated information is established via Information Feeds 421, 422 and 423.
By comparing FIGS. 1 and 4, an ordinarily skilled practitioner will recognize the system disclosed in application Ser. No. 14/740,528 as representative of that described here in FIG. 1.
In the context of application Ser. No. 14/740,528, the computing system of the Computing Device 127, Storage 128, Communicative Connection 129 and Network 130 of FIG. 1b supports the Annotation Library 409 and Computing Device 410 of FIG. 4. In the context of application Ser. No. 14/740,528, the computing system of the Client Computing Device 124, Storage 125 and Display 123 of FIG. 1b supports the Client Workstation 411, Display Device 427 and Information Feeds 421, 423 and 430 of FIG. 4. In the context of application Ser. No. 14/740,528, the computing system of the Computing Device 120, with attached Storage 121 and attached by Communicative Connection 122 to a Network 132 of FIG. 1b, supports the Video Library 403, Ontology Store and Reasoning System 404, Knowledge Store 407, Situation Reasoner 408 and Information Feeds 415, 417, 419, 420 and 422 of FIG. 4. In the context of application Ser. No. 14/740,528, the computing system of the Client Computing Device 117, Storage 118 and Display 116 of FIG. 1b supports the Feature Extraction Processor 405, Knowledge Store 406 and Information Feeds 415, 416, 417 and 418 of FIG. 4.
FIG. 5 illustrates the system shown in FIG. 2a of application Ser. No. 14/093,229, the disclosure of which teaches a system of situation definition, governance and ontology manipulation, rearranged to show the equivalence of the system of FIG. 2a of application Ser. No. 14/093,229 to FIG. 1a here.
In this version of FIG. 1, information is ingested by Information Channel 510 (application Ser. No. 14/093,229 terms this 'Sequentially Appearing Facts') as designated into the Ontology Derivation System 501 or the Conventional Reasoning System 505. The Ontology Derivation System 501 computes which situations are relevant, what their composition is (consisting of facts and other situations), what the relative governance is and how that modifies the ontologies that affect the system's inferences.
The Ontology Derivation System 501 uses known templates of situations, known instances of situations, templates of governance and known governing dynamics from the Existing Ontology 512. Two core services assist. A Modeling System 514 handles the rules required for practical understanding of situated interpretation; it constrains the scope to what is needed. The Conventional Reasoning System 505 performs logical, probabilistic or neural reasoning using Existing Facts 504 and may be a collection of hosted legacy systems. In this context, it constrains the scope of what inferences and ontology are considered.
The Ontology Derivation System 501 computes governance in discrete states, saving each state and the difference of each state as snapshots of Ontology Structures 508. These ontology structures determine the meaning of facts and inferences. As they change, they produce direct influence that is similarly saved as dependent states of Facts 507, each state derived in part from the previous state.
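For illustration only, the snapshot-plus-difference mechanism described above might be sketched as follows. This is a minimal sketch, not the disclosed implementation; the class and field names (`OntologySnapshot`, `OntologyHistory`, `commit`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class OntologySnapshot:
    state: dict   # term -> interpretation at this discrete state
    diff: dict    # changes relative to the previous saved state

class OntologyHistory:
    """Keeps each governance state plus its difference, as snapshots."""
    def __init__(self):
        self.snapshots = []

    def commit(self, new_state):
        # Derive the diff from the previously committed state, then save both.
        prev = self.snapshots[-1].state if self.snapshots else {}
        diff = {k: v for k, v in new_state.items() if prev.get(k) != v}
        snap = OntologySnapshot(state=dict(new_state), diff=diff)
        self.snapshots.append(snap)
        return snap

history = OntologyHistory()
history.commit({"fever": "symptom"})
snap = history.commit({"fever": "signal", "sleep": "phenotype"})
```

Each committed state carries its difference from the prior state, so a later state of Facts can be derived in part from the previous state, as the text describes.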
A novelty in the system of application Ser. No. 14/093,229 is the result of situated reasoning, supplementing what is supportable under the current art. In the described embodiment, the user can see and manipulate what is going on, and thus requires an interface service to support this in the Reasoning Client and Interface 515.
By comparing FIGS. 1 and 5, an ordinarily skilled practitioner will recognize the system disclosed in application Ser. No. 14/093,229 as representative of that described here in FIG. 1.
In the context of application Ser. No. 14/093,229, the computing system of the Computing Device 127, Storage 128, Communicative Connection 129 and Network 130 of FIG. 1b supports import of Existing Facts 504, the Conventional Reasoning System 505, management of Facts 507 and progressive Ontology Structures 508 of FIG. 5. In the context of application Ser. No. 14/093,229, the computing system of the Client Computing Device 124, Storage 125 and Display 123 of FIG. 1b supports the Reasoning Client and Interface 515 of FIG. 5. In the context of application Ser. No. 14/093,229, the computing system of the Computing Device 120, with attached Storage 121 and attached by Communicative Connection 122 to a Network 132 of FIG. 1b, supports the Ontology Derivation System 501, Ontology Structures 508, Information Channel 510 and Existing Ontology 512 of FIG. 5. In the context of application Ser. No. 14/093,229, the computing system of the Client Computing Device 117, Storage 118 and Display 116 of FIG. 1b supports the Modeling System 514 and the interface with Information Channel 510 of FIG. 5.
In summary, the previous filings U.S. Pat. No. 8,751,918; U.S. Pat. No. 9,117,167; application Ser. No. 13/919,751; application Ser. No. 14/740,528 and application Ser. No. 14/093,229 disclose different functionalities of a comprehensive system described in part in FIG. 1.
This comprehensive system shown in FIG. 1 supports a two-sorted reasoning system. One 'sort' deals with representations and inferences supported by the current art. It is primarily supported in the Distributed Regular Reasoning Processors 106 of FIG. 1.
The second sort deals with metalevels, narrative abstraction, implicit facts and situation governance. This is primarily supported in the Distributed Situation Reasoning Processors 104 of FIG. 1.
To support the integration between these two levels, the explicit information in the first sort must be structured in a specific way. Novel user interfaces are employed to establish structure among elements of the first sort to bridge to the second sort. This is accomplished in structure stored by the Distributed Ontology Computation Processors 105 of FIG. 1. The process supported by this combination of user interface and internal storage has the additional benefit of modeling the known facts and inferences with more clarity than the current art because of the implicit use of situation theory.
FIG. 6 illustrates a flow chart for one such function. In this example, a user has available a partially structured and situated set of facts and is in the process of creating structure, with a focus on the linearized narrative structure of facts illustrated later in FIG. 19 as the 'causal lattice.'
The user is presented with work in progress which appears as an outline that is predominantly explanatory text. Other illustrative forms of information may be included, such as images, video, graphs, models, tables and so on without restriction. The task at hand is to structure precedence, building a multipath story.
Referring to FIG. 1a as the reference system, the user accesses the system by a Display 111. The service that is accessed is the Outliner/Lattice Display Processors 108 within the unified set of Distributed Display Processors 107. The information that is presented is preprocessed in this example by the Distributed Functional Processors 102.
The figure illustrates a flow of tasks performed in the operation of building and curating a type-linked narrative as a causal concept lattice. Processes handled by the second-sort Distributed Situation Reasoning Processors 104 of FIG. 1a are on the left of FIG. 6. Those handled by the situation-aware Distributed Regular Reasoning Processors 106 are on the right, and those executed by the human user, supported by the Distributed Display Processors 107, are in the center.
The user-centric task is straightforward: the user locates a point in a story or described process at Step 603. He modifies some detail at Step 608. New options about what happens or might happen next are presented in Step 611, from which choices are made in Step 607 and everything adjusts accordingly in Step 613.
This requires a coordinated set of processes from both reasoning systems. The primary steps are illustrated in the figure.
Our example medical research user begins a session at Step 601 with a certain point in a specific process in mind. The Distributed Functional Processors 102 recall what they know about how the user interacts with that sort of information, including the kinds of issues he works with, recent history with tentative conclusions and perhaps even the day of the week and time of day 602. This information is stored in the Situation Store and Associated Reasoning System 407 and delivered via Information Feed 420 to the Situation Reasoner 408 for late assembly. The behind-the-scenes operation managed by the Situation Reasoner 408 creates a view of the process that is tailored for his immediate purpose.
Within this view, the user will locate a specific point, a state, in the process at Step 603, using the user interface service of the Client Workstation 411 in the Outliner/Lattice Display Processors 108. This is presented as a structured narrative. His process of locating this state resituates the narrative 606, producing a new state of the assembled facts at Step 605, shown as Facts 507 (FIG. 5) computed by the Distributed Regular Reasoning Processors 106 using a Topoiesis Server 305 (FIG. 3).
A new outline is displayed in Step 607, using facts from Step 605 structured by situations from Step 606. The user then modifies some content. A wide variety of modifications are possible; in this example, an existing dependency is modified, resulting in a new configuration of the concept lattice from Ontology Structures 508 (FIG. 5).
The result is that facts are reindexed at Step 609 and a dialog is initiated between the situations of Step 610 that 'linearize' the facts of Step 609 in concept lattices. In other words, the system refines its understanding of what the user requires in the next steps of the 'story so far' and presents a new set of prioritized options for the user to specify what comes next in the sequence.
The user selects one of these options, and possibly indicates that others need to be preserved as possible alternatives in a later relinearization. The system then takes this new knowledge, reincorporates it in the situation store at Step 612, displays the result at Step 613, and starts the cycle over again with updated facts from Step 614 and situations from Step 615.
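Greatly simplified, one pass of this locate/choose/reincorporate cycle could be sketched as below. This is an illustrative reduction of the FIG. 6 flow, not the disclosed system; the function name `narrative_session` and the example strings are hypothetical.

```python
def narrative_session(facts, candidate_situations, choose):
    """One pass of the cycle: linearize options, apply the user's choice,
    and reincorporate it into the 'story so far'."""
    options = sorted(candidate_situations)   # prioritize next-step options
    selection = choose(options)              # the user selects what comes next
    return facts + [selection]               # updated facts for the next pass

story = narrative_session(["patient admitted"],
                          {"fever spikes", "labs ordered"},
                          choose=lambda opts: opts[0])
```

In the actual system, the prioritization and reincorporation are performed by the two cooperating reasoning processor sets rather than a simple sort and append.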
FIG. 7 illustrates the user interface at Step 611. The example in this case is a model of a film narrative. On the left is a displayed outline. The user has indicated a Resizable Outline Boundary 701 that advises the system which chunk of the outline is the current situation of interest.
This outline can be created by iterations of the process shown in FIG. 6. The outline can contain multimedia content, such as Text 702, Video 703 and other media elements not shown. The Video 703 is collapsible via Control 707 if a compact text-only view is desired. A Rewind Control 706 will step the iterative process of Step 607 through Step 613 back for respecification. The user may want to do this if it is apparent that the narrative process is going into unwanted futures.
The content can contain origins of Typed Links as taught in U.S. Pat. No. 8,751,918. These are indicated by a Typed Link Marker 708. The outline fragment selected in the Resizable Outline Boundary 701 also contains an Outline Child 711. The user has selected the parent segment as the root of the next situated fact collection by starting a drag, shown by the Typed Link Indicator 705, from the Affordance 710.
On the right-hand side of the figure are certain possibilities the system has selected for the new, successive sibling of the selected, situated outline entry. The user has dragged a Typed Link Indicator 705 to the second of the text-centric possibilities. Below are a number of media-centric possibilities as Thumbnails 704. These contain similar semantic content but are presented as thumbnails for compact presentation. A possible target for the Typed Link Indicator 705 can appear in both text and media presentations.
In some cases, it is difficult to evaluate a future without following it a few steps. The Affordance 709 is provided to allow the user to explore as many future steps as will be required to make an informed selection. In this case, the right-hand assembly is replaced with that step's options. The user can choose to accept several steps at once.
Once the selection has been made by the Typed Link Indicator 705, the type options as taught by U.S. Pat. No. 8,751,918 can be assigned.
FIG. 8 illustrates a flow chart for a related activity. Where FIGS. 6 and 7 concern structuring multithreaded, linearized sequences, FIGS. 8 and 9 illustrate the task of refining what a single fact/situation chunk means. The process is one of selecting a fact collection in its context, referring to a graphical presentation of what the system believes is meant and adjusting that to suit.
The user chooses an item in a chunk and overall context at Step 801, perhaps as delineated by a Resizable Outline Boundary 701. As with Step 606, the system assembles its situation at Step 802 and facts at Step 803. The user interface displays the outline at Step 804, possibly in the same manner as in FIG. 7 or later figures. In these steps, the user has indicated that she wishes to audit and refine what the system assumes, so the relevant Ontology Graphs are calculated in Step 805 by appropriate segmentation, with the desired segment displayed at Step 806. The user can modify the Ontology Graph by changing distance, increasing the scope to include more existing connections, or adding, deleting or editing nodes in Step 808.
The new results are conveyed to the system and the semantic connections are adjusted at Step 807. The new ontology arrangement conveys new meaning, nuance of meaning or resituated meaning and thus requires a new fabric of governance to be determined. This new governance may itself 'change' the meaning of the target chunk or other entangled chunks, so there is a feedback signal denoted by Path 811. The new situational fabric may adjust ontology relationships throughout the knowledge base.
FIG. 9 illustrates an example user interface for this operation. Schematic Ontology Graphs are shown in FIG. 13, where they map to infons, but here we show a more nuanced version. The domain is human biology, and the context is trauma-induced stress that affects sleep. In this situation, the role played by Corticoliberin in the Root Infon 901, the complex being defined, is highly context-specific.
The system presents an Ontology Graph derived from a baseline ontology imported from an external reference, as modified by various situations: the studied condition (trauma-induced sleep deprivation), the experimental protocols (embedded neuro-sensors in mice), the intent of discovery (the signals among different zones in the central nervous system) and the specific task of the moment (recording impressions from data). Items with horizontal borders are physical elements or structures. A Physical Item Selection Menu 905, here illustrated as a popup selector, contains a prioritized list of physical items the system believes are relevant.
Items with vertical borders are phenotypes, qualities or attributes and are sometimes associated with quantitative data. Solid lines between these, for example Semantic Relations 907, are Typed Links among Ontology Infons. These directed graphs are Husimi trees, meaning that relations can be established between elements and relationships, such as the Typed Link Semantic Relation 911 noted between Attribute Ontology Infon 909 and Typed Link Semantic Relation 910.
Solid-line Semantic Relations 907 indicate ontological relations dominated by strictly semantic considerations from the Distributed Regular Reasoning Processors 106 of the Computing Device 410 supporting the Conventional Reasoning System 505.
Secondary Semantic Relations 906 indicate relations dominated by situational influences, creating relations that would not be apparent in the current art.
The Typed Link Indicator 903 is the same as Typed Link Indicator 705, indicating Typed Links as taught in U.S. Pat. No. 8,751,918. The editing of the Ontology Graph is a means of refining the type. The Typed Link Indicator 903 may have some shape properties that provide additional information, as shown by the Typed Link Indicator 3303 of FIG. 33.
It indicates the main relationship link that connects a comprehensive view of the situation to the Ontology Graph. Such a comprehensive view can be the outline illustrated in the Resizable Outline Boundary 701 on the left side of FIG. 7, one of the other views described below or any formal structured model. The Descriptive Source Text 902 is representative of an entry in such an outline view.
The user has several means of editing. In FIG. 9, a specific item has been selected from the Phenotype Selection Menu 904 and dragged, for example, to Element Ontology Infon item 912, establishing a 'user wired' link indicated by the dotted-line User Typed Link Semantic Relation 908. This is an example of adding an element, in this case based on an observed behavior that the neurotransmitter has a specific nature.
Another novel editing technique allows the expert to establish ‘semantic distance’ among the elements by rearranging all visible items spatially to indicate his/her impression of this local definitional situation. The system will train itself to interpret subtle, subjective and intuitive cues from each expert user. As the user selects any element, the system temporarily displays connected elements to a user-specified depth to allow the user to evaluate the definition and its elements in a larger ontological context.
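The spatial-arrangement technique above can be reduced to a simple computation: the screen positions the expert assigns become pairwise 'semantic distances.' The sketch below is illustrative only; the function name and the example items are hypothetical and the actual system additionally trains on per-user cues.

```python
import math

def semantic_distances(layout):
    """layout: item -> (x, y) position as arranged by the expert.
    Returns pairwise Euclidean distances as a first-cut semantic distance."""
    items = sorted(layout)
    return {(a, b): math.dist(layout[a], layout[b])
            for i, a in enumerate(items) for b in items[i + 1:]}

d = semantic_distances({"cortisol": (0, 0), "sleep": (3, 4), "stress": (0, 1)})
```

Here items placed close together (e.g. "cortisol" and "stress") receive a small distance, encoding the expert's intuition that they are tightly related in this local definitional situation.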
Other editing mechanisms follow the art of established ontology tools, for example as found in Protégé™ from the Stanford Center for Biomedical Informatics Research. One use in the described embodiment is as a notebook for experimental teams that uses semantic and situated reasoning to manage evolving formal models that can be exported in publication-ready form or as rich semantic data.
FIG. 10 illustrates a flow chart of a method for creating Typed Links, an improvement over the novel method taught in U.S. Pat. No. 8,751,918, which connects two elements in different situations and possibly different ontologies by connecting elements across two different outlines. The means illustrated by FIGS. 10 and 11 is display of a representation of the structured statements as vectors in Hilbert Space. Hilbert Space is widely used in the art, and the methods of creating and displaying vectors in this space are standard. Such vectors are distinct from the Ontology Graphs, for example of FIG. 9, with the specification of the vector space being formally determined by the context, here the Distributed Situation Reasoning Processors 104, Wreathing Engine 204, Situation Reasoner 408 and Ontology Derivation System 501.
Application Ser. No. 13/919,751 teaches the use of Hilbert Space vectors in FIGS. 18, 20 and 21 of that disclosure.
The user in this example has information in two ontology spaces that need to be related. An instance may be formal knowledge about the neurobiology of dream behavior in the context of cognitive phenotypes that needs to be bridged with the information noted in FIG. 9 associated with cell-level signals. The user advises the system that this operation is desired in Step 1001 and selects the two populated situations in Step 1002. As is typical, the two reasoning systems prepare their structures: the Distributed Situation Reasoning Processors 104 prepare the narratives in Step 1004, including at least those in the constituent domains plus the intended bridging process. The Distributed Regular Reasoning Processors 106 collect the relevant facts and their ontological relationships in Step 1003. As before, the outline view is assembled and displayed in Step 1005, and this information is also presented as Hilbert Space vectors in Step 1006 using additional semantic information.
A user can then select an affordance in either an outline view or its associated Hilbert Space view and see it selected in the other. That user can then drag from that affordance to any element in the other situation, whether outline, Hilbert Space or other representation. (Some are described below.) The situations are updated in Step 1008, this time calling on more fundamental categoric operations that manipulate semantics. Possibly profound enhancements may occur in the relevant ontologies at Step 1009. The user can now interact with the two joined situations in Step 1010.
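One plausible supporting computation for such cross-situation linking, sketched here under the assumption that statements already have vector representations supplied by the reasoning layer, is to rank candidate targets in the second situation by similarity to the selected statement's vector. All names and vectors below are hypothetical.

```python
def cosine(u, v):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def rank_targets(selected_vec, other_situation):
    """other_situation: statement name -> vector. Most similar first."""
    return sorted(other_situation,
                  key=lambda name: cosine(selected_vec, other_situation[name]),
                  reverse=True)

ranked = rank_targets([1.0, 0.0],
                      {"dream phenotype": [0.9, 0.1], "cell signal": [0.0, 1.0]})
```

A ranking like this could suggest likely drag targets, while the actual typing and ontology enhancement remain the categoric operations described in the text.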
FIG. 11, similar to FIG. 18 of application Ser. No. 13/919,751, illustrates an embodiment of a user interface for such an operation.
A chunk selected by the Resizable Outline Boundary 1101 is similar to 701. It is composed of Information Chunks 1111 and Information Chunk Children 1115. In this example, the chunks are expressed as text strings that have underlying infon representations. A standard notation for infons in the art is delineation by double carets, as in Topoiesis Infons 1302 of FIG. 13; when infons are represented by their accompanying structured natural language strings, as here in Information Chunks 1111 and Information Chunk Children 1115, they are delineated by single carets.
The chunk selected by the Resizable Outline Boundary 1101 is displayed in a Hilbert Space Visualization 1102. A similar Hilbert Space Visualization 1116 is matched to another chunk selected by a Resizable Outline Boundary 1101 not shown. A Selected Information Chunk 1117 is mirrored in the Selected Vector Information Chunk 1112 and highlights the corresponding item in the paired representation. A user can drag from either the Selected Information Chunk 1117 or the corresponding Selected Vector Information Chunk 1112 to a second Vector Information Chunk 1113 in another Hilbert Space Visualization 1116, creating a Typed Link Indicator 1114 as taught in application Ser. No. 13/919,751 and previously shown. FIG. 11 illustrates relevant affordances as described in FIG. 18 of application Ser. No. 13/919,751. Information Chunk Children 1115 are collapsible and expandable. The Selected Information Chunk 1117 has a similar affordance, but enlarged to indicate the chunk (and children) are selected. Alias Affordance 1103 designates whether the Resizable Outline Boundary's 1101 chunk is an alias, having a copy in another location in the outline, allowing for complex lattice flows. Visualization Popup 1104 over the Hilbert Space Visualization 1102 provides visualization options selected for that panel. For example, an Ontology Graph of FIG. 9 may be chosen. Visualization Title 1106 indicates the visualized chunk of the Resizable Outline Boundary 1101.
The Hilbert Space presentation contains inspectable Hilbert Space Designators 1105 and a specific Hilbert Space Origin 1107 as the basis for the chunk's first statement. Statement Terminals 1108 delineate the scope of the vector. Vector Nodes 1109 correspond to Information Chunk Children 1115 and the Selected Information Chunk 1117. Subvector Lines 1110 do not correspond to elements of the outline, being an artifact of the vectorization derived from, but not directly identifiable from, the Ontology Graphs.
The process is supported by the Ontology Graph Display Processors 110.
FIGS. 6 through 11 extend the functions of U.S. Pat. No. 8,751,918, specifically the ability to support ontologically informed narrative situation construction (FIGS. 6 and 7), situated ontology enrichment (FIGS. 8 and 9) and ontology federation (FIGS. 10 and 11).
In the context of U.S. Pat. No. 8,751,918, the computing system of the Computing Device 127, Storage 128, Communicative Connection 129 and Network 130 of FIG. 1b supports the ontological processing required for Typed-Link management. In the context of U.S. Pat. No. 8,751,918, the computing system of the Client Computing Device 124, Storage 125 and Display 123 of FIG. 1b supports the interactions taught in specifying, navigating, manipulating and using Typed Links. In the context of U.S. Pat. No. 8,751,918, the computing system of the Computing Device 120, with attached Storage 121 and attached by Communicative Connection 122 to a Network 132 of FIG. 1b, supports Situated Reasoning in support of the Typed Links. In the context of U.S. Pat. No. 8,751,918, the computing system of the Client Computing Device 117, Storage 118 and Display 116 of FIG. 1b supports the automated recognition of Types.
FIG. 12 schematically illustrates the relationship among the representations in the system. The specific function illustrated is the fractional mapping of a feature within the context of an emerging situation as taught in U.S. Pat. No. 9,117,167. In that patent, a feature is extracted from an information stream within a local context. The result is termed a ‘semantic b-frame.’ Described herein is a more general application: the feature may be from a stream, a data pool or a knowledge base.
Infon Sequence 1201 designates a structured sequence of infons that is extractable from an information stream, a data pool or a knowledge base. Infons are similar to Resource Description Framework (RDF) triples; many methods exist in the art to structure information of any type as RDF triples, and these apply to infons. The Infon Sequence 1201 normally will consist of component infons, following a nesting method described in FIG. 18, extended in this disclosure from FIG. 9 of application Ser. No. 14/740,528.
Component Infons 1202, elsewhere called 'Topoiesis Infons,' consist of an Infon Relation 1203, an Infon Parameter1 1204, an Infon Parameter2 1205 and an Infon Function 1206 that supports the mapping between the Distributed Situation Reasoning Processors 104 and the Distributed Regular Reasoning Processors 106. A Topology Abstraction Process 1210 employs Infon Functions 1206 to map the Infon Sequence 1201 to a category schematically shown as Infon Category 1208. Component Topological Types 1209 are indicated as supporting the abstraction.
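The four-part component infon, and its kinship to RDF triples, can be sketched as a plain data structure. This is a minimal illustration of the shape only; the class name `Infon` and the example relation are hypothetical, and the Infon Function is left as an abstract hook.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Infon:
    relation: str        # corresponds to the Infon Relation 1203
    param1: Any          # Infon Parameter1 1204
    param2: Any          # Infon Parameter2 1205
    function: Any = None # Infon Function 1206 hook, left abstract here

    def as_triple(self):
        # Project to an RDF-style (subject, predicate, object) triple,
        # mirroring the analogy drawn in the text.
        return (self.param1, self.relation, self.param2)

i = Infon("inhibits", "cortisol", "REM sleep")
```

The triple projection is lossy in one direction: the function slot, which carries the situated mapping, has no RDF counterpart.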
The method for extracting the topology of logical statements as categories is well known in the art. In this schematic representation, the Infon Category 1208 consists of Categoric Elements 1211 that are related by Categoric Morphisms 1212. The combination of Categoric Elements 1211 and Categoric Morphisms 1212 captures essential structure of the Infon Sequence 1201 and can be considered an abstract signature. The Supports Symbol 1207 is used in an expression denoting that the Component Infons 1202 represented in the Infon Sequence 1201 on the right 'are supported by' the situation on the left represented by the Infon Category 1208.
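A category of this schematic kind is just elements plus morphisms between them, and one simple 'abstract signature' is a degree profile over the morphisms. The sketch below is an assumption-laden toy, not the disclosed representation; the class name `Category` and the signature choice are illustrative.

```python
from collections import Counter

class Category:
    def __init__(self, elements, morphisms):
        self.elements = set(elements)
        self.morphisms = set(morphisms)   # (source, target) pairs
        # Every morphism must connect elements of this category.
        assert all(s in self.elements and t in self.elements
                   for s, t in self.morphisms)

    def signature(self):
        """A degree profile: how many elements have each out-degree.
        Comparable across categories as a crude structural fingerprint."""
        out_deg = Counter(s for s, _ in self.morphisms)
        return Counter(out_deg.values())

c = Category({"A", "B", "C"}, {("A", "B"), ("A", "C"), ("B", "C")})
```

Two categories with matching signatures are candidates for the functor matching described next; a mismatch rules a mapping out cheaply.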
The system stores characteristic categories and intercategory dynamics that are themselves stored as categories. An example is shown as Dynamics Reference Category 1214, having the same fundamental structure of elements and morphisms. Dynamics Reference Category 1214 is the situation in which the Concept Lattice 1215 is supported. Clever specification of Concept Lattices 1215 can result in a vocabulary of Dynamics Reference Categories 1214 that serve the function of the control group of U.S. Pat. No. 9,117,167, but more generally.
The process described in U.S. Pat. No. 9,117,167 is group theoretic, using a wreath product over fiber bundles. This more general method subsumes wreath products in a more general method of morphisms (as functors) among instances of Infon Categories 1208 and a stored vocabulary of Dynamics Reference Categories 1214 that capture the structure of known dynamics stored in generic Concept Lattices 1215. Concept Lattices 1215, as described in later figures, are multipath Topoiesis Infon 1216 structures. Topoiesis Infons 1216, Infon Sequences 1201 and Component Infons 1202 are logically and mathematically congruent.
To make the correlation clear between the categoric operation and the group operation, the figure shows an Example Functor 1213 consisting of Component Functor Morphisms 1217 mapping structure from Infon Categories 1208 to Dynamics Reference Categories 1214 and thence from Infon Sequences 1201 to Concept Lattices 1215.
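At this schematic level, a functor is an object map together with the requirement that it send each source morphism to an existing target morphism. The check below is a hedged toy illustration; the function name `is_functor` and the example element names are hypothetical, and composition/identity laws are omitted for brevity.

```python
def is_functor(obj_map, src_morphisms, dst_morphisms):
    """True iff every source morphism (s, t) lands on an existing
    morphism (F(s), F(t)) in the target category."""
    return all((obj_map[s], obj_map[t]) in dst_morphisms
               for s, t in src_morphisms)

src = {("infon1", "infon2")}                        # Infon Category morphisms
dst = {("state1", "state2"), ("state2", "state3")}  # Dynamics Reference Category
ok = is_functor({"infon1": "state1", "infon2": "state2"}, src, dst)
```

A mapping that passes this check preserves the structure being compared, which is the sense in which the Example Functor carries Infon Sequence structure onto a stored Concept Lattice.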
FIG. 12 thus improves upon U.S. Pat. No. 9,117,167 to deal with any feature type in any situation, hosted by any computing environment supporting the system architecture of FIG. 1.
FIG. 13 is similar to FIG. 4 of application Ser. No. 13/919,751 and FIG. 5 of application Ser. No. 14/093,229. Those disclosures teach the method, also described in U.S. Pat. No. 8,751,918, of registering Topoiesis Infon Elements 1302 of Topoiesis Infon 1301 to structures of Ontology Infons 1304 that in the cited disclosures are themselves infons. These structures can be constructed and maintained using conventional Ontology Relations 1306. Application Ser. No. 13/919,751 terms these 'Arrows.'
Note that when more than one Topoiesis Infon 1301 is considered, the Ontology Graphs can have Shared Ontology Infons 1307. In general, infons that are related by experience or narrative have a great many overlaps. Application Ser. No. 13/919,751 teaches a method of managing, processing and displaying these overlaps.
FIG. 14 is similar to FIG. 5 of that disclosure, in which a Cell 1401 comprises a set composed of Infons 1402 (application Ser. No. 13/919,751 terms these 'Points') and Functions 1403 that reference those Infons 1402.
A more general method considers the structures shown in FIG. 13, where each Infon 1402 is not a simple Topoiesis Infon 1301 as taught in application Ser. No. 13/919,751 but also includes Ontology Infons 1307 that, when nested with connected Ontology Infons 1304 and Topoiesis Infons 1302, form a composite infon that captures both the information of the source Topoiesis Infon 1302 plus all the 'semantic connectedness' information among them. The composition method is as described in FIG. 18 here and taught in application Ser. Nos. 14/740,528 and 14/093,229.
When this technique is used, the Cell 1401 becomes an Infon Category 1208 and the Functions 1403 become, when combined, the Component Functor Morphisms 1217 that collectively comprise the Example Functor 1213. By this means, the method taught by application Ser. No. 13/919,751 can be extended to any item of information, related to any other and handled in a category theoretic fashion. By means common to the art and enabled by the Curry-Howard correspondence, any structure satisfying the requirements of FIG. 13 can be coded using common functional programming techniques.
The means by which this is supported is schematically shown in FIG. 15, with the instance of a known collection of knowledge being enhanced by new knowledge.
FIG. 15a shows a Concept Lattice 1502, being a multithreaded structure composed of Topoiesis Infons 1501 similar to Infon Sequence 1201.
Each infon, infon element and infon constituent (in the case of composed infons) has a discrete Ontology Graph as disclosed in FIGS. 9 and 13. For clarity, two of these are illustrated as Ontology Graphs 1503. Primary Ontology Infons 1504 in the respective Ontology Graphs are colored black and the Primary Semantic Relations 1507 darkened. Other Ontology Infons 1505 are shown in white with their Semantic Relations 1506. Only a few are shown; typically there are a great many 'background' Ontology Infons 1504 and Semantic Relations 1506. The difference between those in black 1504 and white 1505 is set by the user via a limit on the boundary of interest.
FIG. 15a therefore illustrates a Concept Lattice 1502 of a narrative, situation or model with the Ontology Graphs 1503 of two elements highlighted, together with some less relevant background Ontology Infons 1505. FIG. 15b introduces a new fact, a New Topoiesis Infon 1508. It has its own Ontology Graph 1509. As is normally the case, some Ontology Infons 1505 in this new element's Ontology Graph 1509 are shared with those in the Concept Lattice 1502.
Ontology Graphs 1503, 1509 exert forces on each other, shifting the influence of the Ontology Infons 1505. Thus, the balance of meaning in FIG. 15a will be adjusted as the new forces of Ontology Graph 1509 are incorporated through the Distributed Situation Reasoning Processors 104. This process is schematically illustrated in FIG. 15c. Changes are determined by the Example Functors 1213 as they are calculated. These are shown separately in the upper right of the figure; their effect is illustrated in the influence of the New Ontology Structure 1510 on the now-adjusting earlier Ontology Graphs 1503 and 1513.
The Change Vectors 1512 of the Example Functors 1213 can be viewed as a separate structure.
The result is shown in FIG. 15d. The same two Ontology Graphs 1513 are shown as Ontology Graph 1503 in FIG. 15a, but their contents and structure have been adjusted. Consequently, the Concept Lattice 1514 has been adjusted, reflecting its evolved meaning.
The signals conveyed by each Example Functor 1213 are the 'thunks' taught in application Ser. No. 13/919,751.
Examples of this behavior include the case of collaborative feature recognition across many streaming sources as taught in U.S. Pat. No. 9,117,167. Concept Lattice 1502 in this case represents an instance of an evolving tentative feature composition and Ontology Graph 1511 represents an instance of a continually refining reference feature.
Another example is the case of narrative modeling as taught in application Ser. No. 14/093,229, where Concept Lattice 1502 is the 'story so far' and the New Topoiesis Infon 1508 is the next element of the story, for example in text or film.
Yet another example can be found in the teaching of U.S. Pat. No. 8,751,918, which can be used for modeling of biological systems. In this instance, Concept Lattice 1502 may be an experimenter's notebook containing knowledge of a specific biomedical system and New Topoiesis Infon 1508 an entry of new experimental information.
Moreover, as taught in application Ser. No. 13/919,751, the lattice of Concept Lattice 1502 may be a network of processing code as functions, with New Topoiesis Infon 1508 a new function, algorithm or monitor.
FIG. 16 is based on FIG. 10 of application Ser. No. 14/740,528, wherein is taught the ability to mix spatial and temporal annotations on a compact, navigable representation of a film. FIG. 16 illustrates different representations of an object in space. A space-time representation of a film, Space-Time Strip 1600, has a selected location indicated by Location Marker 1608, being the location that contains the object. When selected, the area that the object occupies in the Space-Time Slice 1602 (application Ser. No. 14/740,528 terms this an 'Object' or 'Area') can be highlighted, perhaps by scintillation of the pixels involved.
Optionally, a cartoon or other reduced representation of the entire object, here an Eagle 1604, can be shown as it exists in the frame selected by the Location Marker 1608. As the time selection of the film advances or reverses, the object's representation animates within the frame, and optionally within another Space-Time Frame 1605 (application Ser. No. 14/740,528 terms this a 'Location'), or offscreen as indicated by an Affordance 1603 (application Ser. No. 14/740,528 terms this an 'Object'). Alternately, the Full Fidelity Eagle 1606 and Later Full Fidelity Eagle 1607 can be animated.
Such objects are readily identified and placed as taught in U.S. Pat. No. 9,117,167. If identified by this or similar means, then a situated Ontology Graph exists for each instance of that object, changing as situations evolve through the narrative of the film. A novelty in FIG. 16 is the ability to view the Ontology Graph within the Space-Time Strip 1600 and manipulate its meaning and its Hilbert Space sibling as previously described in the outline view in FIGS. 9 and 11.
FIG. 17 expands FIG. 9 of application Ser. No. 14/740,528, adding the Concept Lattice Layer 1710, supplementing the presentation layers in the described embodiment of application Ser. No. 14/740,528. The film images are on a Film Layer 1700, displaying in part an object, in this case a Hand 1705. Semantic Frame Layer 1701 contains the Semantic Frame 1706 extracted as taught in U.S. Pat. No. 9,117,167 and there called a 'semantic B-frame' to emphasize the ability to employ compression artifacts. Outline Layer 1702 draws the Outline of the Object 1707, from either Space-Time Slice 1602, Eagle 1604 or Full Fidelity Eagle 1606.
Semantics Layer 1703 contains displayable physical metadata, Object, Object Path and Environmental Notation 1708, about the object or environment, such as implied mass, movement and intent. This will have been deduced by processes such as those discussed in FIG. 12 and taught in application Ser. No. 13/919,751.
Concept Lattice Layer 1710 contains the Concept Lattice 1711, enriched by the semantic information as an Ontology Graph or Hilbert Space representation as described in FIGS. 9, 11 and 13.
Temporal Annotation Layer 1704 contains Temporal Annotations 1709 as taught in application Ser. No. 14/740,528.
FIG. 18 is derived from FIG. 14 of application Ser. No. 13/919,751, wherein is taught a method of infon nesting and parsing. A new ability to drag semantic elements to reassign meaning is taught in our FIGS. 8 through 11. This same underlying ability allows us to reregister semantics when displayed in this nesting graph. Such a nesting graph is the Topoiesis Infon equivalent of the Ontology Graphs among Ontology Infons.
An example initial chunk of information is 'An author is typing in Chicago.' One Component Topoiesis Infon 1812 is captured in the diagram as Component Topoiesis Infon 1809 and Component Topoiesis Infon 1810 joined at an 'is' node. Enclosing infons can capture the explicit situation that the 'author' (1810) 'is typing' (1809) 'on a Windows™' (1808) 'computer' (1811) and is 'in Chicago' (1806). The Nested Infon 1803 combines the components to mean 'in Chicago'.
In this example, all Nodes 1801 are the 'is' relation. Any Node 1801, 1805, 1807, 1809, 1810, 1811 with its Leading Links 1802, 1804 is a Topoiesis Infon 1803, which for example captures the notion that 'someone is in Chicago'.
Application Ser. No. 13/919,751 teaches the central nature of this nesting in building the functional reactive fabric of the system. An added novelty in FIG. 18 is that the user can select a Node 1813 and reassign it within the graph wherever logical dependencies allow. A user may wish to perform a reassignment to adjust 'semantic distance' by changing the nesting to present the more relevant facts as foremost leaves. For instance, if a forthcoming fact concerns a physical disaster, it may be more significant that the subject is in Chicago than that she is using a Windows™ computer.
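A hedged sketch of such a re-nesting follows; the tuple encoding, the `renest` function and the relevance score are illustrative assumptions, not the disclosure's method:

```python
# Hypothetical nesting: (relation, children...) tuples; leaves are strings.
# 'An author is typing in Chicago on a Windows computer.'
nesting = ("is",
           ("author", "typing"),
           ("on", ("Windows", "computer")),
           ("in", ("Chicago",)))

def renest(node, relevance):
    """Reorder children so the most relevant facts become the foremost
    leaves, leaving the tuple structure (logical dependencies) intact."""
    if isinstance(node, str):
        return node
    head, *children = node
    reordered = sorted((renest(c, relevance) for c in children),
                       key=relevance, reverse=True)
    return (head, *reordered)

# A forthcoming fact concerns a physical disaster, so location matters most.
score = lambda c: 2 if "Chicago" in str(c) else 0
foregrounded = renest(nesting, score)
```

After the call, the 'in Chicago' component leads the nesting while the Windows computer recedes.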
This nesting view is substitutable for any of the semantic views. Thus, a user can modify semantic structures via the Futures View of FIG. 7, the Ontology Graph view of FIG. 9 (and FIG. 19 of application Ser. No. 13/919,751), the Hilbert Space view of FIG. 11 or the nesting view of FIG. 18. These can be in the context of an outline as in FIGS. 7 and 11, the Space-Time Scrubber of FIG. 16 or the Concept Lattice of FIG. 15, described more fully below in FIGS. 19, 26 and 27.
FIG. 19 illustrates a Concept Lattice 1901. The method of constructing and using such a lattice is taught in application Ser. No. 14/093,229; FIG. 37 of that disclosure is the source. On the right, in FIG. 19b, the Concept Lattice 1901 is displayed. Each Topoiesis Infon 1911 is a structured infon, typically with nested information as described in FIG. 18. In terms of sequence, the Concept Lattice begins at the Originating Topoiesis Infon 1903. Connectives 1904 are logical connectives, typically of the 'and-then' type. In one embodiment, the Governing Path 1905 is drawn darker. The quality of governance is taught in application Ser. No. 14/093,229. FIG. 19a illustrates a simple extraction of the categoric structure of the Concept Lattice, using a skeletal lattice, Half-Dual 1902, as an example. In this case, lines and nodes are converted to each other. For example, Node 1907, labelled '12-15', is derived from the Topoiesis Infon 1910 that connects nodes numbered 12 and 15 in the Concept Lattice 1901. Connective 1906 is derived from Topoiesis Infon 1911, numbered 14.
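The 'lines and nodes are converted to each other' step resembles a line-graph construction; a minimal sketch under that assumption (the node numbering and helper names are ours, not FIG. 19's actual lattice):

```python
from itertools import combinations

# Toy Concept Lattice: numbered nodes joined by connective edges.
edges = [(12, 14), (14, 15), (12, 15)]

def half_dual(edges):
    """Convert lines and nodes into each other: each original edge becomes
    a dual node labeled 'a-b'; two dual nodes are connected whenever the
    original edges shared an endpoint."""
    nodes = [f"{a}-{b}" for a, b in edges]
    links = [(f"{a}-{b}", f"{c}-{d}")
             for (a, b), (c, d) in combinations(edges, 2)
             if {a, b} & {c, d}]
    return nodes, links

dual_nodes, dual_links = half_dual(edges)
```

Here the edge joining nodes 12 and 15 becomes the dual node '12-15', mirroring the derivation of Node 1907.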
The relationship of Concept Lattice 1901 and Half-Dual 1902 is the same as that of Infon Sequence 1201 and Infon Category 1208 in FIG. 12.
A new novelty is that users can directly reassign nodes in the lattice by selecting a node, here illustrated as Selected Topoiesis Infon 1908, 1909, and dragging it and its connecting links to another location in the lattice, or copying or moving it to a location in another lattice. This can be combined with other views and semantic editing modes as previously described.
For example, FIG. 20 imposes a Concept Lattice of the type shown in FIG. 19b on a Space-Time representation as illustrated in FIG. 16 and taught in FIG. 7 of application Ser. No. 14/740,528. Each node, a Topoiesis Infon 2013 in an instance of a Concept Lattice 2009, corresponds to a point or span of time. Each video or stream slice in the Space-Time Strip 2000 also corresponds to a moment. The Topoiesis Infons 2013 are matched to the relevant Space-Time Slices 2004, Temporal Annotations 2008, Marked Timespans 2002, Script Times 2007 (absolute times in the story or described model, separate from the description), Precise Times 2005, Spatial Annotations 2012, other Markers 2010, 2001, or via a Typed Link Indicator 2011 to a location in one of the other representations described above.
FIG. 21 illustrates a User Interface 2101 incorporating the Space-Time Strip 2000 (the same as 1600) as taught in application Ser. No. 14/740,528, with associated information. The example is from a biological systems model. The bottom part of the user interface is dominated by the Space-Time Strip 2115. An area immediately above, a Text Annotation Area 2107, contains metadata associated with the model and the selected instant. That instant is marked by a Location Bar 2114.
Under the Space-Time Strip 2115 is a Second Text Annotation Area 2111 with user-editable notes keyed to temporal location. A Scrubber 2108, here shown as a black bar, functions as a traditional scrubber; an Indicator Rectangle 2109 indicates the zone of the process visible in the displayed Space-Time Strip, with a small Location Bar 2110 mirroring Location Bar 2114.
The described embodiment shows Upper Area Controls 2106 and Lower Area Controls 2104. The upper area is dominated by a Key Frame 2102 that contains the detailed model of what is happening at that instant. These are visual representations of a Topoiesis Infon 1911, 2033 of a Concept Lattice. This is an editable field. A biological process is displayed, and edits can be made using a coherent visual grammar that is an intermediary with the more abstract Ontology Graph.
Because many threads of the Concept Lattice may be active, a Selection Zone 2105 allows the user to choose which thread to examine. A Control 2103 allows the user to go forward or back in that single thread. Temporal Annotations 2113 can be keyed to these threads as a surrogate for the Concept Lattice overlay of FIG. 20.
An extension of the Space-Time Strip 2115 taught in application Ser. No. 14/740,528 is the ability to display quantitative information associated with a Space-Time Slice as graphs. Two bar charts are illustrated: one with black bars measured from the bottom, Bar Chart 1 2112, and another measured from the top, Bar Chart 2 2115, with variation shown in black.
FIG. 22 is similar to FIG. 7 of application Ser. No. 14/093,229, which teaches a method of modeling the dynamics associated with a Concept Lattice 1901. Key elements of that disclosure are three zones in the graphical language.
A Central Zone 2202 in the embodiment disclosed in application Ser. No. 14/093,229 contains elements identical to the nodes of the Concept Lattice, without necessarily displaying the structure, though the Concept Lattice can be superimposed on this field. The Central Zone 2202 displays the Topoiesis Infons 2214 in one of their display modes. The Central Zone 2202 thus is the Concept Lattice Space.
An Upper Zone 2201 models the influence of the Ontology Graphs 2215 of FIGS. 9 and 13, connected from Topoiesis Infons 2214. FIG. 22 adds the ability to explicitly display an editable field. Shown is the Ontology Graph 2215 spanning the successively more primitive Ontology Zone 1 2207, Ontology Zone 2 2208 and Ontology Zone 3 2209, but other editable fields can be displayed: the visual grammar of the Central Zone 2102, if the domain allows one; a Hilbert Space view of FIG. 11; a nesting view of FIG. 18; or a cross-ontology outline of FIG. 7. The Upper Zone 2201 is therefore the Ontology Graph Space.
The Lower Zone 2203 tokenizes the topology of the Example Functor 1213 and Change Vectors 1512 and is also editable, being a window into the Distributed Situation and Situation Dynamics Store 112. The editor may use an interface disclosed in FIGS. 38, 39 and 40 of application Ser. No. 14/093,229, which can be superimposed on this field. The Lower Zone 2203 is thus the Dynamics Space, where the work of the Distributed Situation Reasoning Processors 104 is visualized as described in application Ser. No. 14/093,229.
FIG. 23 is derived from FIG. 16 of application Ser. No. 13/919,751, which teaches an Outline Segment 2301 with an Assignable Governing Situation 2302 (similar to Visualization Title 1106), Parents 2303 and Children 2305, 2306. Some chunks are both Parent and Child 2304. A Hollow Affordance 2309 designates an alias, compared to a Solid Affordance 2307, 2308.
One novel extension is the ability to select a Selected Chunk 2310, collapsed or not, and reassign it in the outline as parent or child, with all the nesting reassignments of FIG. 18 performed automatically using the Ontology Graph governance taught in FIGS. 12 and 15.
FIG. 24 illustrates a similar Outline 2401, derived from FIG. 17 of application Ser. No. 13/919,751. In this case, the Outline 2401 is used to display the infon nesting of FIG. 18 directly and to provide richer affordances for associated views such as, but not limited to, those illustrated in FIGS. 7, 9, 11, 13, 16, 18, 19, 20, 21 and 22.
In this case the representation sits between the natural language of outlines, as illustrated in FIGS. 7 and 23, and the Topoiesis infons 1216, 1301, 1803 of FIGS. 12, 13 and 18. The representation is as discussed in FIG. 11, consisting of Topoiesis infons expressed as structured natural language.
The upper right of the Outline 2401 contains an Option Control 2402, which if not activated appears alone with no controls below it. If activated, a popup menu (not shown) provides for allowing the appearance of the Label Field 2403 and/or the Infon Control Gutter 2404.
The Label Field 2403 has two zones. The Top Zone 2405 contains the Field Name 2406. This Field Name serves the purpose of advising collaborating users on the contents of the information in short form. The zone contains an expanding Name Option Popup 2407 which displays a popup inspector (not shown) that has more detailed information about the contents. For example, the more detailed information may include a longer description, the source, the storage, the trustworthiness, the age and so on.
A second zone contains the Option Popup List 2416 (list not shown) to select the nature of the outline display. When this is selected, each entry in the main outline is displayed in single carats, being a natural language expression of the fact. Parent 2408 is such a fact. Its outline control, the Disclosure Triangle 2409, indicates that there is more detail. In this example, the entry 'Leonard gets a phone call from an unknown' is a scene in a film, and children of that entry may provide details about plot, cinematic expression and any other desired annotation.
Outline Chunks 2408 and 2410 in the figure are sequences in a narrative construction, so that interpretations in the Ontology Graph of any one entry or its children can affect the Ontology Graphs of all other entries, as previously described. In this case the Option Control 2402 has been toggled to display the Infon Control Gutter 2404, which contains controls.
A Disclosure Triangle 2411, when pointing to the left, indicates that no detail of the infon is displayed. The Disclosure Triangle 2411 here has been turned down by clicking, to display detail of the Outline Chunk 2408 in natural language form. This can be changed to display formal Topoiesis Infons.
In the figure, an entire panel is expanded, contained in an interior Field 2412 illustrated here as a rounded rectangle. It contains four entries, one for each of the four elements of the Topoiesis Infon, including the Relation 2417 (in italics), Parameter1 2418 and Parameter2 2419. Each of these is displayed on its own line with its own disclosure triangle; each can be expanded to inspect its internal structure.
Parameter2 2419, 'phone', has been expanded, as shown by the Disclosure Triangle 2413. This has exposed the first-tier Ontology Infon in the Ontology Graph. Typically several Ontology Infons will be opened for each expanded Topoiesis Infon component (relation or parameter). Ontology Infons have three constituents, each displayed on its own line and enclosed in a Child Enclosure 2414.
Any number of elements can be simultaneously expanded. If the Outline Segment 2401 is not large enough for the expanded items, the Infon Control Gutter 2404 doubles as a scroll bar. The figure illustrates with the Arrow 2415 that there is content out of view at the bottom of the Outline Segment 2401. Should the content be scrolled in a way that unviewed material is off the top of the Field, then Option Control 2402 will be replaced with the upward twin of the Arrow 2415.
Topoiesis Infons can in this manner have their Ontology Graphs fully explored.
Any entry on the graph that appears in two linkage paths is displayed as an alias. One novel use of this view is to arrange a collection of facts under headers by dragging and dropping, perhaps from other Fields, or by linking from other Fields as described below. In that case, the user will have a number of sequential facts as Topoiesis Infons, each under a header. For example, by selecting 'Hilbert Space' from the Name Option Popup 2407, the outline view can be replaced by one in which each Outline Header 2409 generates a vector from the children under it.
These are seen as Topoiesis Infon statements. Each of these expressions generates the vector. The collection of such vectors displayed in the Outline Segment 2401 defines a Hilbert Space, as described in FIG. 11, in which all the defined vectors are well behaved.
A novel extension to application Ser. No. 13/919,751 is the ability to select a Selected Child Enclosure 2412 and reassign it in the outline as parent or child, with all the nesting reassignments of FIG. 18 performed automatically using the Ontology Graph governance taught in FIGS. 12 and 15, updating in real time.
FIGS. 25 through 35 illustrate new capabilities using principles of U.S. Pat. No. 8,751,918; U.S. Pat. No. 9,117,167; application Ser. No. 13/919,751; application Ser. No. 14/740,528 and application Ser. No. 14/093,229, and the display of those capabilities using Concept Lattices as the primary visual grammar.
FIG. 25 illustrates four node types that present Topoiesis Infons found in common Concept Lattices. The types expand the capability of Concept Lattices as known in the art. The new capability results from the ability to reason over the open world afforded by Situation Theory. The implication is that useful reasoning will occur over nodes that are partially or totally unknown.
A Primitive Infon is displayed as 2501. The definition of primitive varies by user, domain and application. A primitive is the deepest component that concerns the user. For example, a primitive for a biomedical researcher may be 'a-helical CRH9-4' 909 of FIG. 9.
A nested infon where all the internal components are known and stored, as illustrated in an Ontology Store 205, an Information Server 303, an Ontology Store and Reasoning System 404 and Ontology Structures 508, is represented by Nested Infon 2502. A novel feature of embodiments of the invention is the central use of Situation Theory, which allows the use of infons with internal nesting that contains unknown elements. This is denoted by Unknown Element Infon 2503. Unknown elements in this case include items that are knowable and unknowable. Unknown Element Infon 2503 only applies when unknown component items are suspected; our use of Situation Theory presumes that even fully explicit Nested Infons 2502 are likely to contain unknown or unresolved components.
Infons or infon constructions whose existence is known but whose information is wholly unknown are denoted by Unknown Infon 2504. Collectively, Unknown Infons 2504, Unknown Element Infons 2503 and Nested Infons 2502 comprise a set known as Soft Infons.
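A hedged sketch of this taxonomy as a tagged type (the names are ours), with the Soft Infon grouping stated above:

```python
from enum import Enum, auto

# Hypothetical tagging of the four node types of FIG. 25.
class InfonKind(Enum):
    PRIMITIVE = auto()        # deepest component of concern (2501)
    NESTED = auto()           # all internal components known (2502)
    UNKNOWN_ELEMENT = auto()  # nesting with suspected unknowns (2503)
    UNKNOWN = auto()          # existence known, content wholly unknown (2504)

# Per the text, Nested, Unknown Element and Unknown Infons together
# comprise the Soft Infons.
SOFT_INFONS = {InfonKind.NESTED, InfonKind.UNKNOWN_ELEMENT, InfonKind.UNKNOWN}

def is_soft(kind: InfonKind) -> bool:
    return kind in SOFT_INFONS
```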
FIG. 26, illustrating a Concept Lattice 2601, is identical in nature to FIG. 19b here and FIG. 37 of application Ser. No. 14/093,229, but with the 'soft' infons of FIG. 25. It represents what application Ser. No. 13/919,751 calls the Functional Reactive Fabric. Beginning Topoiesis Infon 2602 is the beginning of the narrative, process or other sequence of interest, with End Topoiesis Infon 2603 the current state 'caused' by the predecessor infons and infon structures.
FIG. 27 shows the same Concept Lattice 2701 as in FIG. 26, but tilted and with some associated Ontology Graphs 2703 connected via Ontology Reference Links 2702. The displayed Ontology Graphs 2703 are notional; a more useful diagram would have many more nodes and threads in the Concept Lattice 2701. Only some representative Ontology Graphs 2703 are shown. The depth of concern in an Ontology Graph 2703 is set by the user or determined by the system based on its understanding of the user's situation. Some elements within the determined or specified scope of the Ontology Graphs 2703 are Unshared Infons 2704 or Shared Infons 2705. Sharing can occur on a massive scale within the Ontology Graphs 2703 of a significant percentage of Topoiesis Infons referenced by the Concept Lattice 2701. Productive visualizations have between 34 and 39% of Topoiesis Infons sharing infons at the third and fourth level of ontological depth.
FIG. 27 illustrates the relationship between Topoiesis Infons and Ontology Infon sharing, but it is also the basis of a user interface elaborated in later figures. As a user interface, it has the ontology space 'above' but can also support, by connectives 'below' (Other View Connectives 2706), one of the other views noted in previous figures.
FIG. 28 illustrates a new user view that can be supported in conjunction with the Concept Lattice 2801. Other View Connectives 2802 are the same as Other View Connectives 2706. Infons and Nested Infons are projected onto a new plane. In FIGS. 26, 27 and 28 the layout of the Concept Lattice has been structured on a grid, using techniques that are common in the art, designed to minimize distance and avoid crossing lines. The Projection 2803 below removes the Connectives 2805 and introduces a new feature. The Governing Influence 2804 is the dominant line of semantic connection among the connected Ontology Graphs. The generation of the Governing Influence Line 2804 is taught below.
FIG. 28 presents the Governing Influence 2804 as a line, but any number of visualizations are possible, conveying densities and flux. This Governing Influence 2804 imparts significant information about the system modeled in 2801, indicating both the flow of governing influence and signal paths.
The second sorted reasoning system taught in U.S. Pat. No. 9,117,167, application Ser. No. 13/919,751 and application Ser. No. 14/093,229 reasons in large measure about the topology of the system, a key feature of which is flows such as the Governing Influence 2804. In other words, the Example Functors 1213 supported in the system, illustrated as Distributed Situation Reasoning Processors 104, Wreathing Engine 204, Situation Reasoner 408 and Ontology Derivation System 501, are themselves categories with internal morphisms and symmetries.
FIG. 29 illustrates an example geometry onto which this functor topology can be mapped when moving through the Topology Abstraction Process 1210 from the Distributed Situation Reasoning Processors 104 to the Distributed Regular Reasoning Processors 106.
The illustration shows one of 14 possible Bravais Lattices Cells 2901 of a structure that is periodic when constructed of many such cells. The Bravais Lattices Cell 2901 consists of a Membrane Surface 2902 that divides space into two equal volumes, Half-Space 1 2903 and Half-Space 2 2904, that are also identical in form. These surfaces are generally called 'periodic sponge surfaces.' Many types exist; a method for discovering them has been developed by Michael Burt and described in "The Periodic Table of the Polyhedral Universe", International Journal of Space Structures 26(2), 75, 2011.
The Bravais Lattices Cell 2901 illustrated in FIG. 29 has a cubic packing, but many symmetries exist. All types of these periodic sponge surfaces can be employed in embodiments of the invention.
FIG. 30 illustrates part of the Periodic Surface 3001 composed from the cells of FIG. 29, with the periodicity more apparent.
The symmetric substrate is a regular branching structure onto which Ontology Graphs can be mapped with no permanent assignment of ontology relation to substrate branch and no exclusivity of ontology relations. Techniques similar to these are commonly practiced in the Formal Concept Analysis community.
The Periodic Surface 3001 is defined by a topology that is shared by the categoric space selected for the domain, as described in FIG. 12. Thus, any represented content in the substrate, when projected on the surface and reduced in dimension, reflects the functors applied in the Functional Reactive Fabric. Techniques similar to these are used in modern quantum logic as it applies to modeling physics. A good reference is Coecke, B. (2012), "The Logic of Quantum Mechanics-Take II", retrieved from http://arxiv.org/pdf/1204.3458v1.
The tension that structures the minimal surface of Periodic Surface 3001 thus produces the Ontology Force Structure that attracts and repels the Topoiesis Infons in a Concept Lattice. Governing Influences 3002, here shown as lines, show concentrations of the forces. Application Ser. No. 14/093,229 teaches a method of specifying the dynamics by which the Distributed Situation Reasoning Processors 104 and Situation Reasoner 408 produce the appropriate Periodic Surface 3001 using Michael Burt's algorithm.
FIG. 31 illustrates the Concept Lattice 2601 of FIG. 26 as a Three Dimensional Concept Lattice 3101 which has been perturbed by the Ontology Force Structure. Coupled Ontology Infons collectively form Governing Influences 3002 on the associated Periodic Surface 3001, attracting and repelling one another in a complex fashion mediated by the Functional Reactive Fabric governing the Ontology Force Structure. The Ontology Reference Links 2702 typically have a simple springiness that pulls the Concept Lattice into its three dimensional shape as a Three Dimensional Concept Lattice.
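A minimal spring-relaxation sketch of this 'simple springiness' follows; the constants, names and toy layout are illustrative assumptions, and the disclosure's Ontology Force Structure is far richer than a Hooke-style spring:

```python
import math

def relax(positions, links, rest=1.0, k=0.1, steps=200):
    """Pull linked nodes toward a rest length so the lattice settles
    into a three dimensional shape (a toy Hooke-style spring model)."""
    pos = {n: list(p) for n, p in positions.items()}
    for _ in range(steps):
        for a, b in links:
            d = [pb - pa for pa, pb in zip(pos[a], pos[b])]
            dist = math.sqrt(sum(c * c for c in d)) or 1e-9
            f = k * (dist - rest) / dist  # positive pulls, negative pushes
            for i in range(3):
                pos[a][i] += f * d[i]
                pos[b][i] -= f * d[i]
    return pos

# Three toy nodes linked in a chain settle toward unit link lengths.
layout = relax(
    {"A": (0.0, 0.0, 0.0), "B": (3.0, 0.0, 0.0), "C": (0.0, 3.0, 0.0)},
    [("A", "B"), ("B", "C")],
)
```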
A skilled user will be able to read the nodes and causal connectives of the Three Dimensional Concept Lattice 3101 as they are modelable in the current art. A novelty of embodiments of the invention is that the nodes are connected by forces that provide significant additional information through the Semantic Distance among nodes in the lattice. By various user interface means, including haptic interfaces, a user can experience the relative forces involved.
By direct manipulation of the nodes, a skilled user can teach the system to adjust its understanding by moving a node to adjust its Semantic Distance. Moving a node also pulls the associated Ontology Graphs, perhaps radically changing their connection, their Force Structure and their associated location on the Sponge Surface. The arrangement of nodes of the Three Dimensional Concept Lattice 3101 may snap at certain thresholds to new configurations.
This is a fundamental user interface of a system, for example a biological systems model or a model of a film or genre narrative. A user can directly edit it using any or all of the views described earlier in FIGS. 7, 9, 11, 13, 16, 18, 19, 20, 21, 22, 23, 24 and 28, as zoomed views, inspectors or linked panes.
FIG. 32 illustrates an enhanced view of FIG. 31, where some of the Governing Influences 3002 from the Ontology Force Structure are imposed on the Three Dimensional Concept Lattice 3201. A line of Governing Influence 3202 is displayed in this example, as well as some indication of cloud density or influence. Color is especially useful in this context, with one color typically reserved for governance.
FIG. 33 revisits the interface convention previously illustrated in FIG. 7 as Typed Link Indicator 705, in FIG. 9 as Typed Link Indicator 903, in FIG. 11 as Typed Link Indicator 1114 and in FIG. 20 as Typed Link Indicator 2011. Using our Governing Influence from the Ontology Graph's Force Structure, mediated by the Periodic Surface, we can now assign a form to instances of Typed Links as taught in U.S. Pat. No. 8,751,918. Such links have an Originating Element 3301 and a Target Element 3302. As described in previous figures, these elements may have outline chunk, nested infon, functor or situation identities. As illustrated in FIG. 11, these elements may originate in different ontological domains.
The Typed Link Indicator 3303 can be a simple line or optionally have additional elements. These include an origin annotation, here illustrated as an Originating Cone 3304, whose character can in part be discerned by visual characteristics. This Originating Cone 3304 collects conveyable information about the relationship denoted by the Typed Link Indicator 3303. Both the Originating Cone 3304 and a similar Termination Annotation 3305 typically trigger inspectors or a similar device to communicate and edit essential properties.
As taught in U.S. Pat. No. 8,751,918, visual characteristics of the Typed Link Indicator 3303 designating the Typed Link can communicate information about its nature. Added is the ability to have a visual grammar computed and assigned by the system that can directly communicate to a skilled user. That user can directly edit these properties of the link, for example by manipulating its shape and calligraphic nature. These interactions can be supplemented by or work in concert with the other affordances described in U.S. Pat. No. 8,751,918.
FIG. 33 shows a specific form of 3303 with two Inflection Points 3306 and 3307 and their respective Governing Influences 3308 and 3309 from the Periodic Surface. These are similar to Governing Influence 3202. An embodiment expresses the forms of Governing Influence 3202 and Governing Influence 3303 in such a way that some central nature of the system is revealed, following a quality coined as 'kutachi', based on the Japanese concept of 'katachi' often applied by scientists in this manner.
FIG. 34 illustrates an immersive version of the user interface of FIG. 32. Embodiments can use a variety of visualization technologies, including virtual and augmented reality. In this instance, the Concept Lattice 3402 is 'held' in the hand or hands by a User 3401, possibly with a haptic device. The Governing Influences 3002, 3202 form a larger structure that can be large enough to enclose the User 3401, here shown as Governing Influences 3403. The idea is that if each line effectively conveys subtle but essential, situated information, then a manipulatable, immersive three dimensional assembly will be more effective.
The tactile interface of the Concept Lattice 3402 may be based on string figures. FIG. 34 shows a standing human, but any posture can be accommodated. Groups can be enclosed. Remote collaboration is possible, using identical copies of the model or parsed, fractional versions tailored to specific purposes. Collaboration can be in real time, or the model can be used as a persistent three dimensional 'notebook.'
An alternative embodiment presents the Concept Lattice 3402 by itself, without the Governing Influences 3403, as a tactile model, perhaps immersive and collaborative. In this regard, the examples shown in FIGS. 26, 27, 28, 31 and 32 are relatively simple to indicate in such a user interface. Practical applications where embodiments of the invention have a unique advantage are more complex.
FIG. 35 illustrates a Concept Lattice representative of a practical use. The Concept Lattice 3501 (which is an example of a functional fabric) is in this case displayed without large nodes. It is a collection of causal links shaped by the Governing Influences. Though the Governing Influences are not explicitly shown, the shape they impose on the lattice provides significant insight.
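Such a lattice of causal links can be represented very simply as typed edges between named processes. The sketch below is an illustrative assumption, a toy reduction of the functional fabric to a handful of invented links, intended only to show the underlying data shape.

```python
# Illustrative sketch (not taken from the specification): a Concept
# Lattice stored as typed causal links, with an adjacency view showing
# which processes each node directly drives. All node and link names
# are invented for the example.
from collections import defaultdict

causal_links = [
    ("trauma_acquired", "fear_memory", "causes"),
    ("fear_memory", "sleep_disruption", "interrupts"),
    ("sleep_disruption", "inflammation", "increases"),
    ("inflammation", "innate_immune_response", "activates"),
]

# Adjacency view over the typed links.
downstream = defaultdict(list)
for src, dst, link_type in causal_links:
    downstream[src].append((dst, link_type))

print(downstream["fear_memory"])  # [('sleep_disruption', 'interrupts')]
```

In a full embodiment each link would also carry its Governing Influences, so that layout and visual grammar are computed from the same structure that holds the causal semantics.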
The use case is a model of interacting biological systems in post-traumatic stress disorder. Well after the original trauma, subjects retain a fear memory, often associated with specific narratives. Stress is induced, and the body responds in many ways as if a low level pathogen were present. Sleep is disturbed, and consequently inflammation and physical/mental stress increase. Many systems are involved: the central nervous system, the innate and adaptive immune systems, and at least two cognitive systems that employ radically different ontologies. The diagram is a baseline among hundreds of human cases and thousands of rodent models.
Reading from the left, Zone 3502 is the acquisition of the mental trauma. The top of the model, until about the center, captures phenomes associated with reflective awareness that the trauma was a discrete, past event. This awareness is subsumed.
In the center, starting with Zone 3502 and continuing to Zone 3506, is a collection of subconscious and passive nervous system processes that are centered in specific regions of the brain and manage fear memory and reparative dreaming.
The bottom collection of processes, starting with Zone 3504 and continuing through the right at Zone 3505, comprises purely biological processes associated with the immune systems, primarily the relatively blunt innate immune system.
Thousands of measurable events are contained in this model, the reduced biological processes being of the kind illustrated in FIG. 21. A detailed understanding of all of these is beyond the expertise of any one researcher. Yet by standing back and simply observing the shape resulting from the Semantic Distance and Governing Influences, one can extract key insights not directly apparent in the data.
Lucid cognition and the feeling of control vanish at a point, Zone 3509, as the memory of the event becomes subconscious fear memory, entangled with and interrupting sleep. Meanwhile, there are two intense periods, Zone 3510 and Zone 3508, where the immune and passive nervous systems are entangled, followed by a puzzling period, Zone 3507, of no interaction.
A wise experimental strategy is to look at the area of Zone 3510 and the specific signals that are exchanged, to interrupt them and observe the result. As it happens, there is enough knowledge of this pathology to experiment using the model itself. If the researcher blocks a single signal path at Zone 3510, the hole at Zone 3507 vanishes, the immune system transfers to an adaptive mode, the reflective capability and control impulse are not stunted, and the fear memory is neutralized. As with any good model, the next step would be to perform bench research to validate and adjust the model.
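Such an in-model experiment, blocking one signal path and testing whether a downstream interaction survives, amounts to removing an edge from the causal graph and re-checking reachability. The sketch below assumes an invented toy graph; the real model holds thousands of events, and the node names here are hypothetical.

```python
# Illustrative "in-model knockout": remove one signal path and test
# whether a downstream process can still be driven. The graph, the
# node names, and the blocked edge are invented for the example.
from collections import defaultdict, deque

edges = {
    ("fear_memory", "stress"),
    ("fear_memory", "sleep_disruption"),
    ("sleep_disruption", "stress"),
    ("stress", "innate_immune"),
    ("innate_immune", "inflammation"),
}

def reachable(edge_set, start, goal):
    """Breadth-first search: can `goal` still be driven from `start`?"""
    adj = defaultdict(list)
    for src, dst in edge_set:
        adj[src].append(dst)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(reachable(edges, "fear_memory", "inflammation"))    # True
blocked = edges - {("stress", "innate_immune")}           # block one signal path
print(reachable(blocked, "fear_memory", "inflammation"))  # False
```

The same pattern scales to the full model: a predicted knockout that closes the hole at Zone 3507 in simulation becomes the candidate for validation at the bench.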
While interacting with the model, the user will have zoomed, examined and manipulated information using multiple affordances. Queries to remote sources will have been automatically made to refine the model. Ongoing new results from the literature and central data stores will have been automatically ingested and made situationally appropriate.