CROSS REFERENCE TO RELATED APPLICATION
This application is a divisional of U.S. patent application Ser. No. 11/786,820 filed Apr. 13, 2007 entitled “Distributed Routing Table Architecture And Design”, the entire specification of which is hereby incorporated by reference.
BACKGROUND
A network of computing devices often includes at least one computing device that functions as a “router,” directing message traffic on the network. Traditionally, routers connect two or more sub-networks such that messages between computing devices on different sub-networks are directed to the appropriate sub-network for delivery to their targeted destination. At the center of a router's functionality is a “routing table” that correlates identifiers of individual computing devices to network paths that can be used to reach that computing device. Routing tables can be implemented in a variety of ways, and are not limited to traditional look-up tables. Indeed, while a traditional look-up table can be suitable for implementing routing tables for networks comprising a limited number of computing devices, routing tables for large networks, such as the ubiquitous Internet, that comprise millions of individual computing devices can instead be implemented in uniquely structured forms designed for highly efficient information retrieval.
The information contained within a routing table can likewise vary depending on the type of routing used. For example, in a simple routing scheme known as “hop-by-hop routing”, the routing table can correlate identifiers of individual computing devices to the address of the next device along the path to that destination; in other words, the next “hop.” Alternatively, more complex routing schemes are based on knowledge of network topology and, consequently, a routing table in such a scheme can correlate identifiers of individual computing devices to one or more paths to that computing device.
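As a concrete illustration of the “next hop” scheme just described, a hop-by-hop routing table can be pictured as a simple mapping from destination identifiers to next-hop addresses. The following sketch is purely illustrative; the class and device names are hypothetical and are not part of the described design.

```python
# Illustrative sketch of a hop-by-hop routing table: each entry maps a
# destination identifier to the address of the next device (the "hop")
# along the path, not to the full path itself.

class HopByHopRoutingTable:
    def __init__(self):
        self._next_hop = {}  # destination identifier -> next-hop address

    def add_route(self, destination, next_hop):
        self._next_hop[destination] = next_hop

    def next_hop_for(self, destination):
        # A real router would typically fall back to a default route;
        # here an unknown destination simply yields None.
        return self._next_hop.get(destination)

table = HopByHopRoutingTable()
table.add_route("device-C", "device-B")  # to reach C, forward to B
print(table.next_hop_for("device-C"))    # device-B
```

A topology-aware scheme, by contrast, would store one or more full paths per destination rather than a single next hop.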
Because a router can comprise information critical to inter-network communication, it can easily become a bottleneck for network communication. For example, if a router becomes clogged or non-responsive, it can slow down, or even interrupt, various network communications. To eliminate such a bottleneck, distributed routing tables (DRTs) can be used, whereby multiple computing devices can each host some or all of a routing table that can be used in a distributed fashion. For example, each one of multiple computing devices can host a portion of a DRT that comprises information regarding a subset of computing devices on the network. Thus, for each message received by a computing device that is not destined for that device, the computing device can reference its portion of the DRT and identify another computing device that may be more likely to be able to ultimately deliver that message to its intended recipient. Unlike a centralized router, which can no longer direct messages to their intended destination when it fails, a DRT can continue to direct messages even if one or more computing devices fail. In such a failure scenario, the remaining, operational, computing devices simply direct messages to other, still operational, computing devices, and thereby ultimately deliver messages to their intended destinations.
SUMMARY
Multiple applications, executing on one or more computing devices, can implement inter-application communication based on a distributed routing table topology. To minimize the opportunity for malicious behavior among these applications, messages can, in one embodiment, be signed by a certificate that is derived from a root certificate. Thus, received messages can first be verified to have been properly signed by a certificate, and, subsequently, the certificate itself can be verified to ensure that it is properly derived from one or more pre-selected root certificates. Such verifications can be performed by a modularized security module that, due to its modularized design, can be efficiently substituted by another security module that can be based on alternative security mechanisms.
Such a modularized approach can be applied to additional elements used in the operation of the DRT, providing greater flexibility to those applications implementing the DRT network. For example, in one embodiment, a modularized transport module can be used to enable the applications to communicate using any one of a number of unique network communication protocols. Thus, one transport module could implement communications using the ubiquitous Transmission Control Protocol (TCP), while another transport module could implement communications using some other protocol, such as the User Datagram Protocol (UDP). Similarly, a modularized bootstrap module can be used to enable applications to form and join DRT networks based on a variety of peer-to-peer protocols or other name resolution protocols. For example, one bootstrap module could enable the formation and joining of DRT networks based on the Peer Name Resolution Protocol (PNRP), while another bootstrap module could work with the Domain Name Service (DNS).
Among the modules used in the operation of the DRT, one embodiment contemplates the existence of a routing table management module that can maintain a routing table based on network locality. More specifically, the entries in each individual routing table of the DRT identify computing devices that are “close,” in a network topology sense, to the computing device on which the routing table management module executes. Such “close” entries can be maintained by comparing, for example, the round trip time of communications with a device about to be added to the routing table with the round trip time of devices already in the table, and adding the new device to the table only if the round trip time of communications to the new device is shorter. By providing a routing table that references local computing devices, the overall efficiency of the DRT network can be increased.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Additional features and advantages will be made apparent from the following detailed description that proceeds with reference to the accompanying drawings.
DESCRIPTION OF THE DRAWINGS
The following detailed description may be best understood when taken in conjunction with the accompanying drawings, of which:
FIG. 1 is a network diagram of an exemplary DRT network;
FIG. 2 is a block diagram of an exemplary computing device;
FIG. 3 is a block diagram of an exemplary DRT mechanism that can be invoked by an application;
FIG. 4 is a flow diagram of exemplary steps to instantiate a DRT mechanism;
FIG. 5 is a flow diagram of exemplary steps to join a DRT network;
FIG. 6 is a communicational diagram of an exemplary series of communications enabling the registering of a computing device with a DRT network; and
FIG. 7 is a communicational diagram of an exemplary series of communications enabling the searching for a computing device on a DRT network.
DETAILED DESCRIPTION
The following description relates to mechanisms for implementing and using a network of computing devices relying on a distributed routing table (DRT). In particular, each computing device can host one or more application programs that can act as endpoints of the DRT-based network. The application programs can instantiate processes that can maintain each individual portion of the DRT, and that can further provide communicational and security infrastructure, enabling the application programs to create and use a network relying on the DRT.
Various tasks relevant to the implementation of the DRT can be componentized such that interchangeable components can provide a broad range of functionality. In one embodiment, a security module can be selected by an application program from among one or more security modules, where each security module can verify incoming messages according to the particular security strategy being implemented by that module. In another embodiment, a transport module can be selected by an application program from among one or more transport modules, where each transport module can provide communications using a different network transport protocol. Likewise, further embodiments contemplate a bootstrap module or logging module, each of which can be selected by an application program from among one or more modules providing the same basic functionality, but each implementing mechanisms specific to that module.
A still further embodiment contemplates a routing table management module that can manage the entries for each portion of the DRT to ensure that the entries in each routing table identify computing devices that are proximate, in the network topology, to the computing device. In one embodiment, the closeness of an entry, or potential entry, in the routing table can be determined by the round trip time for communications between the computing device identified by the entry and the computing device on which the routing table management module is executing. Thus, if the routing table already comprises a sufficient number of entries, then, in one embodiment, a potential entry can be added only if its round trip time is less than the round trip time of at least one entry currently in the routing table. In an alternative embodiment, entries can be freely added to the routing table, but the routing table management module can, periodically, or as otherwise requested, test the round trip time of one or more entries in the routing table, and can discard those entries whose round trip time is greater than a determined threshold, or retain only a threshold number of entries.
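Both maintenance strategies just described — admission only when a candidate beats an existing entry, and free admission followed by pruning — can be sketched as follows. This is a minimal illustration under assumed names and units (round-trip times in milliseconds); the actual routing table management module's interface is not reproduced here.

```python
# Sketch of locality-based routing table maintenance: a candidate entry
# is admitted only if its measured round-trip time (RTT) beats at least
# one existing entry once the table is full, and the table can later be
# pruned back to the closest N entries. Thresholds are illustrative.

class LocalityRoutingTable:
    def __init__(self, max_entries):
        self.max_entries = max_entries
        self.entries = {}  # key -> measured round-trip time (ms)

    def try_add(self, key, rtt_ms):
        """Admit the candidate, displacing the farthest entry if needed."""
        if len(self.entries) < self.max_entries:
            self.entries[key] = rtt_ms
            return True
        worst_key = max(self.entries, key=self.entries.get)
        if rtt_ms < self.entries[worst_key]:
            del self.entries[worst_key]
            self.entries[key] = rtt_ms
            return True
        return False  # candidate is no closer than any current entry

    def prune(self, keep):
        # Retain only the `keep` entries with the shortest round trips,
        # as in the alternative periodic-pruning embodiment.
        closest = sorted(self.entries.items(), key=lambda kv: kv[1])[:keep]
        self.entries = dict(closest)

table = LocalityRoutingTable(max_entries=2)
table.try_add(3000, 12.0)
table.try_add(5000, 40.0)
added = table.try_add(4000, 25.0)  # displaces key 5000 (40.0 ms away)
```

In a real implementation the round-trip times would come from probing live connections rather than being supplied by the caller.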
In another embodiment, a security module can apply security based on one or more predefined root certificates. Specifically, upon receipt of messages related to the management of the DRT-based network, such a security module can verify both that the received message was properly signed by a certificate provided with the message, and that the provided certificate comprises a certificate chain that leads back to one of the predefined root certificates. The root certificates can be provided by either a local certificate authority (CA), or by an external CA, such as any of the commercial CAs that sell and provide services over the Internet.
The techniques described herein focus on, but are not limited to, the mechanisms used to create, join and manage a network of multiple application programs executing on one or more physical computing devices, with each computing device physically networked to the other computing devices through one or more sub-networks. To distinguish between the physical network connections between computing devices, and the series of inter-communicating application programs, the latter will be referred to as a “mesh.” Thus, a mesh that uses a distributed routing table (DRT) to route messages within the mesh will be referred to as a “DRT mesh.”
Turning to FIG. 1, an exemplary DRT mesh 99 is illustrated comprising computing devices 10, 20, 30, 40 and 50 networked together through network 90. In the exemplary DRT mesh 99 shown, each of the participating computing devices 10, 20, 30, 40 and 50 can comprise a key and a routing table. Each key can act as a unique identifier of either the computing device itself, or of a particular process or application executing on the computing device. For example, the ubiquitous Internet Protocol (IP) address is a type of key used to identify endpoints in networks conforming to the Internet Protocol. In the illustrated exemplary DRT mesh 99, computing devices 10, 20, 30, 40 and 50 have been assigned keys 1000, 2000, 3000, 4000 and 5000, respectively.
Each of the routing tables 11, 21, 31, 41 and 51, of the computing devices 10, 20, 30, 40 and 50, respectively, comprises only a portion of the routing information available. Instead, it is the combination of routing tables 11, 21, 31, 41 and 51 that acts as a single, distributed, routing table for the DRT mesh 99. For example, to send a communication to computing device 40, the computing device 20 can first determine that the key it is searching for is key 4000. Once it has identified the key 4000, the computing device 20 can reference its routing table 21 to search for the closest entry to the key 4000. In the routing table 21, the computing device 20 can identify either key 3000 or key 5000 as being closest. Selecting key 3000, for example, can cause the computing device 20 to initiate communications with computing device 30, identified by key 3000, and request that the computing device 30 forward communications from computing device 20 to a computing device having key 4000. Computing device 30 can reference its own routing table 31, and can identify the key 4000. Consequently, computing device 30 can forward the communications from the computing device 20 to the computing device 40, having key 4000. In such a manner, communications can be routed among any of the computing devices participating in the DRT mesh 99 of FIG. 1, even though each computing device comprises only a portion of an overall routing table.
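The forwarding walk just traced can be sketched as a repeated closest-key lookup. The routing tables below mirror the example of FIG. 1 (keys 1000 through 5000); the numeric-distance metric is an illustrative simplification of whatever closeness measure an actual DRT implementation would use.

```python
# Sketch of key-based forwarding across a distributed routing table:
# each node knows only part of the overall table, and a message is
# repeatedly handed to the known node whose key is closest to the target.

ROUTING_TABLES = {
    2000: [1000, 3000, 5000],  # routing table 21 of computing device 20
    3000: [2000, 4000],        # routing table 31 of computing device 30
    4000: [3000, 5000],        # routing table 41 of computing device 40
}

def route(start_key, target_key):
    """Return the sequence of keys a message visits to reach target_key."""
    path = [start_key]
    current = start_key
    while current != target_key:
        known = ROUTING_TABLES.get(current, [])
        if not known:
            raise LookupError("no route from key %d" % current)
        # Forward to the known node whose key is closest to the target
        # (ties resolve to the earlier table entry).
        current = min(known, key=lambda k: abs(k - target_key))
        path.append(current)
    return path

print(route(2000, 4000))  # [2000, 3000, 4000]
```

Note that no single table in `ROUTING_TABLES` lists every key, yet the walk still reaches key 4000, matching the behavior described for the DRT mesh 99.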
Although not required, the descriptions below will be in the general context of computer-executable instructions, such as program modules, being executed by one or more computing devices. More specifically, the descriptions will reference acts and symbolic representations of operations that are performed by one or more computing devices or peripherals, unless indicated otherwise. As such, it will be understood that such acts and operations, which are at times referred to as being computer-executed, include the manipulation by a processing unit of electrical signals representing data in a structured form. This manipulation transforms the data or maintains it at locations in memory, which reconfigures or otherwise alters the operation of the computing device or peripherals in a manner well understood by those skilled in the art. The data structures where data is maintained are physical locations that have particular properties defined by the format of the data.
Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the computing devices need not be limited to conventional personal computers, and include other computing configurations, including hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like. Similarly, the computing devices need not be limited to stand-alone computing devices, as the mechanisms may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
With reference to FIG. 2, an exemplary computing device 100 is illustrated. The computing device 100 can represent any of the computing devices 10, 20, 30, 40 and 50 of FIG. 1. The exemplary computing device 100 can include, but is not limited to, one or more central processing units (CPUs) 120, a system memory 130, and a system bus 121 that couples various system components including the system memory to the processing unit 120. The system bus 121 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.
The computing device 100 also typically includes computer readable media, which can include any available media that can be accessed by computing device 100 and includes both volatile and nonvolatile media and removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computing device 100. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.
The system memory 130 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 131 and random access memory (RAM) 132. A basic input/output system 133 (BIOS), containing the basic routines that help to transfer information between elements within computing device 100, such as during start-up, is typically stored in ROM 131. RAM 132 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 120. By way of example, and not limitation, FIG. 2 illustrates an operating system 134, other program modules 135, and program data 136.
The computing device 100 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only, FIG. 2 illustrates a hard disk drive 141 that reads from or writes to non-removable, nonvolatile magnetic media. Other removable/non-removable, volatile/nonvolatile computer storage media that can be used with the exemplary computing device include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like. The hard disk drive 141 is typically connected to the system bus 121 through a non-removable memory interface such as interface 140.
The drives and their associated computer storage media discussed above and illustrated in FIG. 2 provide storage of computer readable instructions, data structures, program modules and other data for the computing device 100. In FIG. 2, for example, hard disk drive 141 is illustrated as storing an operating system 144, other program modules 145, and program data 146. Note that these components can either be the same as or different from operating system 134, other program modules 135 and program data 136. Operating system 144, other program modules 145 and program data 146 are given different numbers here to illustrate that, at a minimum, they are different copies.
Of relevance to the descriptions below, the computing device 100 may operate in a networked environment using logical connections to one or more remote computers. For simplicity of illustration, the computing device 100 is shown in FIG. 2 to be connected to a network 90 that is not limited to any particular network or networking protocols. The logical connection depicted in FIG. 2 is a general network connection 171 that can be a local area network (LAN), a wide area network (WAN) or other network. The computing device 100 is connected to the general network connection 171 through a network interface or adapter 170 which is, in turn, connected to the system bus 121, though it will be appreciated that the network connections shown are exemplary and other means of establishing a communications link may be used.
In a networked environment, program modules depicted relative to the computing device 100, or portions or peripherals thereof, may be stored in the memory of one or more other computing devices that are communicatively coupled to the computing device 100 through the general network connection 171. Likewise, the operating system 144 and program modules 145 can communicate with other operating systems and program modules executing on other computing devices to which the computing device 100 may be communicatively coupled via the general network connection 171 and the network 90. In one embodiment, the program modules 145 comprise at least one application that communicates with other applications, executing on other computing devices, via a DRT mesh, such as the DRT mesh 99 of FIG. 1, implemented across network 90.
Turning to FIG. 3, an application process 200, such as would be created by one or more of the program modules 145, is shown comprising elements for communicating with other application processes through a DRT mesh, such as DRT mesh 99. In particular, the application process 200 of FIG. 3 comprises application code 210 and two node instances 220 and 230. The illustration of two node instances is meant to be exemplary only, as any number of one or more node instances can be instantiated as part of a single application process, such as the application process 200.
In one embodiment, each node instance can comprise multiple components or modules for implementing particularized tasks. Thus, as shown, the node instance 220 comprises a node 221, a routing table management module 222, a transport module 223, a bootstrap module 224, a security module 225 and a logging module 226. Node instance 230 can comprise similar modules, though only node 231 is shown in FIG. 3. Initially, as illustrated by the superior arrows of FIG. 3, the transport module 223, bootstrap module 224, security module 225 and logging module 226 can be instantiated into the application process 200 by the application code 210. Subsequently, the node 221 can be instantiated into the application process 200 by the application code 210, and can be provided pointers to the already instantiated instances of the transport module 223, bootstrap module 224, security module 225 and logging module 226, as indicated by the inferior arrows of FIG. 3. The node 221 can instantiate the routing table management module 222, and can create the node instance 220, which comprises the instantiated instances of the node 221, routing table management module 222, transport module 223, bootstrap module 224, security module 225 and logging module 226.
Each node instance 220 and 230 can represent a uniquely identifiable endpoint within a DRT mesh, such as the DRT mesh 99 of FIG. 1. Thus, each node instance 220 and 230 can be assigned a key identifier for the node instance, and each node instance can enable the application code to communicate with one or more other node instances in an independent manner. As can be seen, a single computing device and, indeed, a single application program, can host multiple node instances. Thus, while FIG. 1, and subsequently FIGS. 6 and 7, each illustrate a computing device as a single node instance, nothing in the descriptions should be read to require such a one-to-one correspondence between node instances and computing devices, or node instances and application programs executing on those computing devices.
Turning to FIG. 4, a flowchart 300 is shown illustrating an exemplary series of steps that can be initiated by an application program to establish a node instance, such as node instance 220 or 230 of FIG. 3. While the steps of flowchart 300 are illustrated in a particular order, the exact relationship between many of the series of steps is irrelevant and, consequently, there exist a multitude of orderings of the steps of flowchart 300 that are equivalent to the ordering shown. Generally, however, some initiating step will occur to cause the application program to undertake the illustrated steps. Thus, in FIG. 4, one such initiating step is the decision, by the application program, to join a DRT mesh at step 310. Subsequent to such an initiating step, a multitude of instantiations can occur, though, as indicated, the precise order of such instantiations is not limited.
In the example illustrated by flowchart 300, at step 320, the application code 210 can instantiate the transport module 223. In one embodiment, the transport module 223 that is instantiated at step 320 is specifically designed to operate using a particular transport, and nothing further need be performed by the application code 210 to enable the transport module 223 to be utilized by the node 221. In an alternative embodiment, however, the transport module 223 can be compatible with multiple different transports. In such a case, the application code 210 can, after instantiating the transport module 223 at step 320, further inform the transport module of the underlying protocol to use at step 325.
Subsequently, at step 330, the application code 210 can instantiate the security module 225, though, as indicated previously, the instantiation of the security module can equivalently occur prior to the instantiation of the transport module 223. Once the security module is instantiated, the application code 210 can provide appropriate security credentials to the security module at step 335. Such security credentials can include private and public key pairs used by the application code 210, and a public key certificate. In one embodiment, the public key certificate provided at step 335 can be derived from a root certificate common to all of the application programs communicating through the DRT mesh.
After instantiating the security module 225 and providing it with relevant information at steps 330 and 335, the application code 210 can, at step 340, instantiate the bootstrap module 224. Alternatively, as indicated previously, the instantiation of the bootstrap module 224 can also, equivalently, occur prior to either or both of the instantiation of the transport module 223 and the security module 225. In one embodiment, the bootstrap module 224 is specifically designed to operate using a particular network from which to initiate communication with a DRT mesh, and nothing further need be performed by the application code 210 to enable the bootstrap module 224 to be utilized by the node 221. In an alternative embodiment, the bootstrap module 224 can be more flexible, in which case the application code 210 can, after instantiating the bootstrap module 224 at step 340, further inform the bootstrap module, at step 345, of a specific network to use.
Before instantiating the node at step 360, the application code 210 can instantiate the logging module 226 at step 350. As with the instantiations of steps 320, 330 and 340, the instantiation of the logging module 226 need not occur in any particular order with respect to the instantiation of the transport module 223, security module 225, or the bootstrap module 224 and could, equivalently, occur before any or all of such instantiations. Subsequently, at step 355, the application code 210 can inform the logging module 226 of the name and location of the log store to be used. Alternatively, each logging module, such as the logging module 226, can be designed to use a particular log store, in which case step 355 would not be performed.
After the transport module 223, bootstrap module 224, security module 225 and the logging module 226 have been instantiated by the application code 210, the application code can proceed to instantiate the node 221 and, thus, create a DRT instance, such as the DRT instance 220. Thus, as shown in flowchart 300, at step 360, the application code 210 can instantiate the node 221. The application code 210 can further pass pointers to the instantiated instances of the transport module 223, bootstrap module 224, security module 225 and the logging module 226 to the node 221, thereby enabling the node 221 to communicate with and make use of the transport, bootstrap, security and logging modules. The node 221 can then create the DRT instance 220, and can further instantiate the routing table management module 222.
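The instantiation sequence of flowchart 300 can be sketched, in greatly simplified form, as follows. The class names, constructor parameters and default values here are hypothetical stand-ins; the actual module interfaces are not reproduced by this illustration.

```python
# Sketch of the FIG. 4 instantiation order: the application creates the
# pluggable modules first (steps 320-355), then constructs the node and
# hands it references to those modules (step 360). The node keeps
# pointers to the modules and maintains its own routing table.

class TransportModule:
    def __init__(self, protocol="TCP"):      # e.g. TCP or UDP
        self.protocol = protocol

class SecurityModule:
    def __init__(self, credentials=None):    # key pair + certificate
        self.credentials = credentials

class BootstrapModule:
    def __init__(self, network="PNRP"):      # e.g. PNRP or DNS
        self.network = network

class LoggingModule:
    def __init__(self, log_store="drt.log"):
        self.log_store = log_store

class Node:
    def __init__(self, transport, bootstrap, security, logging):
        # Pointers to the already-instantiated modules.
        self.transport = transport
        self.bootstrap = bootstrap
        self.security = security
        self.logging = logging
        self.routing_table = {}  # this node's portion of the DRT

# Application code: modules first, then the node.
transport = TransportModule(protocol="TCP")
security = SecurityModule(credentials="certificate derived from root")
bootstrap = BootstrapModule(network="PNRP")
logger = LoggingModule(log_store="drt.log")
node = Node(transport, bootstrap, security, logger)
```

Because each module is passed in by reference, swapping, say, a TCP transport for a UDP one requires no change to the node itself — which is the point of the modularized design described above.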
Once the DRT instance 220 has been created, the application code 210 can, at step 365, instruct the node 221 to join a DRT mesh, such as the DRT mesh 99 of FIG. 1. At step 370, the initial processing by the application code 210 in joining a DRT mesh can end.
To join a DRT mesh, such as in response to step 365, the node 221 can utilize the instantiated routing table management module 222, transport module 223, bootstrap module 224, security module 225 and logging module 226. Turning to FIG. 5, a flowchart 400 illustrates an exemplary series of steps performed by the node 221 to join a DRT mesh. Initially, as shown, the node 221 can, at step 410, receive a request from application code 210 to join a DRT mesh. Subsequently, the node 221 can request that the instantiated bootstrap module 224 provide a listing of the other nodes, in the DRT mesh that the application is attempting to join, of which the bootstrap module is aware. If the bootstrap module 224 does not return any nodes, as determined by step 430, then, at step 440, the node 221 can remain in an offline state.
In one embodiment, if the bootstrap module 224 does not return any nodes, the application code 210 can request the bootstrap module to publish its availability, or the bootstrap module can automatically publish its availability. More specifically, to enable at least two nodes to initially communicate to form a DRT mesh in the first place, a prearranged rendezvous can be used by the bootstrap module 224. Such a rendezvous can take the form of a publication or registration, made within the context of another network, announcing the availability of the node instance 220 and its willingness to form a DRT mesh. Specifically, the bootstrap module 224 can use the particular peer-to-peer protocol, or other name resolution protocol, that was used by, for example, the computing devices 10, 20, 30, 40 and 50 to form the network 90. In one embodiment, specific individual bootstrap modules can be tailored to a particular protocol, such as the Peer Name Resolution Protocol (PNRP), thereby enabling an application to form a DRT mesh over networks depending on various name resolution protocols by simply instantiating an appropriate bootstrap module.
Once the bootstrap module 224 has used the name resolution protocol of network 90 to announce the presence of node instance 220, it can wait for another node to become active on the network 90. Such a subsequent node can then check, via the prearranged rendezvous point within network 90, whether any other node, such as the node instance 220, is available to create a DRT mesh. The subsequent node can, thereby, learn of the existence of node instance 220, and the two nodes can proceed to communicate to form a DRT mesh.
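The rendezvous behavior just described can be sketched with an in-memory registry standing in for a real name-resolution protocol such as PNRP or DNS. The class, name, and address values below are hypothetical; a real bootstrap module would publish and resolve over the network rather than in process memory.

```python
# Sketch of the prearranged-rendezvous bootstrap: if resolving the mesh
# name returns no nodes, the first node publishes its own availability;
# a later node resolving the same name then discovers it.

RENDEZVOUS = {}  # prearranged mesh name -> list of published addresses

class BootstrapModule:
    def __init__(self, mesh_name):
        self.mesh_name = mesh_name

    def resolve(self):
        """Return the nodes currently published under the mesh name."""
        return list(RENDEZVOUS.get(self.mesh_name, []))

    def publish(self, address):
        """Announce this node's availability at the rendezvous point."""
        RENDEZVOUS.setdefault(self.mesh_name, []).append(address)

first = BootstrapModule("example-mesh")
if not first.resolve():        # no nodes yet: publish and wait (step 440)
    first.publish("node-A:3540")

second = BootstrapModule("example-mesh")
found = second.resolve()       # the subsequent node discovers node-A
```

Swapping the registry for PNRP- or DNS-backed lookups, without touching the callers, mirrors how interchangeable bootstrap modules are meant to work.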
If, however, at step 430, it is determined that at least one node was returned by the bootstrap module 224, then the node 221 can proceed to join or establish a DRT mesh with that node. As indicated, if a single node had previously published, in a prearranged way on network 90, its availability to form a DRT mesh, such a node could be returned by the bootstrap module 224 at step 430. Similarly, if multiple nodes have already formed a DRT mesh, one or more of such nodes can also be returned by the bootstrap module 224 at step 430. In either case, at step 450, the node 221 can request the security module 225 to validate one or more of the nodes returned by the bootstrap module 224.
To enable the node instance 220 to more quickly develop a useful routing table, the nodes returned by the bootstrap module 224 can provide, to the node instance 220, information regarding other nodes of which those nodes are aware. Such information can also be validated by the security module 225.
In one embodiment, the security module 225 can validate nodes identified and provided by the bootstrap module 224 by verifying both that the information received from the node was properly signed by the certificate that accompanied it, and that the certificate is properly derived from the root certificate. For example, the information received from a node can identify the node with a unique key number. To prevent a malicious entity from pretending to be that node by providing information that uses the same key number, a certificate can be used to sign the information received, including the key number. Using known cryptographic techniques, the security module 225 can verify that the information received, including the key number, was properly signed by the certificate provided. For example, the security module 225 can use the public key provided in the certificate to verify that the information provided, and the key number, were signed by the corresponding private key. Additionally, in one embodiment, the key number can be derived from the public key of the publisher, thereby enabling the security module to further verify the key number. After using the certificate to verify the contents of the message, the security module 225 can further verify that the certificate is either a pre-approved root certificate, or is part of a certificate chain that includes a pre-approved root certificate. By using a root certificate, as opposed to self-signed certificates, the DRT mesh can be made more secure and the security can have greater flexibility.
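The two verifications just described — signature over the message, then a chain walk back to a pre-approved root — can be sketched as follows. Real certificate validation uses public-key signatures (e.g. X.509 with RSA or ECDSA); here a keyed hash stands in so the control flow stays self-contained, and all names and keys are illustrative.

```python
# Simplified sketch of the security module's two checks: (1) the message
# must be properly signed by the accompanying certificate, and (2) the
# certificate must chain back to a pre-approved root certificate.

import hashlib
import hmac

APPROVED_ROOTS = {"root-ca"}  # pre-selected root certificate names

def sign(message, key):
    # Stand-in for a public-key signature.
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify_message(message, signature, cert):
    # Check 1: the signature over the message matches the certificate.
    if not hmac.compare_digest(sign(message, cert["key"]), signature):
        return False
    # Check 2: walk the certificate chain back to a pre-approved root.
    while cert is not None:
        if cert["name"] in APPROVED_ROOTS:
            return True
        cert = cert["issuer"]
    return False  # chain ended without reaching an approved root

root = {"name": "root-ca", "key": b"root-secret", "issuer": None}
node_cert = {"name": "node-4000", "key": b"node-secret", "issuer": root}

msg = b"key=4000"          # node information, including its key number
sig = sign(msg, node_cert["key"])
ok = verify_message(msg, sig, node_cert)  # True: signed and chained
```

A certificate that verifies the signature but does not chain to an approved root — for instance, a self-signed certificate — is rejected by the second check, which is exactly the property the described embodiment relies on.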
Once the security module 225 has validated, at step 450, the nodes returned by the bootstrap module 224, the node 221 can, at step 460, request that the routing table management module 222 add the verified nodes, and the relevant routing, to the routing table maintained by node instance 220. In one embodiment, the routing table management module 222 can decide to add one or more of these nodes only if those nodes are closer in the network topology to the computing device hosting node instance 220 than one or more of the nodes already in the routing table. In an alternative embodiment, the routing table management module 222 can initially add some or all of the nodes returned by the bootstrap module 224, and verified by the security module 225, to the routing table of the node instance 220. Subsequently, the routing table management module 222 can determine if the number of entries in the routing table of the node instance 220 exceeds a predetermined threshold. If such a threshold is exceeded, entries can be removed based on the distance, in the network topology, between the node referenced by the entry in the routing table and the computing device hosting node instance 220. More specifically, those entries having a greater distance can be removed in favor of entries that represent closer nodes. Such a paring down of the routing table by the routing table management module can occur on a periodic basis or when requested by the application code 210 or the node 221, or otherwise triggered, such as by a process that monitors the number of entries in the routing table.
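The threshold-based paring down of the alternative embodiment can be sketched as follows. The function name, the dictionary structure, and the use of a single scalar distance per entry are illustrative assumptions, not the patent's API.

```python
def prune_routing_table(entries, max_entries):
    """Keep only the topologically closest entries once the table exceeds
    its threshold. `entries` maps node key -> measured distance (for
    example, the round trip time to that node)."""
    if len(entries) <= max_entries:
        # Below the predetermined threshold: nothing is removed.
        return dict(entries)
    # Sort by distance and retain the `max_entries` closest nodes, so the
    # entries having a greater distance are removed in favor of closer ones.
    closest = sorted(entries.items(), key=lambda kv: kv[1])[:max_entries]
    return dict(closest)
```

Such a function could be invoked periodically, or by a monitoring process, as the paragraph above describes.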
Within a network topology, the distance between nodes, or the distance between the computing devices hosting those nodes, can be defined, not by the physical distance between the nodes, but by the distance of the communicational pathway between the nodes. Indeed, a network topology often does not conform to physical topologies. For example, a computing device connected to a corporate network may be able to more efficiently communicate with other computing devices connected to the same corporate network, even if those computing devices are in other cities, than such a computing device would be able to communicate with another computing device connected to a different corporation's network, even if this other device was located only a few blocks away. The communicational pathway between two computing devices on the same network subsection can be short and straightforward, even if the two computing devices are physically distant from one another, while the communicational pathway between two computing devices on different network subsections can require an extensive amount of inter-network transfers and can be, therefore, fairly long and complex.
To determine how close a node is, the routing table management module 222 can, in one embodiment, measure the round trip time of a message sent from the node instance 220 to the node whose distance is being measured. Nodes having the shortest round trip time are deemed to be "closer," and are, therefore, retained in the routing table. In another embodiment, the routing table management module 222 can determine which node is "closer" based on a comparison of characteristics of an Internet Protocol (IP) address of the computing device hosting the node whose distance is being evaluated with characteristics of the IP address of the computing device hosting the node instance 220. In a still further embodiment, a combination of the two previously described embodiments can be used. For example, the routing table management module 222 can initially measure the round trip time, but if two or more nodes are measured to have similar round trip times and are, therefore, deemed to be approximately of equal network topological distance from the node instance 220, the node having a more similar IP address can be selected for inclusion in the routing table.
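The combined embodiment can be sketched in Python as a two-level comparison: round trip time first, with IP-address similarity breaking ties. Everything here is an illustrative assumption, in particular the bucketing of round trip times into 10 ms groups to decide when two measurements count as "similar," and the use of the length of the shared IPv4 prefix as the similarity measure.

```python
import ipaddress


def common_prefix_bits(a: str, b: str) -> int:
    # Number of leading bits shared by two IPv4 addresses; a longer shared
    # prefix suggests the hosts sit on nearby network subsections.
    diff = int(ipaddress.IPv4Address(a)) ^ int(ipaddress.IPv4Address(b))
    return 32 - diff.bit_length()


def closest_node(candidates, local_ip, rtt_bucket_ms=10):
    """Pick the 'closest' candidate, primarily by round trip time, with
    IP-prefix similarity breaking ties between nodes whose measured round
    trip times fall into the same bucket. `candidates` is a list of
    (key, rtt_ms, ip) tuples; the structure is illustrative."""
    return min(candidates,
               key=lambda c: (c[1] // rtt_bucket_ms,
                              -common_prefix_bits(c[2], local_ip)))
```

With two candidates at comparable round trip times, the one on the local machine's network subsection would be selected for inclusion in the routing table.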
When initially joining a DRT mesh, however, a more likely scenario is that the routing table has too few entries. If the routing table management module 222 determines that the routing table requires additional entries, it can request that the node 221 perform a search in the DRT mesh to obtain additional node information. As shown in the flowchart 400 of FIG. 5, at step 470, the node 221 can receive such a request to obtain additional node information. If no such request is received at step 470, the processing performed by the node 221 to join the DRT mesh can end at step 490. However, if a request for additional node information is received at step 470, the node 221 can perform a search of the DRT mesh at step 480.
One mechanism by which a search of a DRT mesh can be performed is illustrated by communicational flow 500 of FIG. 6. Initially a node, such as the node identified by key 2000 and hosted by computing device 20, can identify a key number associated with the node being searched for. The searching node need not be certain of the existence of the searched-for node, since a search for any key will return those nodes that are identified by keys approximately equal to the searched-for key. In the example illustrated by the communicational flow 500, the node identified by key 2000 can search for a node identified by the key 4000. Initially, as illustrated by communication 510, the node, such as node 221, can request the routing table management module 222 to look up the searched-for key 4000 in the local routing table 21. In response, the routing table management module 222 can return to the node 221, either the routing to the searched-for key, or the routing to the key that is closest to the searched-for key. In the particular example illustrated in FIG. 6, the searched-for key 4000 is not present in routing table 21. Thus, the routing table management module 222 can provide, to the node 221, the routing to the node identified by the key 3000, which is the closest key to the searched-for key 4000 that is present in the routing table 21.
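The local lookup step just described — return the routing for the searched-for key if present, otherwise the routing for the numerically closest key — can be sketched as follows. The function name and the dictionary-based routing table are illustrative assumptions.

```python
def lookup_closest(routing_table, target_key):
    """Return the entry for `target_key` if present; otherwise return the
    entry whose key is numerically closest to it, as the routing table
    management module does in the search described above. `routing_table`
    maps key -> routing information (illustrative structure)."""
    if target_key in routing_table:
        return target_key, routing_table[target_key]
    nearest = min(routing_table, key=lambda k: abs(k - target_key))
    return nearest, routing_table[nearest]
```

Mirroring the FIG. 6 example, a lookup of key 4000 against a table containing keys 1000 and 3000 would return the routing to the node identified by key 3000.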
With the routing to the node identified by the key 3000, the node 221 can establish communication with the node identified by the key 3000 and request that that node look up the node identified by the searched-for key 4000 in its routing table. Thus, as shown in FIG. 6, such a request can be made in the form of communication 520 from the computing device 20, which can be considered to host the node instance 220 of FIG. 3, to the computing device 30, which, in the exemplary DRT mesh 99 of FIG. 6, hosts the node identified by the key 3000. The routing table 31 of the node identified by the key 3000 does, in the illustrated embodiment of FIG. 6, comprise an entry for the searched-for node identified by the key 4000. Consequently, after identifying the entry 525 in the routing table 31, the node identified by the key 3000 can respond, via communication 530, to the node 221, informing it of the routing to the node identified by the key 4000.
In one embodiment, prior to determining that the searched-for node identified by the key 4000 has been located, the node 221 can send a communication 540 to the node identified by the key 4000 requesting that that node likewise search its routing table for the node identified by the key 4000. Such a message can be known as an "inquire" message, and is illustrated via communication 540 in FIG. 6 from the computing device 20, hosting node 221, to the computing device 40 which, in the example illustrated in FIG. 6, hosts the node identified by the key 4000. To ensure that the message responding to the inquire message 540 was not retrieved and replayed from some other node, the inquire message can include a token "nonce" that can be randomly generated. The subsequent response message can then include this token and the signature for that message can be computed with reference to this token.
Upon receipt of communication 540, the node identified by the key 4000 can respond that it is, indeed, the node that node 221 is searching for. Thus, as illustrated in FIG. 6, communication 545 from the computing device 40 hosting the node identified by the key 4000, to the computing device 20 hosting node instance 220, can indicate that the responding node is identified with key 4000. Such a message can be known as an "authority" message and can be digitally signed and transmitted with a certificate comprising a public key. As indicated, the authority message can also include the random nonce from the inquire message 540 and the signature in the certificate can be computed with reference to the nonce. In addition, the node 221 can, upon receipt of communication 545, request that the security module 225 verify that the authority message is properly signed and that the certificate either is a proper root certificate or is part of a certificate chain that includes a proper root certificate.
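The inquire/authority exchange with a replay-protecting nonce can be sketched in Python. The message dictionaries and function names are hypothetical, and an HMAC over a shared secret again stands in for the certificate-based signature described above; the essential point being illustrated is that the signature covers the nonce, so a replayed authority message from an earlier exchange fails verification.

```python
import hashlib
import hmac
import os


def make_inquire(target_key: int) -> dict:
    # The inquire message carries a randomly generated nonce so that a
    # replayed response from some other exchange will not verify.
    return {"type": "inquire", "key": target_key, "nonce": os.urandom(16)}


def make_authority(inquire: dict, node_key: int, signing_key: bytes) -> dict:
    # The authority message echoes the nonce and computes its signature
    # with reference to it (HMAC stands in for the certificate signature).
    payload = str(node_key).encode() + inquire["nonce"]
    sig = hmac.new(signing_key, payload, hashlib.sha256).digest()
    return {"type": "authority", "key": node_key,
            "nonce": inquire["nonce"], "signature": sig}


def verify_authority(inquire: dict, authority: dict, signing_key: bytes) -> bool:
    # Reject any response that does not echo our nonce, or whose signature
    # does not cover that nonce.
    if authority["nonce"] != inquire["nonce"]:
        return False
    payload = str(authority["key"]).encode() + inquire["nonce"]
    expected = hmac.new(signing_key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, authority["signature"])
```

An authority message captured from a previous search carries the wrong nonce and is rejected, which is exactly the replay protection the nonce provides.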
Once the security module 225 has verified the authority message, thereby verifying for the node 221 that it has, indeed, found the node identified by the searched-for key 4000, the node 221 can provide the application payload to the application code 210. The application payload is the information that the remote application represented by the node identified by key 4000 registered with that key. In one embodiment, the application payload can be the IP address, or other common network address, of the computing device 40 hosting the remote application. In such a case, once the application code 210 receives the IP address, it can proceed to communicate with the remote application using traditional TCP/IP communications.
In one embodiment, the searching for another node, such as in the manner just described, can be performed by any node of the DRT mesh 99. In an alternative embodiment, however, only nodes that are registered within the DRT mesh 99 can be allowed to search. To register itself with the DRT mesh 99, a node, such as the node 221, can publish a key, or other identifying value, to one or more of the other nodes of the DRT mesh. Turning to FIG. 7, a communicational flow 600 is shown, illustrating an exemplary series of communications which can enable the node hosted by computing device 610 to register itself as part of the DRT mesh 99. Initially, in one embodiment, the node hosted by computing device 610, such as node 221 of node instance 220, can request the security module 225 to generate a unique key which can be used to identify the node instance 220 within the DRT mesh 99. In an alternative embodiment, depending on the functionality of the security module 225, the node 221 can, simply, itself select an identifying key to be used in registering the node in the DRT mesh 99. In the example illustrated in FIG. 7, the key 1900 can be selected to identify the node instance 220 hosted by computing device 610.
To register itself with the DRT mesh 99, the node 221 can initially perform a "resolve" operation, whereby a node having a key close to the selected key for the node instance 220 is identified. In one embodiment, the resolve operation can be performed by incrementing the key 1900 by one and then searching the DRT mesh 99 for the new key value, namely 1901 in the present example. Thus, as illustrated in the communicational flow diagram 600, communication 620 from the computing device 610 can search for the key 1901. For simplicity, the search request 620 is illustrated as generally being directed to the DRT mesh 99, instead of a particular node of the DRT mesh, though, as described in detail above, such search requests are initially directed to a node having a similar key stored in the routing table maintained by the searching node, and are then redirected by each subsequent node, as appropriate. Similarly, for simplicity, a search response communication 625 is illustrated as generally coming from the DRT mesh 99, as opposed to a particular node, though, again as described in detail above, such search responses can be received from one or more nodes that were contacted by the searching node.
In the example illustrated in FIG. 7, the response communication 625 can indicate that the node identified by key 2000 has the closest key to the searched-for key of 1901. Once the node instance 220, which is attempting to register itself with the DRT mesh 99 using key 1900, is aware of the node identified by key 2000, it can send it a "flood" message 630, announcing the presence of the node instance 220 as part of the DRT mesh and providing the identifying key 1900. In response to receiving a flood message, a node can initially use a security module to verify that the flood message was properly signed and that the associated certificate is either an approved root certificate, or is part of a certificate chain that includes the approved root certificate. Once the flood message is verified, the receiving node, such as the node identified by the key 2000 in the example illustrated in FIG. 7, can add the received key and routing information into its routing table 21. Thus, as shown, an entry 635 for key 1900 can be added to the routing table 21 of the node identified by the key 2000.
In addition to adding the key 1900 to its routing table 21, the node identified by key 2000 can further, in response to the flood message 630, forward the flood message to the other entries in its routing table 21, with the assumption that such other entries may also wish to learn of the new node. Thus, as illustrated in FIG. 7, the node identified by the key 2000 can send forwarded flood messages, via communication 640, to the other nodes in its routing table 21, namely the nodes identified by keys 1000, 3000 and 5000. Upon receipt of the forwarded flood messages 640, each of the receiving nodes can respond in a manner similar to the node identified by key 2000 when it received the flood message 630; namely, each of the receiving nodes identified by keys 1000, 3000 and 5000 can add the node identified by key 1900 to their respective routing tables 11, 31 and 51 if their respective security modules verify the forwarded flood messages 640.
Once the routing information for the node corresponding to key 1900 is stored in one or more routing tables of other nodes comprising the DRT mesh 99, the node corresponding to key 1900 can be considered to be registered as part of the DRT mesh. In another embodiment, a minimum number of nodes must have entries in their routing tables corresponding to the node identified by key 1900 for the node to be considered a registered node of the DRT mesh 99. In such a case, the nodes receiving the forwarded flood message 640 can, themselves, further forward the flood message to the nodes identified in the receiving nodes' routing tables. If a node receives a flood message from more than one source, it can merely ignore the duplicate flood messages. Nevertheless, by forwarding the flood message multiple times, a greater number of nodes' routing tables can be updated with the key and routing information contained in the flood message, enabling the node sending the flood message to register itself in the DRT mesh 99 that much more efficiently.
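The multi-hop flood propagation with duplicate suppression can be sketched as a graph traversal. The in-memory `mesh` structure (each node key mapped to that node's routing table) and the function name are illustrative assumptions; the signature verification performed by each receiving node's security module is omitted for brevity.

```python
def flood(mesh, origin_key, start_key, routing_info):
    """Propagate a flood message announcing `origin_key` through the mesh,
    starting at `start_key`, with each node forwarding to the entries in
    its own routing table and ignoring duplicate flood messages. Returns
    the set of node keys that recorded the new entry."""
    seen = set()
    pending = [start_key]
    while pending:
        node = pending.pop()
        if node in seen or node == origin_key:
            continue  # a node merely ignores duplicate flood messages
        seen.add(node)
        # Record the new node's key and routing, then forward the flood to
        # every node already present in this node's routing table.
        neighbors = list(mesh[node])
        mesh[node][origin_key] = routing_info
        pending.extend(neighbors)
    return seen
```

Starting the flood at the node found by the resolve operation (key 2000 in the FIG. 7 example) lets the announcement reach that node's routing-table entries, and theirs in turn, registering the new node in many routing tables efficiently.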
As can be seen from the above descriptions, a DRT mesh can be created, maintained and utilized by nodes having componentized modules implementing various defined tasks, including a security module that can provide for the use of root certificates to verify messages, and a routing table management module that can provide for a routing table comprising entries that are close, taking into account network topology, to the node. In view of the many possible variations of the subject matter described herein, we claim as our invention all such embodiments as may come within the scope of the following claims and equivalents thereto.