CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority under 35 U.S.C. §111 to European Application EP11005641.3, filed Jul. 11, 2011.
BACKGROUND
1. Field of the Invention
This invention is in the field of managing networks having a plurality of connected devices from various sources.
2. Prior Art
The present invention relates to a method and system for managing network devices of generic vendors and manufacturers.
The pervasive diffusion of access points and other network devices has, in recent years, increased the costs that companies and individuals have to bear in order to manage, maintain and monitor this multitude of devices. Managing multiple network devices such as Wi-Fi Access Points and wireless CPE (Customer Premise Equipment) is a very time-consuming task.
The existing methodologies to manage network devices, such as wireless access points, can be classified into three main categories: non-centralized management systems, centralized hardware systems and centralized remote software systems. Each of these classes of solution has some drawbacks as described in the following.
Non-centralized management systems allow network administrators to configure and monitor each network device individually thanks to software, often called firmware, installed and running on the device itself. Different vendors/manufacturers implement proprietary protocols on their devices that allow the network administrator to access them in a variety of ways, e.g. a web interface, a CLI (Command Line Interface), or the SSH (Secure SHell) protocol.
Non-centralized management systems are usually adopted in consumer-grade routers, gateways and access points, for low-end price-sensitive markets. The drawbacks of this approach are the huge amount of time required to manage each device in a one-by-one fashion; the non-homogeneous user interface provided by each vendor/manufacturer; and the increased probability of human error, as no centralized consistency check algorithm can be adopted.
Centralized hardware systems are usually designed to provide sophisticated management tools suitable for high-end enterprise markets. These solutions require the installation of a hardware controller, e.g. a server with an installed application, which allows the network administrator to configure all the network devices through one single interface, saving time and reducing managing costs. The drawback of this solution, usually preferred in large plants, airports, harbors, etc., is a higher initial investment, or capex.
Centralized software-based remote systems allow the network administrator to reach and manage network devices without the need of purchasing a server or dedicated hardware and of physically installing it. Examples of these systems are described by U.S. Pat. No. 7,852,819, U.S. 2008/0294759, U.S. 2008/0285575, and U.S. 2008/0304427.
Although these systems are attractive both from a cost and from a time-saving standpoint, they also exhibit several limitations and pose challenges, as described below.
Network devices usually have to operate according to specific procedures in order to be reached and managed by the network administrator, thus involving the deployment of specific firmware/software on all controllable devices. The solutions disclosed by the above mentioned patent documents imply that manufacturers of enterprise-grade network devices develop proprietary methods and solutions for centrally configuring their network devices. This means that, in order to be enabled to use a centralized software-based remote system provided by a specific manufacturer, the user is required to purchase and install in its network only network devices produced by said specific manufacturer. Network devices from generic vendors/manufacturers, such as, for example, low-cost consumer-grade network devices, cannot be managed.
An additional drawback is related to the procedures adopted to provide remote access to the network devices which reside within a private network of a user. In the solutions disclosed by the above mentioned patent documents, the connection between the host network, i.e. the remote controller, and the network devices is initiated and established by each single network device of the managed network. This aspect represents a strong limitation on the scalability of the system.
In order to overcome the above mentioned drawbacks, a centralized software-based remote system that allows managing network devices from various vendors and manufacturers would be advantageous.
U.S. 2011/0087766 discloses a central unified services and device management framework operated to simultaneously manage various types of resources on behalf of multiple organizations.
A drawback of the solution disclosed by U.S. 2011/0087766 is related to the fact that the connection between the central management facility and the network devices is initiated and established by each single network device of the managed network. This aspect represents a strong limitation on the scalability of the system. In addition, this solution still requires the deployment of specific firmware/software on all controllable network devices in order to enable them to make initial contact with the central facility after they have been inserted into the managed network.
This solution still fails to provide an improved centralized software-based remote system that allows managing network devices from generic vendors and manufacturers.
SUMMARY OF THE INVENTION
The invention includes a method of remote management in a network, the network comprising a plurality of nodes to be managed by a remote controller and at least one agent device, the at least one agent device being in number lower than the plurality of nodes, wherein:
- the at least one agent device makes initial contact with the remote controller in order to be authenticated by the remote controller and to establish a connection with the remote controller;
- after the connection is established, the remote controller executes a discovery procedure through intermediation of the at least one agent device for discovering the plurality of nodes;
- after executing the discovery procedure, the remote controller executes an identification procedure through intermediation of the at least one agent device for identifying the discovered nodes, including identification of at least one characterizing parameter selected from: model, vendor, manufacturer, software version, hardware version, firmware version, serial number and MAC address;
- the remote controller manages the discovered and identified nodes through intermediation of the at least one agent device, by using managing procedures specific for the identified nodes.
 
In this method, the connection with the remote controller may be a tunnel connection.
The tunnel connection may be established by the at least one agent device according to a tunnelling procedure including the step of trying in sequence a predetermined plurality of tunnelling protocols for establishing tunnel connection with the remote controller till a tunnel connection is successfully established.
The predetermined plurality of tunneling protocols may be tried in sequence following a selection criterion adapted to minimize resources required on the at least one agent device and/or on the remote controller in order to execute the tunnelling protocols.
The discovery procedure may include trying to establish a connection with the plurality of nodes, through intermediation of the at least one agent device, by using predetermined IP address and/or MAC address, or by using a scanning procedure scanning a predetermined multitude of IP addresses.
The predetermined multitude of IP addresses may comprise IP addresses included in at least one subnet corresponding to at least one interface of the at least one agent device, and/or generic IP addresses corresponding to IP addresses set by default by predetermined manufacturers and/or vendors.
When the at least one agent device comprises more than one interface, the scanning procedure may be executed for each interface.
The identification procedure may comprise:
a) selecting a specific node from the discovered nodes;
b) retrieving from a database of the remote controller a specific connection procedure associated with the specific node;
c) using the retrieved specific connection procedure for connecting to the specific node, through the intermediation of the at least one agent device, and obtaining from the specific node said at least one characterizing parameter.
With this method, when the database does not include a specific connection procedure associated with the specific node, the identification procedure may include trying in sequence a plurality of connection procedures for connecting to the specific node till connection is successfully established, the plurality of connection procedures being selected in sequence according to a predetermined selection criterion.
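Purely by way of illustration, and not as part of the claimed subject-matter, the identification procedure of steps a) to c), together with the fallback sequence of connection procedures, could be organized as in the following Python sketch; the ConnectionProcedure class, the connect callables and the cost-based ordering are assumptions of the sketch.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

@dataclass
class ConnectionProcedure:
    name: str          # e.g. "ssh", "https", "telnet"
    cost: int          # used as the predetermined selection criterion
    connect: Callable  # returns a dict of characterizing parameters, or None

def identify_node(node_id, db_procedures: Dict, fallback: List[ConnectionProcedure], agent) -> Optional[Dict]:
    """Steps a)-c): return the characterizing parameters of the selected node, or None."""
    # b) retrieve a node-specific connection procedure from the remote controller database
    specific = db_procedures.get(node_id)
    candidates = [specific] if specific else sorted(fallback, key=lambda p: p.cost)
    for procedure in candidates:
        # c) connect to the node through the intermediation of the agent device
        params = procedure.connect(agent, node_id)
        if params is not None:
            return params  # e.g. {"vendor": ..., "model": ..., "firmware": ...}
    return None            # no connection procedure succeeded
```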
When a connection to a specific node with a specified IP address and MAC address has to be established, and in case of IP address conflict between the specific node and at least one other node of the plurality of nodes, the at least one agent device may execute an IP conflict avoidance procedure making use of ARP protocol and ARP table, the IP conflict avoidance procedure comprising:
i. sending into the network a request according to ARP protocol in order to translate the specified IP address into a MAC address;
ii. after executing i., checking if the ARP table includes the specified IP address;
iii. in the positive case of ii., checking if the specified IP address is associated in the ARP table with the specified MAC address;
iv. in the positive case of iii., trying to establish a connection with the specific node by using the specified IP address;
v. in the negative case of iii., modifying the ARP table so as to associate the specified IP address with the specified MAC address, then trying to establish a connection with the specific node by using the specified IP address.
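A minimal Python sketch of steps i. to v. is given below for illustration only; the agent object and its helpers send_arp_request(), arp_table(), set_static_arp_entry() and connect_ip() are assumed wrappers around platform-specific commands (for example arping, "arp -s", a ping or a TCP connect) and are not part of the claimed method.

```python
def ip_conflict_avoidance(agent, iface, specified_ip, specified_mac):
    """Steps i.-v., expressed with the hypothetical agent helpers described above."""
    agent.send_arp_request(specified_ip, iface)        # i. ask the LAN to resolve the IP address
    arp_table = agent.arp_table(iface)                 # ii. read the local ARP table (IP -> MAC)
    if specified_ip not in arp_table:
        return None                                    # the specified IP address did not answer
    if arp_table[specified_ip] != specified_mac:       # iii. a different node owns this IP address
        # v. force the IP/MAC association so that the intended node is addressed
        agent.set_static_arp_entry(specified_ip, specified_mac, iface)
    # iv./v. try to establish a connection with the specified IP address
    return agent.connect_ip(specified_ip, iface)
```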
When a connection to a specified IP address through a specified interface of the at least one agent device has to be established, and in case of IP address conflict between the specified IP address and the IP address of the specified interface and/or in case the specified IP address is not included in a subnet corresponding to the specified interface, the at least one agent device may execute a subnet conflict avoidance procedure comprising:
I. checking if the specified IP address is included in the subnet corresponding to the specified interface and if the specified IP address is different from the IP address of the specified interface,
II. in the affirmative case of I., the at least one agent device tries to establish a connection by using the specified IP address,
III. in the negative case of I., the at least one agent device temporarily assigns to the specified interface both a subnet including the specified IP address and an IP address included in said subnet, which is different from the specified IP address.
With this method, when the at least one agent device comprises a plurality of interfaces, step III may also comprise a step of temporarily disabling any other interface of the plurality of interfaces, other than the specified interface, which corresponds to a subnet including the specified IP address.
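The following Python sketch, given only as an illustration, combines steps I. to III. with the optional interface-disabling step; the /24 prefix, the iface.subnet and iface.ip attributes, and the agent helpers connect_ip(), add_temporary_address(), disable_interface() and interfaces() are assumptions of the sketch.

```python
import ipaddress

def subnet_conflict_avoidance(agent, iface, specified_ip):
    """Steps I.-III., plus the optional disabling of other conflicting interfaces."""
    target = ipaddress.ip_address(specified_ip)
    own_net = ipaddress.ip_network(iface.subnet, strict=False)     # e.g. "192.168.1.0/24"
    if target in own_net and specified_ip != iface.ip:
        return agent.connect_ip(specified_ip, iface)               # II. no conflict: connect directly
    # III. temporarily assign to the interface a subnet including the specified IP address
    # and an address of that subnet different from the specified one
    temp_net = ipaddress.ip_network(f"{specified_ip}/24", strict=False)
    temp_ip = next(h for h in temp_net.hosts() if str(h) != specified_ip)
    agent.add_temporary_address(iface, str(temp_ip), str(temp_net))
    for other in agent.interfaces():                               # optional step for multi-interface agents
        if other is not iface and target in ipaddress.ip_network(other.subnet, strict=False):
            agent.disable_interface(other)                         # temporarily disable conflicting interfaces
    return agent.connect_ip(specified_ip, iface)
```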
The invention may include a remote managing system comprising a remote controller and a network, the network comprising a plurality of nodes to be managed by the remote controller and at least one agent device, the at least one agent device being in number lower than the plurality of nodes, characterized in that the remote controller and the at least one agent device comprise hardware and/or software and/or firmware adapted to carry out any of the methods described herein.
A computer program may be adapted to carry out the steps concerning the remote controller in any of the methods described herein. A computer program may be adapted to carry out the steps concerning the agent device in any of the methods described herein.
In a further aspect, the present invention relates to a computer program product comprising program code means stored on a computer readable medium for carrying out the steps concerning the remote controller in any of the methods described herein.
In a further aspect, the present invention relates to a computer program product comprising program code means stored on a computer readable medium for carrying out the steps concerning the agent device in any of the methods described herein.
In a further aspect, the present invention relates to a remote controller comprising hardware and/or software and/or firmware means adapted to carry out the steps concerning the remote controller in any of the methods described herein.
In a further aspect, the present invention relates to an agent device comprising hardware and/or software and/or firmware means adapted to carry out the steps concerning the agent device in any of the methods described herein.
In the present description and claims, the term:
“network” may indicate any wide or local area network, wired, wireless, hybrid wired/wireless;
“network device” may indicate any device of a network such as a router, a gateway, an access point, a server, a client device (such as a PC, tablet, laptop, mobile phone, and similar);
“node” may indicate a network device to be managed by a remote controller. Examples of nodes are routers, access points, gateways, firewalls, and network hard drives;
“tunnel connection” may indicate a connection established among network devices encapsulating one network protocol, said “payload protocol”, inside the messages of another network protocol, said “delivery protocol”. This mechanism allows the payload protocol to be delivered even if it is not explicitly allowed by network obstacles such as firewalls, NAT translators, gateways, proxies, etc., which instead allow the delivery protocol to be delivered;
“tunneling protocol” may indicate a specific protocol adapted to implement a tunnel connection. Each tunneling protocol is able to be delivered across a specific subset of network obstacles.
BRIEF DESCRIPTION OF THE DRAWINGS
Further characteristics and advantages of the present invention will become clearer from the following detailed description of some preferred embodiments thereof, made as an example and not for limiting purposes with reference to the attached drawings. In such drawings,
FIG. 1 schematically shows a system according to an embodiment of the invention;
FIG. 2 schematically shows a system according to another embodiment of the invention;
FIG. 3 schematically shows a remote controller according to an embodiment of the invention;
FIGS. 4A and 4B schematically show a remote controller database according to two embodiments of the invention;
FIGS. 5A and 5B schematically show the structure of a section of the remote controller database according to two embodiments of the invention;
FIG. 6 schematically shows an agent device according to an embodiment of the invention;
FIG. 7 shows a flowchart of an algorithm to implement a tunneling procedure according to an embodiment of the invention;
FIG. 8 shows a flowchart of an algorithm to implement a discovery procedure according to a first embodiment of the invention;
FIG. 9 shows a flowchart of an algorithm to implement a discovery procedure according to a second embodiment of the invention;
FIG. 10 shows a flowchart of an algorithm to implement a discovery procedure according to a third embodiment of the invention;
FIG. 11 shows a flowchart of an algorithm to implement a discovery procedure according to a fourth embodiment of the invention;
FIG. 12 shows a flowchart of an algorithm to implement an IP conflict avoidance procedure according to an embodiment of the invention;
FIG. 13 shows a flowchart of an algorithm to implement a subnet conflict avoidance procedure according to an embodiment of the invention;
FIG. 14 shows a flowchart of an algorithm to implement an identification procedure according to a first embodiment of the invention;
FIG. 15 shows a flowchart of an algorithm to implement an identification procedure according to a second embodiment of the invention.
DETAILED DESCRIPTION
FIG. 1 shows a network device managing system 10 according to an embodiment of the invention, comprising a remote controller 1 (in the present description and drawings referred to also as “multi-vendor controller” or MVC), a wide area network (WAN) 2 and a local area network (LAN) 100. In the exemplary embodiment, the remote controller 1 is located in the WAN 2, the WAN 2 comprises the Internet, and the LAN 100 comprises a plurality of network devices 110, 120 and 130.
In one embodiment of the invention, LAN 100 is an Ethernet/IP (Internet Protocol) LAN. That is, LAN 100 can have any physical layer, and has an Ethernet layer 2 (or data link layer) and an IP layer 3 (or network layer). Preferably, LAN 100 supports translation protocols for resolution of layer 3 addresses (e.g. IP addresses) into layer 2 addresses (e.g. Media Access Control or MAC addresses) and vice-versa. Examples of these protocols respectively are ARP (Address Resolution Protocol) and RARP (Reverse Address Resolution Protocol), well known in the art and specified, for example, by RFC 826 and RFC 903.
Network devices can be wireless or wired routers, access points, gateways, local servers, client devices (such as PCs, tablets, laptops, mobile phones, etc.) and similar.
Network devices to be managed by remote controller 1 are hereinafter referred to as nodes. The nodes can be any network device that might require to be configured or managed such as, for example, access points (AP), routers, gateways, firewalls, and network hard drives.
As explained in more detail below, thanks to the invention, the nodes to be managed by remote controller 1 can be of any vendor/manufacturer and need not belong to any specific class or range of (MAC) addresses.
In FIG. 1, LAN 100 comprises a gateway (GW) 130 for connection to the Internet 2, an agent device 120, and four nodes 110 to be managed.
The agent device 120 is an intermediate component of the system 10 that allows the remote controller 1 to communicate with any node of the managed LAN 100.
In particular, the agent device can be any network device of the LAN 100 wherein an agent utility (i.e. computer program) is deployed, which is adapted to carry out the steps of the managing method of the invention relating to the agent device. Advantageously, the network device wherein the agent utility is deployed is a local server of the LAN, which is an always-running device. However, it can also be a client device or a node to be managed, as exemplarily shown in the embodiment of FIG. 2.
The agent utility is a software and/or firmware that can be installed onto a local server of the LAN 100, but can also be represented by a temporarily running software on a client device inside the LAN 100, e.g. an active-x or a browser add-on which is active only when the user turns on a suitable interface of the client device, such as a web-site on his/her laptop, PC, or similar device.
The agent device 120 can comprise one or more interfaces, each covering a subnet of nodes 110 of LAN 100. Each interface is advantageously identified by a specific interface identifier (e.g.: interface 1, interface 2, . . . interface n) and each subnet is identified by an identifier representing a part of the IP address (usually the head portion of the IP address) which should be shared among all nodes 110 belonging to the subnet. For example, considering a 32-bit IP address made of 4 sections A.B.C.D, the subnet identifier may represent the initial section(s) A, A.B, or A.B.C, or an intermediate section (e.g. B or B.C).
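Purely to illustrate the notion of subnet identifier, the short Python fragment below derives the head portion A.B.C of an interface address; the address and the /24 prefix length are invented for the example.

```python
import ipaddress

iface_ip = "192.168.10.7"
subnet = ipaddress.ip_network(f"{iface_ip}/24", strict=False)
print(subnet)                                            # 192.168.10.0/24
print(str(subnet.network_address).rsplit(".", 1)[0])     # "192.168.10": identifier shared by the subnet's nodes
```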
Even if in the embodiment of FIG. 1 the LAN 100 comprises a single agent device 120, each LAN can comprise more than one agent device, as shown in the embodiment of FIG. 2. However, according to the invention, the LAN 100 advantageously has a number of agent devices which is lower than the number of nodes to be managed.
The MVC 1 is the network element through which the user(s) (e.g. network administrator(s)) can reach all the nodes that might be configured, managed or monitored. In the embodiment of FIG. 1, MVC 1 is a centralized server available on the Internet. However, the remote controller can also be located in a public cloud, in an intranet, in a private cloud, or even in the LAN for scalability or security reasons.
The term remote controller is used to indicate a centralized remote host or a software/firmware utility (i.e. computer program) deployed on a centralized remote host.
For example, in the embodiment of FIG. 2, the system comprises five LANs 100, 200, 300, 400, 500, the Internet 2 and three MVCs 1, 1′ and 1″. MVC 1 is a multi-vendor controller available as a public cloud service, MVC 1′ is located in a private cloud, i.e. a de-localized network controlled by a company, and MVC 1″ is located inside customer LAN 500, in order to meet specific network security policies specified by the network administrator or by the company.
In the embodiment of FIG. 2, LAN 100 is connected to the Internet through gateway 130, comprises a single agent device 120 and a plurality of nodes 110 and client devices 140.
LAN 200 is connected to the Internet through gateway 230 and comprises a plurality of nodes 210 and client devices 240. In order to increase the availability of the management system, LAN 200 comprises two agent devices 220-1 and 220-2 that can be operated alternatively (backup mechanism).
LAN 300 represents a hierarchically organized network comprising a sub-network 300′.
LAN 300 is connected to the Internet through gateway 330, while sub-network 300′ is hierarchically connected to the Internet through gateways 330′ and 330. LAN 300 further comprises a first agent device 320-1, a second agent device 320-2, a third agent device 320-3, a plurality of nodes 310, 310′ and client devices 340, 340′. Second agent device 320-2, gateway 330′, access points 310′ and client devices 340′ are part of the sub-network 300′. Third agent device 320-3 is deployed inside a client device 340. This can be implemented in a variety of ways, such as a java applet, an active-x, a browser add-on, an application installed on the client, etc. The common aspect of on-client deployment of the agent utility is that when the client is turned off, if no other agent device is available for the specific network, then that network is not manageable; this is not the case of network 300, as first agent device 320-1 would substitute third agent device 320-3 (backup mechanism).
LAN 400 is connected to the Internet through gateway 430, comprises two agent devices 420-1, 420-2 and a plurality of nodes 410 and client devices 440. The two agent devices 420-1, 420-2 are implemented in two nodes 410 (i.e., the agent utility is deployed on two nodes 410). The two agent devices 420-1, 420-2 individually allow operating on all nodes 410 of LAN 400.
LAN 500 is connected to the Internet through gateway 530, comprises a single agent device 520 and a plurality of nodes 510 and client devices 540.
Agent devices are configured to connect to a specific MVC, which can be available in the Internet, in a private cloud, or locally. For example, in FIG. 2, agent devices 120, 220-1, 220-2, and 320-1 are configured to be connected to MVC 1 located in a public cloud service; agent devices 320-2, 420-1 and 420-2 are configured to be connected to MVC 1′ located in a private cloud service, and agent device 520 is configured to be connected to MVC 1″, locally deployed.
Each remote controller and agent device in the system 10 and LAN(s) (in the following referred to only with the reference numbers 1, 120, and 100, respectively) comprises hardware and/or software and/or firmware modules adapted to implement the corresponding steps of the management method of the invention.
As further explained hereinafter, thanks to the invention, network devices other than the agent device and the remote controller need no specific adaptation in order to implement the management method. In particular, one innovation of the claimed invention is the fact that it is not required to deploy any specific software onto the nodes to be managed, nor to assume any specific procedure or behavior, in order to manage them.
According to the management method of the invention:
- the agent device makes initial contact with the remote controller 1 in order to be authenticated and establish a connection with the remote controller 1;
- after the connection is established, the remote controller 1 executes a discovery procedure through intermediation of the agent device 120 for discovering nodes 110 of LAN 100 to be managed;
- after executing the discovery procedure, the remote controller 1 identifies the discovered nodes 110 through intermediation of the agent device 120;
- the remote controller 1 manages the discovered and identified nodes 110 through intermediation of the agent device 120, by using managing procedures specific for the identified nodes 110.
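Merely as an illustrative aid, the above flow can be condensed in the following Python-style sketch; the agent and controller objects and their methods are placeholders for the tunneling, discovery, identification and managing procedures detailed in the remainder of the description, not an actual implementation.

```python
def manage_lan(agent, controller):
    """High-level sketch of the management method, assuming a single agent device."""
    tunnel = agent.connect_and_authenticate(controller)        # initial contact made by the agent device
    nodes = controller.discover_nodes(via=tunnel)              # discovery through the agent device
    for node in nodes:
        params = controller.identify_node(node, via=tunnel)    # model, vendor, firmware version, ...
        procedures = controller.database.managing_procedures(params)
        controller.manage(node, procedures, via=tunnel)        # node-specific managing procedures
```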
 
In order to implement the management method of the invention, the remote controller 1 advantageously comprises three logical sub-components, reported in FIG. 3: a database 14; a back-end 12; and a front-end 16.
A user, hereinafter referred to as the network administrator, can interact with the remote controller 1 in order to access, configure, control and monitor the nodes 110 through at least one user interface UI 18. User interfaces 18 can be command line interfaces, desktop applications, web applications, mobile device applications such as iPad, iPhone or Android applications, widgets, and similar. The at least one user interface UI 18 is adapted to exchange information with the front-end 16.
The front-end 16 allows, through user interface UI 18, the network administrator to choose which nodes 110 to access, configure, control and monitor.
The database 14 advantageously includes a list of nodes 110 to be managed. This list can be created by the network administrator or can be automatically generated by the remote controller 1 and, optionally, confirmed by the network administrator. The database 14 advantageously also includes a set of protocols and procedures required to access, configure, control and monitor nodes manufactured by different manufacturers/vendors, each of them characterized by specific device characterizing parameters, such as, for example, model, vendor, manufacturer, software version, hardware version, firmware version, serial number and/or MAC address.
The back-end 12 is the sub-component of the remote controller 1 that allows communication to be established between the remote controller 1 and the agent device 120.
The database 14, back-end 12, and front-end 16 can be physically deployed on a same server, as reported in FIG. 3, or on different servers or virtual servers or clouds, private or public.
FIG. 4A shows an embodiment of a structure for database 14 comprising: a first section 900 containing a list of nodes, identified, for example, by IP (Internet Protocol) and/or MAC (Media Access Control) addresses; a second section 910 wherein the nodes of the list (identified by their IP and/or MAC addresses) are associated with specific device characterizing parameters such as model, vendor, manufacturer, software version, hardware version, firmware version and/or serial number; a third section 920 wherein the nodes of the list are associated with specific configuration parameters that can be specified by the network administrator; and a fourth section 930 comprising a plurality of managing procedures, each associated with specific device characterizing parameters such as, for example, model, vendor, manufacturer, software version, hardware version, firmware version and/or serial number.
The structure of FIG. 4B is similar to that of FIG. 4A except for the fact that it further comprises a fifth section 911 comprising a list of sequences of connection procedures, as explained in further detail hereinafter, when dealing with the node identification procedure of FIG. 15.
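One possible, purely illustrative in-memory representation of database 14 and of its sections 900, 910, 920, 930 and 911 is sketched below in Python; the field names are assumptions of the sketch and are not mandated by the description.

```python
database_14 = {
    "nodes_900": {                   # first section: node list, by IP and/or MAC address
        "available": [], "unreachable": [], "discovered": [],
    },
    "parameters_910": {              # second section: node -> characterizing parameters
        # ("192.168.1.10", "AA:BB:CC:DD:EE:01"): {"vendor": "...", "model": "...", "firmware": "..."}
    },
    "configuration_920": {},         # third section: node -> administrator-defined configuration parameters
    "procedures_930": {              # fourth section: characterizing parameters -> managing procedures
        # ("vendorX", "modelY"): {"connection": [...], "configuration": [...], "monitoring": [...]}
    },
    "connection_sequences_911": [],  # optional fifth section: ordered sequences of connection procedures
}
```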
As shown in FIG. 5A, in the first section 900 nodes can be classified into “available nodes” 901, “unreachable nodes” 902, and “discovered nodes” 903.
As shown in FIG. 5B, in the fourth section 930 the managing procedures can be classified into connection procedures 931, configuration/control procedures 932 and monitoring procedures 933. The managing procedures can be implemented according to protocols and/or mechanisms supported by proprietary interfaces of the nodes 110 to be managed. Examples of such protocols and mechanisms well known in the art are the following: CLI (Command Line Interface), SSH (Secure SHell, as for example defined by RFC 4251), Telnet protocol, HTTP (Hyper Text Transfer Protocol, as for example defined by RFC 2616), HTTPS (Hyper Text Transfer Protocol over Secure Socket Layer, as for example defined by RFC 2818), SNMP (Simple Network Management Protocol, as for example defined by RFC 1157), OPC (Open Connectivity, as described for example at the web site opcfoundation.org) protocol, SCADA (Supervisory Control And Data Acquisition) architecture, mechanisms to download configuration file(s) from a node and to upload into the node new configuration file(s) with modified parameters, such as FTP (File Transfer Protocol, as for example defined by RFC 959), TFTP (Trivial File Transfer Protocol, as for example defined by RFC 1350), SCP (Secure Copy Protocol); and mechanisms that mimic the navigation of a virtual user through the web-based interface of a node, emulating navigation commands, such as HTTP-based queries, HTTPS-based queries, AJAX (Asynchronous JavaScript and XML) interactions.
In order to implement the managing method of the invention, the agent device 120 advantageously comprises a user interface 129, an agent configuration section 121, a tunneling connection section 124, an operation section 125, a LAN connection section 126, a tunneling mechanism section 127, and a peer agent discovery section 128, as shown in FIG. 6.
The agent configuration section 121, in its turn, comprises a tunneling configuration section 122 and a security restrictions section 123.
The tunneling configuration section 122 advantageously comprises parameters required by specific tunneling mechanisms (such as, for example, the MVC address, proxy authentication parameters, etc.) or administrative exclusion of specific tunneling mechanisms (e.g. for security reasons).
The security restrictions section 123 allows the user to deny access to specific nodes 110 and to be compliant with strict security policies. If these restrictions are specified, both the agent device 120 and the remote controller 1 will not be able to reach the restricted nodes 110.
The tunnel connection section 124 is adapted, in cooperation with the tunneling mechanism section 127, to establish a tunnel connection with a specified remote controller 1 by taking the appropriate actions to overcome a variety of network obstacles that can prevent the agent device 120 from successfully connecting to the remote controller 1, as explained in more detail hereinafter with reference to FIG. 7. Said obstacles can include NAT translators, firewalls, proxies, traffic-shapers and similar.
The operation section 125 is adapted to enable execution of operations, such as managing procedures (that can be classified into connection procedures 931, configuration/control procedures 932 and monitoring procedures 933) requested by the remote controller 1 for a specific node 110, and steps of the tunneling procedure, discovery procedure, identification procedure, IP conflict avoidance procedure and subnet conflict avoidance procedure, described in detail hereinafter.
The LAN connection section 126 is adapted to establish LAN connections with the nodes 110 according to techniques known in the art, such as for example Ethernet (IEEE 802.3), Wi-Fi (IEEE 802.11), Fiber Optic or other network standards.
The tunneling mechanism section 127 is adapted to execute a tunneling procedure, as explained in more detail hereinafter with reference to FIG. 7.
The peer agent discovery section 128 is adapted to implement a peer discovery procedure in order to discover any other agent device that may be present in the LAN. This procedure is described in more detail below, with reference to FIG. 7.
The user interface 129 enables the user (e.g., the network administrator) to directly interact with the agent device 120.
An advantage of the invention is that a tunnel connection does not need to be maintained between the remote controller 1 and each individual node 110 of the LAN 100, as the tunnel connection is established only with the single agent device 120 (or with a number of agent devices lower than the total number of nodes 110 of the LAN).
This aspect is advantageous, compared to other solutions, for at least the following reasons:
it reduces the resources required at the remote controller 1, as the number of connections is diminished by a factor K, equal to the average number of nodes 110 per agent device 120;
it reduces the resources required at the managed nodes 110, as no permanent connection is required between the nodes 110 and the remote controller 1;
it reduces the bandwidth occupation, as the agent device 120 can adopt a variety of well-known traffic compression and aggregation techniques that allow a reduction of the traffic both in terms of number of packets per second and of bytes per second. The number of packets per second is reduced from K*fs in non-agent-device based solutions (where K is the average number of nodes 110 per agent device 120 and fs is the frequency with which information is sent) to 1*fs in the solution of the invention, thus reducing the number of packets by a factor K and saving processing power at each node 110. On the other hand, the traffic expressed in terms of bytes per second is reduced from K*fs*D (where D is the average packet size in non-agent-device based solutions) or K*fs*D′ (where D′<D is the average packet size in non-agent-device solutions implementing a local compression at the nodes 110) to 1*fs*(K*D″), where K*D″ is the average size of the packet that is sent by the agent device 120 and includes all the information of the K nodes 110. This packet has an average size K*D″<K*D′<K*D thanks to the compression provided by well-known techniques that cannot be adopted in non-agent-device based solutions. In fact, these techniques leverage the mutual information, or correlation, among packets in order to reach higher compression ratios.
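A small numeric illustration of the above reduction follows; the values of K, fs, D, D′ and D″ are chosen arbitrarily for the example and do not come from the description.

```python
K, fs = 50, 1.0              # 50 nodes per agent device, one report per second per node
D, D1, D2 = 400, 300, 150    # average packet sizes in bytes: raw, locally compressed, jointly compressed

packets_without_agent = K * fs                 # 50 packets/s towards the remote controller
packets_with_agent = 1 * fs                    # 1 aggregated packet/s

bytes_without_agent = K * fs * D               # 20000 bytes/s
bytes_with_local_compression = K * fs * D1     # 15000 bytes/s with per-node compression only
bytes_with_agent = 1 * fs * (K * D2)           # 7500 bytes/s, thanks to cross-node compression
print(packets_without_agent, packets_with_agent, bytes_without_agent, bytes_with_agent)
```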
In order to connect to the remote controller 1, the agent device 120 establishes a tunnel connection using a variety of techniques to overcome the aforementioned network obstacles that include, but are not limited to, NAT translators, UDP blocks, firewalls, gateways, traffic shapers, http proxies, https proxies, socks proxies, and so on.
Tunneling techniques known in the art, e.g. UDP tunnel, are able to pass only a subset of said network obstacles (e.g. NAT translators). This requires the network administrator to modify the security policies of the LAN in order to guarantee the proper communication with the remote controller 1. Unfortunately, especially in large enterprises and corporations, it is not always possible to modify such policies.
The tunneling procedure proposed by the invention aims at guaranteeing a tunnel connection irrespective of any security policies configured in the LAN 100, without requiring any change of such security policies.
This is obtained thanks to a procedure wherein the agent device 120 tries in sequence a plurality of tunneling protocols for establishing a tunnel connection with the remote controller 1, till a tunnel connection is successfully established.
The plurality of tunneling protocols tried by the agent device 120 can be the following protocols known in the art: ip-over-ip tunneling; ip-over-udp tunneling; ip-over-tcp tunneling; ip-over-http tunneling; ip-over-http tunneling through proxy (http, https, socks, etc.); http tunneling through proxy and traffic shaper.
FIG. 7 shows an exemplary embodiment of an algorithm to implement the tunneling procedure according to the invention.
At block 701, the agent device 120 sets all the available tunneling protocols as “not-tried”. Non-configured protocols, i.e. the ones that require some configuration parameter to be specified by the network administrator (e.g. the proxy address in case of http tunneling through proxy), are discarded. In addition, the network administrator could decide to exclude specific protocols from the list, in order to be compliant with predetermined security rules or policies. In this case, excluded protocols will not be tried.
At block 702, the agent device 120 selects the protocol having the lowest cost among the ones labeled as “not-tried”. As each protocol is able to pass different kinds of network obstacles and has a specific cost in terms of required resources both on the agent device and on the server or cloud platform where the remote controller 1 is executed, the cost can represent the required resources at the agent device 120 and/or remote controller 1. The required resources can include computational power, network bandwidth, power consumption, memory usage or any other aspect that can be relevant for the specific network.
The cost associated with each protocol can be assigned in a variety of ways. For example, it can be an arbitrary integer number (1: ip packets, 10: udp tunnel, 100: tcp tunnel, 1000: http tunnel, . . . ).
At block 703 the agent device tries to establish a tunnel connection with the remote controller 1 using the selected tunneling protocol.
At block 704, the agent device 120 checks if the connection is established.
In the positive case, at block 705 the agent device 120 stays in a stand-by condition, waiting for instructions from the remote controller 1 or nodes 110.
In the negative case, at block 706 the agent device 120 discards the tried protocol and returns to block 702 to try another, not yet tried, tunneling protocol at a higher cost. In the worst-case scenario, the selected protocol will be the one with the highest cost (this assuming that there is at least one protocol enabling the agent device 120 to access the external network, e.g. the Internet).
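As a sketch only, blocks 701 to 706 can be rendered in Python as follows; the list of protocols, their costs and the try_tunnel callables are placeholders for the actual tunneling implementations and are not prescribed by the description.

```python
def establish_tunnel(controller, protocols):
    """protocols: list of (name, cost, try_tunnel); try_tunnel(controller) returns a tunnel or None."""
    for name, cost, try_tunnel in sorted(protocols, key=lambda p: p[1]):  # block 702: lowest cost first
        tunnel = try_tunnel(controller)                                   # block 703: attempt the connection
        if tunnel is not None:                                            # block 704: connection established?
            return tunnel                                                 # block 705: wait for instructions
        # block 706: discard this protocol and try the next, higher-cost, one
    return None                                                           # no protocol could pass the obstacles
```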
In a preferred embodiment, the algorithm also comprises blocks 707 and 708. At block 707, once connected, the agent device 120 checks if there is any other agent device on the same LAN 100. In the negative case, the algorithm ends. In the positive case, at block 708 the agent device 120 chooses whether to act as active agent or backup agent device based on a cost comparison: the agent device with the lower cost acts as active while the other one acts as backup agent device. One example of a metric for cost comparison is the time availability of the agent device: this metric would allow the system to use agent devices that are resident on a local server and consider client active-x or similar agents as backup.
Two exemplary strategies to allow the agent device 120 to discover the existence of concurrent agent devices on the same LAN are the following: centralized and peer-based. In the centralized strategy, the remote controller 1 compares the list of nodes 110, e.g. the MAC addresses, associated with each agent device 120; if two agent devices are associated with the same list of nodes 110, they are considered concurrent and the remote controller 1 decides which agent device 120 must act as backup. In the peer-based strategy each agent device 120 sends broadcast packets to establish a connection to other peer agent devices 120 on the same LAN 100; these packets contain information about actual connectivity to the remote controller 1 and cost; each agent device 120 can individually decide to act as backup or active agent for the LAN 100. Both of these solutions have strengths and weaknesses: the centralized approach simplifies the agent device structure and makes no assumption on agent intra-LAN connectivity but requires higher resources on the remote controller 1; the peer-based approach reduces the resources used by the remote controller 1, but requires a higher complexity on the agent devices 120.
As stated above, after the agent device 120 establishes a tunnel connection with remote controller 1, the remote controller 1 executes a discovery procedure through intermediation of the agent device 120 for discovering the nodes 110 of LAN 100.
The discovery procedure includes trying to establish a connection with nodes 110, through intermediation of the agent device 120, by using predetermined IP address and/or MAC address, or by using an automatic scanning procedure (not requiring any information from the network administrator) that scans a predetermined multitude of IP addresses for trying to establish a connection with the nodes 110.
FIG. 8 shows a first embodiment of a discovery procedure, making use of both IP and MAC addresses.
At block 801 the network administrator specifies to the remote controller 1 the IP and MAC addresses of a specific node 110 to be discovered.
In this embodiment and in the other embodiments of FIGS. 9, 10, 11, 14 and 15 described hereinafter, any time there is more than one agent device 120, the network administrator can select which is the preferred agent device to be used or can leave this choice to the remote controller 1. In the latter case the remote controller 1 can, for example, select the agent device that minimizes or maximizes some metric related to the node 110 and the agent device 120 itself (e.g. maximize availability of the connection between agent device 120 and node 110, minimize the difference between the agent device interface IP address and the node IP address, minimize the time of the latest connection between agent device 120 and node 110). The remote controller 1 can also select more than one agent device 120 and execute the following steps for each selected agent device in parallel or in sequence: this possibility allows reaching nodes 110 connected to only one agent device, hiding from the network administrator the complexity of agent device selection.
At block 802 the remote controller 1 instructs the agent device 120 to contact the specific node 110 at the specified IP address. If required (that is, if a node is reached that has a MAC address different from the specified MAC address), the agent device 120 (automatically or under the control of the remote controller 1) invokes an IP conflict avoidance procedure, as described in further detail below with reference to FIG. 12.
If required (that is, if the specified IP address is not included in any subnet IP identifier of the interfaces of agent device 120 and/or if the specified IP address corresponds to the IP address of the agent device interface), at block 802 the agent device 120 (automatically or under the control of the remote controller 1) can also invoke a subnet conflict avoidance mechanism, as described in further detail below with reference to FIG. 13.
At block 803 the remote controller 1 (or agent device 120) checks if a node having an IP conflict with the specific node 110 has been discovered during any execution of the IP conflict avoidance procedure.
In the positive case, at block 806 the remote controller 1 adds the IP and MAC addresses of the conflicting node in the first section 900 of database 14 in the list of “discovered nodes”.
In any case, at block 804 the remote controller 1 (or agent device 120) checks if the specific node 110 with the specified IP and MAC address has been reached.
In the negative case, at block 807 the remote controller 1 adds the specified IP and MAC addresses in the first section 900 of database 14 in the list of “unreachable nodes”.
In the positive case, at block 805 the remote controller 1 adds the specified IP and MAC addresses in the first section 900 of database 14 in the list of “available nodes”.
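For illustration, the discovery procedure of FIG. 8 can be condensed as follows; reach_node() is a hypothetical agent helper embedding the IP and subnet conflict avoidance procedures, and db is assumed to follow the illustrative database layout sketched above.

```python
def discover_by_ip_and_mac(db, agent, ip, mac):
    result = agent.reach_node(ip, mac)                     # block 802, with conflict avoidance if required
    for conflicting in result.conflicting_nodes:           # blocks 803/806: nodes hiding the target IP
        db["nodes_900"]["discovered"].append(conflicting)
    if result.reached:                                     # block 804
        db["nodes_900"]["available"].append((ip, mac))     # block 805
    else:
        db["nodes_900"]["unreachable"].append((ip, mac))   # block 807
```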
FIG. 9 shows a second embodiment of a discovery procedure, making use of only IP addresses.
At block 901 the network administrator specifies to the remote controller 1 only the IP address of a specific node 110 to be discovered.
The next step depends on the capabilities of the agent device 120.
At check 902, it is checked (by the remote controller 1 or agent device 120) if the agent device 120 supports a translation protocol for resolution of IP addresses into MAC addresses, such as, for example, ARP.
In the negative case, at block 903 the agent device 120 tries to contact the specific node by using the specified IP address.
At block 904, the remote controller 1 (or agent device 120) checks if a node 110 with the specified IP address has been reached.
In the negative case, at block 905 the remote controller 1 adds the specified IP address in the first section 900 of database 14 in the list of “unreachable nodes” and the procedure ends.
In the positive case, at block 906 the remote controller 1 adds the specified IP address and the MAC address, as retrieved during connection with the node 110, in the first section 900 of database 14 in the list of “available nodes”.
When the check at block 902 is positive (that is, the agent device 120 supports a translation protocol), at block 907 the agent device 120 sends a suitable request (e.g. ARP request) to the LAN 100 in order to translate the specified IP address into a corresponding MAC address, according to the translation protocol.
At block 908 the remote controller 1 checks if any MAC address has been received in answer to the request.
If no MAC address is received, at block 905 the remote controller 1 adds the specified IP address in the first section 900 of database 14 in the list of “unreachable nodes” and the procedure ends.
If only one MAC address is received, at block 906 the remote controller 1 adds the specified IP address and the received MAC address in the first section 900 of database 14 in the list of “available nodes”.
If more than one MAC address is received, at block 909 the remote controller 1 adds the specified IP address, with the plurality of received MAC addresses associated thereto, in the first section 900 of database 14 in the list of “discovered nodes”. In this case, the remote controller 1 (automatically or under the control of the network administrator) will have to solve the IP conflict, as explained for example hereinafter.
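A corresponding sketch of the procedure of FIG. 9 is given below; supports_arp, reach_node_by_ip() and arp_resolve() are assumed agent capabilities, with arp_resolve() taken to return the list of MAC addresses answering the ARP request.

```python
def discover_by_ip(db, agent, ip):
    if not agent.supports_arp:                                  # check 902
        mac = agent.reach_node_by_ip(ip)                        # blocks 903/904
        bucket = "available" if mac else "unreachable"          # blocks 906/905
        db["nodes_900"][bucket].append((ip, mac))
        return
    macs = agent.arp_resolve(ip)                                # block 907
    if not macs:
        db["nodes_900"]["unreachable"].append((ip, None))       # block 905
    elif len(macs) == 1:
        db["nodes_900"]["available"].append((ip, macs[0]))      # block 906
    else:
        db["nodes_900"]["discovered"].append((ip, macs))        # block 909: IP conflict still to be solved
```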
FIG. 10 shows a third embodiment of a discovery procedure, making use of only MAC addresses of nodes 110 to be discovered.
At block 1001 the network administrator specifies to the remote controller 1 only the MAC address of a specific node 110 to be discovered.
The next step depends on the capabilities of the agent device 120.
At check 1002, it is checked (by the remote controller 1 or agent device 120) if the agent device 120 supports a translation protocol for resolution of MAC addresses into IP addresses, such as, for example, RARP.
In the negative case, at block 1003 the agent device 120 (automatically or under the control of the remote controller 1) tries to reach the specific node by making a first scan of IP addresses. The scan can be made by applying the procedure described below with reference to FIG. 11 and stopping it when a node with the specified MAC address is reached.
At block 1004, the remote controller 1 checks if the specific node with the specified MAC address has been found during the first scan.
In the positive case, at block 1012 the remote controller 1 adds the specified MAC address with the associated IP address into the list of “available nodes” of the first section 900 of database 14 and the procedure ends.
In the negative case, before considering the node as unreachable, at blocks 1005 and 1006, the agent device 120 (automatically or under the control of the remote controller 1) preferably executes a second scan trying to reach all IP addresses successfully reached by the agent device 120 during the first scan of block 1003.
These IP addresses are contained in an IP-list created by the agent device 120 during the first scan, including the triplets IP-SUBNET-INTERFACE (i.e., IP address, subnet identifier and interface identifier) indicating, for each IP address reached by the agent device 120 during the first scan, the identifier of the subnet and the identifier of the agent device interface at which the IP address has been reached.
At block 1005 the agent device 120 (automatically or under the control of the remote controller 1) selects one IP-SUBNET-INTERFACE triplet from said IP-list.
At block 1006 the agent device 120 (automatically or under the control of the remote controller 1) tries to contact the IP address included in the triplet selected at block 1005 through the interface included in said triplet by invoking, if required, the subnet conflict avoidance procedure and, optionally, the IP conflict avoidance procedure, according to the procedures detailed hereinafter.
The second scan performed at blocks 1005 and 1006 is useful for reaching a node identified by the specific MAC address that might be hidden by an IP address contained in the IP-list, due to an IP conflict.
At block 1007, the remote controller 1 (or agent device 120) checks if a node has been reached.
In the positive case, at block 1012 the remote controller 1 adds the specified MAC address with the corresponding IP address in the first section 900 of database 14 in the list of “available nodes” and the procedure ends.
In the negative case, at block 1008 the remote controller 1 (or agent device 120) checks if all IP-SUBNET-INTERFACE triplets from said IP-list have been scanned.
In the negative case, the procedure returns to block 1005.
In the positive case (that is, when no device with an IP address included in the IP-list and the specified MAC address has been reached), at block 1009 the remote controller 1 adds the specified MAC address in the first section 900 of database 14 into the “unreachable nodes” list.
In the positive case of block 1002 (that is, when the agent device supports a translation protocol for resolution of MAC addresses into IP addresses), at block 1010 the remote controller 1 sends, through the agent device 120, a request (e.g. RARP request) into the LAN 100 in order to translate the specified MAC address into a corresponding IP address.
At block 1011 the remote controller 1 checks if any IP address has been received in answer to the sent request.
If no IP address is received, the procedure continues at block 1003.
If only one IP address is received, at block 1012 the remote controller 1 adds the specified MAC address with the received IP address in the first section 900 of database 14 in the list of “available nodes”.
If more than one IP address is received (e.g., when there are aliases on an interface of a node 110 so that there is more than one IP address associated with such interface, or there is a MAC conflict case), at block 1013 the remote controller 1 adds the specified MAC address, with the plurality of received IP addresses associated thereto, in the first section 900 of database 14 in the list of “discovered nodes”. Then, the remote controller 1 (automatically or under the control of the network administrator) can, for example, decide to use indiscriminately any of the IP addresses any time it needs to reach the node or to choose a specific one to use.
Advantageously, according to the invention, a scan-based discovery procedure is also contemplated, which can be implemented automatically by the remote controller 1 without requiring any information from the network administrator.
According to this scan-based discovery procedure, a plurality of subnet identifiers is considered. Nodes 110 in LAN 100 are discovered by trying to contact, through all agent device interfaces, all IP addresses (or a selected subpart) corresponding to such plurality of subnet identifiers (that is, by trying to contact all possible combinations of IP addresses obtainable with the subnet identifiers).
The plurality of subnet identifiers preferably includes the subnet identifiers associated with the interfaces of the agent device 120 and, preferably, also a plurality of subnet identifiers corresponding to typical (preferably known a-priori) subnet identifiers set by default by different manufacturers/vendors. This last feature advantageously allows extending the search to IP addresses, set by default by different manufacturers/vendors, which can belong to a subnet different from the ones covered by the agent device interfaces.
FIG. 11 shows an embodiment of the scan-based discovery procedure according to the invention.
At block 1101, the agent device 120 (under the control of the remote controller 1) generates a subnet-ID-list and adds into said subnet-ID-list the identifiers of the subnets covered by the agent device interfaces.
At block 1102 the remote controller 1 sends to the agent device 120 a plurality of subnet identifiers, corresponding to typical (preferably known a-priori) subnet identifiers set by default by different manufacturers/vendors.
At block 1103, the agent device 120 adds them into the generated subnet-ID-list.
At block 1104, the agent device 120 (automatically or under the control of the remote controller 1) selects from the generated subnet-ID-list a non-scanned subnet identifier.
At block 1105, the agent device 120 (automatically or under the control of the remote controller 1) selects a non-scanned IP address from all possible IP addresses corresponding to the selected subnet identifier.
At block 1106, the agent device 120 (automatically or under the control of the remote controller 1) selects an agent device interface that has not been tried yet with the selected IP address. Preferably, the selection can be performed so as to minimize the distance between the selected IP address and the identifier of the subnet corresponding to the interface.
At block 1107, the agent device 120 (automatically or under the control of the remote controller 1) tries to contact the selected IP address through the selected interface. If required, the subnet conflict avoidance mechanism will be invoked, as detailed hereinafter.
At block 1108, the remote controller 1 (or agent device 120) checks if a node has been reached.
In the positive case, at block 1109, the remote controller 1 adds the IP and MAC addresses of the reached node in the first section 900 of database 14 in the “discovered nodes” list. Advantageously, the subnet and interface identifiers are also recorded. In fact, it can happen that a same IP address is reached through different interfaces, e.g. because different nodes connected to different interfaces are configured with an identical, initial default IP address.
In any case, at block 1110, the remote controller 1 (or agent device 120) checks if there are other non-tried interfaces of agent device 120 for the selected IP address. In the positive case, the procedure continues at block 1106. In the negative case, at block 1111, the remote controller 1 (or agent device 120) checks if there are other non-scanned IP addresses for the selected subnet. In the positive case, the procedure continues at block 1105. In the negative case, at block 1112, the remote controller 1 (or agent device 120) checks if there are other non-scanned subnets. In the positive case, the procedure continues at block 1104. In the negative case, the procedure ends (that is, all IP addresses of all subnets have been tried through all interfaces of the agent device 120).
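The scan of FIG. 11 can be sketched, again for illustration only, as three nested loops; the agent helpers interface_subnets(), addresses_in(), interfaces() and try_contact(), and the default_subnets argument, are assumptions of the sketch.

```python
def scan_discovery(db, agent, default_subnets):
    subnet_ids = list(agent.interface_subnets()) + list(default_subnets)   # blocks 1101-1103
    for subnet in subnet_ids:                                              # block 1104
        for ip in agent.addresses_in(subnet):                              # block 1105
            for iface in agent.interfaces():                               # block 1106 (closest subnet preferred)
                node = agent.try_contact(ip, iface)                        # block 1107, with subnet conflict avoidance
                if node is not None:                                       # block 1108
                    # block 1109: record IP and MAC together with the subnet and interface identifiers
                    db["nodes_900"]["discovered"].append((ip, node.mac, subnet, iface.id))
```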
Advantageously, the scan-based discovery procedure of FIG. 11 can be made more efficient if, before execution of the scan, at least one technique is used in order to check if it is possible to obtain couples of IP/MAC addresses for at least part of nodes 110 of LAN 100.
Examples of such techniques are:
use of broadcast ping according to the ping utility well known in the art (as for example defined in RFC 792);
adopting IP sniffing techniques, or other passive scan techniques known in the art, in order to monitor the traffic of the LAN and to discover the existence of nodes;
well-known device discovery protocols, such as UPnP (Universal Plug and Play, as for example described at the Internet website www.upnp.org), Bonjour (as for example described at the website http://developer.apple.com/opensource/), Zero-Configuration Networking (as, for example, defined by RFC 3927), or similar, that would allow the agent device 120 to receive specific messages sent by auto-declaring nodes themselves supporting such protocols.
The use of such techniques advantageously allows to limit the scan procedure to IP addresses not retrieved through any of such techniques (the ones retrieved can be discovered by using the discovery procedure ofFIG. 8) and to get to know about any IP address optionally included in theLAN100 but not included in the subnet-ID-list generated at blocks1101-1103.
An interesting feature of the invention is that the range of IP addresses theagent device120 can access and detect can be configured in order to increase security level. This way the network administrator can decide which subset ofnodes110 is visible by theagent device120 and, as a consequence, will be manageable through theremote controller1. The address range can be specified in different well-known techniques, e.g. using whitelists or blacklist.
It is further observed that even if in the embodiment ofFIG. 11 IP addresses are scanned considering—for each subnet identifier—all possible IP addresses corresponding to said subnet identifier and—for each IP address—all possible agent device interface, the scan can be carried out considering a different scanning sequence. For example, IP addresses can be scanned considering—for each agent device interface—the various subnet identifiers from the subnet-ID-list and—for each subnet identifier—all possible IP addresses corresponding to said subnet identifier.
In a LAN 100 where multiple generic-vendor nodes are deployed with their default configuration, it is very likely that two or more nodes are initially associated with the same IP address, though having different MAC addresses. This leads to IP conflicts, so that none of those nodes can be properly reached via IP by the agent device 120 according to standard networking techniques.

For this reason, in a preferred embodiment, the invention provides a mechanism that allows the agent device 120 to exclude all nodes with the same IP address except the one having a specified MAC address, and to contact it.

According to this mechanism, after a tunnel connection with the remote controller 1 is established by the agent device 120, any time the remote controller 1 needs to connect to a specific node of the LAN by using a specific IP address and MAC address, an IP conflict avoidance procedure is advantageously executed by the agent device 120 (automatically or under the control of remote controller 1) for guaranteeing connection to the specified IP address and MAC address even in case the specified IP address is associated with multiple MAC addresses.

FIG. 12 shows the IP conflict avoidance procedure according to an embodiment of the invention, wherein the agent device 120 supports the ARP protocol, including an ARP table.

According to the ARP protocol, the ARP table of the agent device will contain, for each agent device interface, entries represented by pairs of IP/MAC addresses of nodes 110 of the LAN 100. These entries are updated each time the agent device sends ARP requests into the LAN 100. According to the ARP protocol, the ARP table can contain only one entry for each IP address. Therefore, if more than one node replies to an ARP request, the ARP table is updated with the IP/MAC address pair corresponding to only one of the replying nodes (for example, the last node answering the ARP request).
At block 1201 the agent device 120, needing to contact a specific node identified by a specified IP address and a specified MAC address, sends (automatically or under the control of remote controller 1) an ARP request for translating the specified IP address.
After sending the ARP request, at block 1202 the agent device 120 (automatically or under the control of remote controller 1) looks at its ARP table.
At block 1203, the agent device 120 (automatically or under the control of remote controller 1) checks if the ARP table includes the specified IP address.

In the positive case, at block 1204 the agent device 120 (automatically or under the control of remote controller 1) checks if the retrieved MAC address, i.e. the one associated in the ARP table with the specified IP address, is equal to the specified MAC address, i.e. the one that should be contacted.

In the positive case, at block 1205 the agent device (automatically or under the control of remote controller 1) tries to contact the node by using the specified IP address.

There is a variety of ways, well known in the art, that the agent device 120 can use to connect to a specific device, such as sending a ping request or creating a TCP (Transmission Control Protocol) socket to the specified IP address.
At block 1206 the agent device (automatically or under the control of remote controller 1) checks if the node has been reached. In the positive case, the procedure ends. In the negative case, at block 1211 a specific error is returned and the procedure ends.

In the negative case of block 1203 (that is, the ARP table does not include the specified IP address), a specific error is returned and the procedure ends.

In the negative case of block 1204 (that is, the retrieved MAC address is not equal to the specified MAC address), at block 1208 the agent device 120 (automatically or under the control of remote controller 1) records the interface identifier through which it received the response to the ARP request.
At block 1209 the agent device 120 deletes from the ARP table, for the recorded interface identifier, the entry that contains the specified IP address and the retrieved MAC address.

At block 1210 the agent device 120 adds into the ARP table, for the recorded interface identifier, an ARP entry containing the specified IP address and the specified MAC address.

From block 1210 the procedure continues at block 1205.

The steps at blocks 1208 to 1210 allow the agent device to contact the node identified by the specified IP address and the specified MAC address, avoiding interference with other conflicting nodes having the same IP address but a different MAC address.
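Purely as an illustration, the following Python sketch shows one way the FIG. 12 flow could be realized on an agent device running Linux, where the ARP table can be manipulated with the ip neigh command; the reachable helper (e.g. a ping or TCP-connect check) is an assumption introduced here and is not part of the disclosed embodiments.

```python
import subprocess

def contact_with_conflict_avoidance(specified_ip, specified_mac, reachable):
    # Block 1201: provoke address resolution so that the ARP table is refreshed.
    subprocess.run(["ping", "-c", "1", "-W", "1", specified_ip], capture_output=True)
    # Blocks 1202-1203: read the ARP table and look for the specified IP address.
    arp = subprocess.run(["ip", "neigh", "show", specified_ip],
                         capture_output=True, text=True).stdout.split()
    if not arp:
        raise RuntimeError("specified IP address not in ARP table")  # negative case of block 1203
    iface = arp[arp.index("dev") + 1]                 # interface that resolved the address
    retrieved_mac = arp[arp.index("lladdr") + 1] if "lladdr" in arp else None
    # Block 1204: compare the retrieved MAC address with the specified one.
    if retrieved_mac and retrieved_mac.lower() != specified_mac.lower():
        # Blocks 1208-1210: replace the conflicting entry, for the recorded interface,
        # with an entry binding the specified IP address to the specified MAC address.
        subprocess.run(["ip", "neigh", "replace", specified_ip,
                        "lladdr", specified_mac, "dev", iface], check=True)
    # Blocks 1205-1206: try to contact the node using the specified IP address.
    if not reachable(specified_ip):
        raise RuntimeError("node not reached")        # block 1211: specific error
```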
Periodic use of the IP conflict avoidance procedure, also after the initial setup of LAN 100, advantageously allows the remote controller 1 to raise a warning to the network administrator any time there is an IP conflict, e.g. generated by the connection of a new conflicting node to the LAN 100 at a moment subsequent to the initial setup.

According to a preferred embodiment (not shown), any time the IP conflict avoidance procedure avoids a conflict through execution of steps 1208 to 1210, the IP and MAC addresses of the conflicting nodes are added into the list of "discovered nodes" included in the first section 900 of database 14 of remote controller 1 (as explained, for example, at blocks 803 and 806 of FIG. 8).

This allows the remote controller 1 to have knowledge of the IP-conflicting nodes of the LAN 100 and to execute a mechanism for solving the IP conflicts and guaranteeing correct operation of standard networking protocols.

The mechanism for solving the IP conflicts can be executed automatically by the remote controller 1 or under the control of the network administrator.
According to an embodiment, the conflict resolution mechanism can collect all MAC addresses of the conflicting nodes and assign to each one a specific IP address, in compliance with an addressing plan, using the IP conflict avoidance procedure as described above.
For example, if nodes AA:AA:AA:AA:AA:AA, BB:BB:BB:BB:BB:BB, CC:CC:CC:CC:CC:CC have the same IP address 192.168.0.1, the conflict resolution mechanism would:
contact IP address 192.168.0.1 applying the conflict avoidance procedure for forcing MAC address AA:AA:AA:AA:AA:AA,
change IP address of device AA:AA:AA:AA:AA:AA from 192.168.0.1 to 192.168.0.101,
contact IP address 192.168.0.1 applying conflict avoidance for forcing MAC address BB:BB:BB:BB:BB:BB,
change IP address of device BB:BB:BB:BB:BB:BB from 192.168.0.1 to 192.168.0.102,
contact IP address 192.168.0.1 applying conflict avoidance for forcing MAC address CC:CC:CC:CC:CC:CC,

change IP address of device CC:CC:CC:CC:CC:CC from 192.168.0.1 to 192.168.0.103.
After this sequence, the three nodes are no longer in conflict.
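As a minimal sketch of this resolution loop (the contact_forcing_mac and set_device_ip helpers are hypothetical placeholders for, respectively, the IP conflict avoidance procedure of FIG. 12 and the vendor-specific configuration procedure):

```python
def resolve_ip_conflicts(conflicting_macs, shared_ip, free_ips,
                         contact_forcing_mac, set_device_ip):
    # Assign to each conflicting MAC address its own IP address from an addressing plan.
    for mac, new_ip in zip(conflicting_macs, free_ips):
        contact_forcing_mac(shared_ip, mac)   # reach this node only (FIG. 12 procedure)
        set_device_ip(mac, new_ip)            # reconfigure it with a non-conflicting address

# For the example above:
# resolve_ip_conflicts(["AA:AA:AA:AA:AA:AA", "BB:BB:BB:BB:BB:BB", "CC:CC:CC:CC:CC:CC"],
#                      "192.168.0.1",
#                      ["192.168.0.101", "192.168.0.102", "192.168.0.103"],
#                      contact_forcing_mac, set_device_ip)
```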
When the agent device 120 has more than one interface, the conflict resolution mechanism can use a subnet conflict avoidance mechanism (described hereinafter) to contact each node, and will assign IP addresses that are consistent with the identifiers of the subnets covered by the agent device interfaces.

It is observed that, when a node 110 of LAN 100 has an IP address that is not included in any subnet of the interfaces of the agent device 120, or that is the same as the IP address of the agent device interface, that node would not be reachable by the agent device 120 according to standard networking techniques. For this reason, one aspect of the invention is the introduction of a subnet conflict avoidance procedure that forces the agent device 120 to contact a specific IP address through a specific interface.

FIG. 13 shows an embodiment of the subnet conflict avoidance procedure for contacting a specified IP address through a specified interface.
At block 1300 the agent device 120 (automatically or under the control of remote controller 1) considers the specified IP address, i.e. the one that it has to contact, and the identifier of the subnet corresponding to the specified interface, i.e. the interface through which it has to contact the specified IP address.

At block 1301 the agent device 120 (automatically or under the control of remote controller 1) checks if the specified IP address is covered by the considered subnet identifier and if the IP address of the specified interface is different from the specified IP address.

In the negative case, at block 1302 the agent device 120 (automatically or under the control of remote controller 1) compares the specified IP address with the identifiers of all the subnets of all the interfaces of the agent device 120.

At block 1303 the agent device 120 (automatically or under the control of remote controller 1) checks if there is any subnet including the specified IP address.

In the positive case, at block 1304, the agent device 120 (automatically or under the control of remote controller 1) removes the subnet from the interface with which it is associated, or turns such interface off. This step is useful to avoid having two agent device interfaces associated with the same subnet.

Anyhow, at block 1305 the agent device 120 (automatically or under the control of remote controller 1) assigns the subnet corresponding to the specified IP address to the specified interface and assigns to the specified interface an IP address of such subnet, different from the specified IP address. This assignment can be done according to techniques well known in the art.

At block 1306, the agent device 120 (automatically or under the control of remote controller 1) tries to contact the specified IP address through the specified interface.

When a MAC address is also specified, in case of IP conflict, the IP conflict avoidance procedure can be invoked at block 1306 in order to reach the node having both the specified IP address and the specified MAC address.

At block 1307 the agent device 120 (automatically or under the control of remote controller 1) checks if the status of any interface (that is, on/off state, IP address, subnet) has been modified.
In the negative case the procedure ends.
In the positive case, at block 1308 the agent device 120 (automatically or under the control of remote controller 1) restores the initial status of the interfaces and the procedure ends.
It is observed that when the specified IP address corresponds to the IP address of the specified interface, the actions at block 1305 will temporarily assign a different IP address to the specified interface. In this case, disconnections between the remote controller 1 and agent device 120 might occur, when the specified interface is the same as the one used for the remote controller 1-agent device 120 connection. If this happens, the agent device 120 will have to re-establish the connection with the remote controller 1. This can be done, for example, in two ways: 1) the agent device 120 keeps the newly assigned IP address for the specified interface and re-establishes a connection; 2) the agent device 120 continuously switches between the newly assigned IP address and the specified IP address, for respectively contacting the node and the remote controller 1.
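By way of illustration only, a simplified Python sketch of the FIG. 13 flow follows, again assuming a Linux agent device whose interface addresses can be manipulated with the ip addr command; the contact helper, the in-memory iface_networks map and the /24 temporary address are assumptions made here for clarity (a real implementation would pick the temporary address according to the actual subnet and addressing plan, and would also handle the reconnection cases discussed above).

```python
import ipaddress
import subprocess

def contact_via_interface(specified_ip, specified_iface, iface_networks, contact):
    # iface_networks: dict mapping interface name -> ipaddress.IPv4Interface (or None).
    target = ipaddress.IPv4Address(specified_ip)
    current = iface_networks.get(specified_iface)
    modified = {}
    # Block 1301: is the target covered by the specified interface's subnet, with an
    # interface IP address different from the target?
    if not (current and target in current.network and current.ip != target):
        # Blocks 1302-1304: remove the target's subnet from any other interface covering it.
        for name, net in iface_networks.items():
            if name != specified_iface and net and target in net.network:
                modified[name] = net
                subprocess.run(["ip", "addr", "del", str(net), "dev", name], check=True)
        # Block 1305: assign the target's subnet to the specified interface, with an IP
        # address different from the target (here simply target+1, an arbitrary choice).
        temp = ipaddress.IPv4Interface(f"{target + 1}/24")
        modified[specified_iface] = current
        subprocess.run(["ip", "addr", "replace", str(temp), "dev", specified_iface], check=True)
    try:
        # Block 1306: try to contact the specified IP address through the specified interface.
        return contact(specified_ip, specified_iface)
    finally:
        # Blocks 1307-1308: restore the initial status of any modified interface.
        for name, net in modified.items():
            if net is not None:
                subprocess.run(["ip", "addr", "replace", str(net), "dev", name], check=True)
```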
As stated above, the remote controller 1 implements a plurality of managing procedures (e.g. stored in section 930 of database 14) that allow nodes of LAN 100 from different vendors and/or manufacturers to be accessed, configured, controlled and monitored. An advantageous aspect of this invention is the ability of the remote controller 1 to identify any node 110 of LAN 100 discovered through intermediation of agent device 120 and to associate with it a specific set of managing procedures, suitable to manage the specific node.

Accordingly, after executing a discovery procedure according to any of the embodiments described with reference to FIGS. 8 to 11, optionally using the IP conflict avoidance procedure and/or the subnet conflict avoidance procedure, the remote controller 1 identifies, through intermediation of the agent device 120, the discovered nodes.

FIG. 14 shows an identification procedure according to a first embodiment of the invention, based on the idea of having an a-priori knowledge, for each node 110 of LAN 100 (e.g. identified by its MAC address), of a corresponding connection procedure, enabling the remote controller 1 to properly connect, through intermediation of agent device 120, to such node 110.

This, for example, can be implemented by using the structure of database 14 of FIG. 4A, wherein there is a-priori configuration of the content of second section 910 (containing a list of node identifiers (e.g. MAC addresses) associated with device characterizing parameters such as model, vendor, manufacturer, software version, hardware version, firmware version and/or serial number) and of fourth section 930 (comprising a plurality of managing procedures, each associated with specific device characterizing parameters such as model, vendor, manufacturer, software version, hardware version, firmware version and/or serial number).
At block 1401 of FIG. 14, the remote controller 1 selects a node 110 from the list of nodes contained in first section 900 of database 14 (as filled by the discovery procedure previously executed). Preferably, only the devices classified as "available" and, optionally, "discovered" are taken into account.

Then, at block 1402 the remote controller 1 retrieves the specific connection procedure required to access the selected node and to be authenticated by it, by using the node identifier (e.g. MAC address) and merging the information stored in second and fourth sections 910, 930 of database 14.

At block 1403, the remote controller 1 executes the retrieved connection procedure to be authenticated by the node.

Once authenticated, at block 1404, the remote controller 1 retrieves device characterizing parameters from the node.

At block 1405 the remote controller 1 checks if the retrieved parameters correspond to the device characterizing parameters stored in second section 910 of database 14.

In the positive case the procedure ends. In the negative case, at block 1406 an error is raised and the procedure ends.
An example of this procedure is reported for the sake of clarity:
first section 900 of the database 14 contains node identifiers, among which the MAC address AA:BB:CC:DD:EE:FF of the node that must be identified and configured;

third section 920 of the database 14 contains configuration parameters for this node, e.g. WEP KEY 1234567890 and IP ADDRESS 192.168.1.1; in order to be applied to the device, the correct vendor-model specific procedure must be used, thus the vendor, model and firmware version must be identified;
second section 910 contains a list of all supported MAC addresses with the related vendor, model and firmware information (information that can be available thanks to specific agreements with the vendors):
MAC AA:AA:AA:AA:AA:AA, associated to vendor US Robotics, model usr808054, firmware 1.0.2;
MAC AA:AA:AA:AA:AA:EE, associated to vendor US Robotics, model usr808054, firmware 4.0.1;
MAC AA:BB:CC:DD:EE:FF, associated to vendor Netgear, model WG103, firmware 3.1.
fourth section 930 contains a list of managing procedures to connect, configure and monitor specific models of nodes (this list can be expanded at runtime without generating service interruptions, thus allowing virtually every firmware version of every model of every vendor/manufacturer to be supported):
ConnectionProcedureA, associated to vendor USRobotics, model usr808054, firmware 1.x and 2.x;
ConnectionProcedureB, associated to vendor USRobotics, model usr808054, firmware 4.x;
ConnectionProcedureC, associated to vendor Netgear, model WG101,WG102,WG103, firmware 3.1;
ConfigurationProcedureD, required to configure IP address of vendor USRobotics, model usr808054, firmware 4.x;
ConfigurationProcedureE, required to configure IP address of devices of vendor Netgear, model WG101,WG102,WG103, firmware 3.1;
ConfigurationProcedureF, required to configure WEP of devices of vendor Netgear, model WG101,WG102,WG103, firmware 3.1;
block 1401: the remote controller 1 selects from first section 900 the MAC identifier of the node that must be identified in terms of vendor/model, e.g. MAC address AA:BB:CC:DD:EE:FF;

block 1402: the remote controller 1 selects from second section 910 the information associated with AA:BB:CC:DD:EE:FF, i.e. vendor Netgear, model WG103, firmware 3.1;

block 1403: the remote controller 1 selects, from fourth section 930, ConnectionProcedureC, as it corresponds to vendor Netgear, model WG103, firmware 3.1, and executes it to connect to the device;

block 1404: the remote controller 1 requests device parameters from the device (this request is considered to be part of ConnectionProcedureC); if the retrieved parameters correspond to vendor Netgear, model WG103, firmware 3.1, then the node is correctly identified (and its parameters, such as IP address and WEP key, can be configured with, respectively, ConfigurationProcedureE and ConfigurationProcedureF).
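A minimal Python sketch of this a-priori identification flow follows; the dictionary-based layout of sections 910 and 930, and the session object with its get_device_parameters method, are assumptions introduced here for illustration only.

```python
def identify_node_a_priori(mac, section_910, section_930, agent):
    # Block 1402: retrieve the expected parameters and the matching connection procedure.
    expected = section_910[mac]       # e.g. {"vendor": "Netgear", "model": "WG103", "firmware": "3.1"}
    connect = section_930[(expected["vendor"], expected["model"], expected["firmware"])]
    # Block 1403: execute the connection procedure, through the agent device, to authenticate.
    session = connect(agent, mac)
    # Block 1404: retrieve the device characterizing parameters from the node itself.
    retrieved = session.get_device_parameters()
    # Blocks 1405-1406: check that they match the a-priori knowledge.
    if retrieved != expected:
        raise RuntimeError("retrieved parameters do not match database section 910")
    return session
```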
FIG. 15 shows an identification procedure according to a second embodiment of the invention. This procedure is based on a structure of database 14 according to the embodiment of FIG. 4B and on an a-posteriori knowledge of the content of second section 910 of database 14 (containing a list of node identifiers (e.g. MAC addresses) associated with device characterizing parameters such as model, vendor, manufacturer, software version, hardware version, firmware version, serial number and similar). Indeed, second section 910 is filled and updated during the execution of this identification procedure.

At block 1501 the remote controller 1 selects a node 110 from the list of nodes contained in first section 900 of database 14 (as filled by the discovery procedure previously executed). Preferably, only the devices classified as "available" and, optionally, "discovered" are taken into account.

Then, at block 1502 the remote controller 1 checks if there is a specific connection procedure for the selected node, by using the node identifier (e.g. MAC address) and merging the information stored in second and fourth sections 910, 930 of database 14.

In the positive case, blocks 1503 to 1506 are executed, which correspond to blocks 1402 to 1405 of FIG. 14.

In the positive case of block 1506, the procedure ends.
In the negative case of block 1506, at block 1507 the remote controller removes from the second section 910 the association between the MAC address and the corresponding device parameters, and the procedure continues at block 1502. This feature is advantageous to reveal different firmware versions of a specific node model.
In the negative case of block 1502, the remote controller 1 selects from fifth section 911 a sequence of connection procedures that is supposed to include a correct procedure for the specific, though not yet characterized, node. This choice is done at block 1508, according to a predetermined selection criterion. Advantageously, the selection criterion aims at minimizing the number of connection procedures to be tested before successfully connecting to the node 110. For example, the selection criterion for a connection procedure can be the average time for the connection procedure to be successful, the difference between the MAC address of the selected node and the MAC addresses of nodes which have already been successfully associated with the connection procedure, or being part of a list of procedures associated with a predetermined vendor that has been specified by the network administrator as the vendor for the node(s) 110. Advantageously, the list of sequences stored in section 911 includes a default sequence that contains all the connection procedures available in the database, section 930, even though this solution is not optimized.
At block 1509, the remote controller 1 tries the connection procedures of the selected sequence.

At block 1510 the remote controller 1 checks if at least one connection procedure is successful.

In the positive case, the node 110 is considered characterized and at block 1512 its identifier (e.g. the MAC address) is inserted into the second section 910 of database 14, associated with the device characterizing parameters retrieved from the node itself during the connection procedure.

In the negative case of block 1510, at block 1511 the remote controller 1 records a warning containing all the details of the unsupported device.
An example of this second embodiment is reported for the sake of clarity:
first section 900 of the database 14 contains the node identifiers, among which the MAC address AA:BB:CC:DD:EE:FF of the node that must be identified and configured;

third section 920 of the database 14 contains the configuration parameters, e.g. WEP KEY 1234567890 and IP ADDRESS 192.168.1.1;

second section 910 of database 14 is empty, as, for each MAC address specified by the user, no device parameters (vendor, model, firmware version) are available;

fourth section 930 of database 14 contains procedures to connect, configure and monitor specific models of access points:
ConnectionProcedureA, associated to vendor USRobotics, model usr808054, firmware 1.x and 2.x;
ConnectionProcedureB, associated to vendor USRobotics, model usr808054, firmware 4.x;
ConnectionProcedureC, associated to vendor Netgear, model WG101,WG102,WG103, firmware 3.1;
ConnectionProcedureD, associated to vendor Netgear, model WG001,WG002,SK999,firmware 2;
ConfigurationProcedureE, required to configure IP address of vendor USRobotics, model usr808054, firmware 4.x;
ConfigurationProcedureF, required to configure IP address of devices of vendor Netgear, model WG101,WG102,WG103, firmware 3.1;
ConfigurationProcedureG, required to configure WEP of devices of vendor Netgear, model WG101,WG102,WG103, firmware 3.1;
section 911 of database 14 contains different sequences of connection procedures:
Sequence1: ConnectionProcedureA, ConnectionProcedureB;
Sequence2: ConnectionProcedureD, ConnectionProcedureC;
Sequence3: ConnectionProcedureA, ConnectionProcedureB, ConnectionProcedureC, containing all connection procedures;
Each sequence is also characterized by metric parameters, e.g. the number of times it has been executed, the number of connections established and refused, the average number of connection procedures tried before success for each MAC address range, and similar. As the latter is a measure of the ability of the sequence to connect to a device whose MAC address lies in a specific range, and as each MAC address range is associated with a specific vendor (as known by those skilled in the art), this metric can measure the ability of a sequence to connect to a device of a specific vendor but unknown model.
block 1501: the remote controller 1 selects from section 900 the MAC identifier of the node that must be identified in terms of vendor/model, e.g. AA:BB:CC:DD:EE:FF;

block 1508: the remote controller 1 selects a sequence from section 911, in order to minimize some metric. If this metric is the average number of connection procedures tried before success, as detailed above, Sequence2 would be chosen: it contains procedures that succeed with MAC addresses that lie in the Netgear range, as MAC AA:BB:CC:DD:EE:FF does;
block 1509: the remote controller 1 selects, from section 930, ConnectionProcedureD, as it corresponds to the first connection procedure of the chosen sequence (Sequence2); ConnectionProcedureD fails, as it relates to a model that is different from the device whose MAC is AA:BB:CC:DD:EE:FF; the remote controller 1 then selects, from section 930, the second procedure of Sequence2, which is ConnectionProcedureC, and executes it; as the device vendor/model/version correspond to those supported by ConnectionProcedureC, the procedure succeeds and the characterization of device AA:BB:CC:DD:EE:FF is retrieved: vendor Netgear, model WG103, firmware 3.1;

block 1512: as ConnectionProcedureC succeeded, the remote controller 1 can associate, in section 910 of the database, vendor Netgear, model WG103, firmware 3.1 with the device MAC AA:BB:CC:DD:EE:FF;
after this procedure, the remote controller 1 will be able to identify the AA:BB:CC:DD:EE:FF device following the a-priori steps 1503 to 1505.
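The following Python sketch illustrates this a-posteriori flow, building on the a-priori sketch given earlier; the select_sequence helper (encapsulating the block-1508 metric) and the ConnectionError convention for a failed procedure are assumptions introduced here for illustration only.

```python
def identify_node_a_posteriori(mac, section_910, section_911, section_930, agent,
                               select_sequence):
    # Block 1502: if the MAC is already characterized, fall back to the a-priori flow.
    if mac in section_910:
        return identify_node_a_priori(mac, section_910, section_930, agent)
    # Block 1508: choose a sequence of connection procedures according to the metric.
    sequence = select_sequence(section_911, mac)
    # Blocks 1509-1510: try the procedures of the sequence until one succeeds.
    for connect in sequence:
        try:
            session = connect(agent, mac)
        except ConnectionError:
            continue                      # this procedure failed, try the next one
        # Block 1512: store the retrieved characterization in section 910.
        section_910[mac] = session.get_device_parameters()
        return session
    # Block 1511: no procedure succeeded, record a warning for the unsupported device.
    raise RuntimeError("unsupported device: " + mac)
```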
Advantageously, the identification procedure of FIG. 15 can be made more efficient if, before execution of the procedure, at least one of the techniques mentioned above with reference to the scan discovery procedure (e.g., UPnP, Bonjour, Zero-Configuration Networking) is used in order to check whether pairs of IP/MAC addresses, together with some device characterizing parameters (e.g. vendor), can be obtained for at least part of the nodes 110 of LAN 100.

This would allow the remote controller 1 (or agent device 120) to have a-priori knowledge of some device parameters that can be used at block 1508 in order to optimize the selection of an appropriate sequence of connection procedures.
After executing a discovery procedure according to any of the embodiments described with reference to FIGS. 8 to 11, and an identification procedure according to any of the embodiments described with reference to FIGS. 14 and 15, the remote controller 1 is ready to be used by the network administrator for executing managing operations on nodes 110. These operations can be applied by the network administrator to single nodes 110, to a subset of the nodes 110 of LAN 100, or to all nodes 110 of the LAN 100. In any case, the network administrator can execute these operations through the remote controller user interface 18. When the network administrator wishes to manage a node 110, the following procedure can be executed:
network administrator starts the remote controller user interface18 (e.g., web site, tablet application, etc.);
network administrator provides authentication parameters;
network administrator selects the node he/she wishes to manage, among the "available nodes" list (box 901 of first section 900 of database 14, shown in FIG. 5A) provided by the remote controller 1;
network administrator changes one or more configuration parameters of the selected node (e.g. the IP address and/or the SSID);
remote controller 1 saves the new configuration parameters into the database 14 (third section 920 shown in FIG. 4), in association with the selected node;
network administrator confirms the new configuration;
by using the information contained in sections 910 and 930 of database 14, remote controller 1 selects the specific vendor/model configuration procedure to be used to configure the selected node;

remote controller 1 executes the specific vendor/model configuration procedure to configure the node and, when the execution is completed, returns a confirmation to the network administrator.
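By way of illustration, the following Python sketch outlines how the remote controller 1 could apply such a configuration change; the dictionary-based layout of sections 910, 920 and 930, and the keying of configuration procedures by parameter name, are assumptions introduced here and not part of the disclosed embodiments.

```python
def apply_configuration(node_mac, new_params, section_910, section_920, section_930, agent):
    # Save the new configuration parameters in section 920, in association with the node.
    section_920.setdefault(node_mac, {}).update(new_params)
    # Look up the device characterization (vendor, model, firmware) recorded for the node.
    device = section_910[node_mac]
    # Select and execute the vendor/model-specific configuration procedure for each parameter
    # (e.g. ConfigurationProcedureE for the IP address, ConfigurationProcedureF for the WEP key).
    for param, value in new_params.items():
        configure = section_930[(device["vendor"], device["model"], device["firmware"], param)]
        configure(agent, node_mac, value)        # executed through the agent device 120
    return "configuration applied"
```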
A similar sequence can be executed in case of monitoring operations.
As is clear from the above description, the invention in its various aspects allows a plurality of advantages to be achieved.

A crucial innovation of the proposed invention is the fact that it is not required to deploy any software onto the nodes 110 to be managed, or to assume any specific procedure or behavior, in order to make them manageable by the remote controller 1. This is possible thanks to the intermediation of the agent device 120, which initiates contact with the remote controller 1 and establishes a tunnel connection with it, and to the discovery and identification procedures that enable the remote controller to discover which nodes 110 are present in LAN 100 and to identify the device characterizing parameters (such as manufacturer/vendor, type, model, hardware version, firmware version, software version, serial number, MAC address and similar) of such nodes, thereby enabling the remote controller 1 to manage each device by using specific known vendor/manufacturer procedures and protocols.
Available interfaces known in the art, such as HTTP, HTTPS, CLI, SSH and configuration files, which are supported and implemented differently by different vendors/manufacturers, can thus be used by the remote controller 1 in order to connect, control, configure and monitor the nodes, even when the nodes are not manufactured to be centrally managed, without the need of modifying the software/firmware of such nodes, or requiring any specific behavior of the nodes other than their standard or proprietary exposed interfaces.
Any node coming from a generic vendor/manufacturer, including a low-cost consumer-grade network device, can be managed by the remote management system of the invention.
It is further observed that a challenging aspect of using nodes coming from a generic vendor/manufacturer is that they typically come from the factory with a common built-in IP address. This means that any time a new node is inserted into the LAN, conflicts of IP addresses are very likely to arise.
Thanks to the IP conflict avoidance procedure and the subnet conflict avoidance procedure of the invention, IP address conflicts can be centrally managed by the remote controller 1, without the need of deploying any specific software onto the nodes 110 to be managed or assuming any specific procedure or behavior for the nodes.

Central management of IP address conflicts also guarantees that specific policies established by the network administrator are met.

This is advantageous with respect to known solutions for IP conflicts, wherein the nodes need to be configured to execute auto-assignment algorithms (such as, for example, the MAC-to-IP association algorithm described by U.S. Pat. No. 7,852,819) and the auto-assigned IP addresses might not be compatible with network policies established by the network administrator.