US6353898B1 - Resource management in a clustered computer system - Google Patents

Resource management in a clustered computer system

Info

Publication number
US6353898B1
Authority
US
United States
Prior art keywords: node, nodes, buff, cluster, storage device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
US09/574,094
Inventor
Robert A Wipfel
David Murphy
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
RPX Corp
Original Assignee
Novell Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Novell Inc
Priority to US09/574,094
Application granted
Publication of US6353898B1
Assigned to CPTN HOLDINGS LLC. Assignment of assignors interest (see document for details). Assignors: NOVELL, INC.
Assigned to NOVELL INTELLECTUAL PROPERTY HOLDINGS INC. Assignment of assignors interest (see document for details). Assignors: CPTN HOLDINGS LLC
Assigned to RPX CORPORATION. Assignment of assignors interest (see document for details). Assignors: Novell Intellectual Property Holdings, Inc.
Assigned to JPMORGAN CHASE BANK, N.A., as collateral agent. Security agreement. Assignors: RPX CLEARINGHOUSE LLC; RPX CORPORATION
Assigned to RPX CORPORATION and RPX CLEARINGHOUSE LLC. Release (Reel 038041 / Frame 0001). Assignors: JPMORGAN CHASE BANK, N.A.
Anticipated expiration


Abstract

Methods, systems, and devices are provided for managing resources in a computing cluster. The managed resources include cluster nodes themselves, as well as sharable resources such as memory buffers and bandwidth credits that may be used by one or more nodes. Resource management includes detecting failures and possible failures by node software, node hardware, interconnects, and system area network switches and taking steps to compensate for failures and prevent problems such as uncoordinated access to a shared disk. Resource management also includes reallocating sharable resources in response to node failure, demands by application programs, or other events. Specific examples provided include failure detection by remote memory probes, emergency communication through a shared disk, and sharable resource allocation with minimal locking.

Description

RELATED APPLICATIONS
This application is a division of U.S. patent application Ser. No. 09/024,011 filed Feb. 14, 1998 now U.S. Pat. No. 6,151,688.
This application claims the benefit of commonly owned copending U.S. patent application Ser. No. 60/038,251 filed Feb. 21, 1997.
FIELD OF THE INVENTION
The present invention relates to resource management in a system of inter-connected computers, and more particularly to the monitoring and allocation of cluster nodes, cluster memory, and other cluster computing resources.
TECHNICAL BACKGROUND OF THE INVENTION
Those portions of U.S. patent application Ser. No. 60/038,251 filed Feb. 21, 1997 which describe previously known computer system components and methods are incorporated herein by this reference. These incorporated portions relate, without limitation, to specific hardware such as processors, communication interfaces, and storage devices; specific software such as directory service providers and the NetWare operating system (NETWARE is a mark of Novell, Inc.); specific methods such as TCP/IP protocols; specific tools such as the C and C++ programming languages; and specific architectures such as NORMA, NUMA, and ccNUMA. In the event of a conflict, the text herein which is not incorporated by reference shall govern. Portions of the '251 application which are claimed in this or any other Novell patent application are not incorporated into this technical background.
Clusters
A cluster is a group of interconnected computers which can present a unified system image. The computers in a cluster, which are known as the “cluster nodes”, typically share a disk, a disk array, or another nonvolatile memory. Computers which are merely networked, such as computers on the Internet or on a local area network, are not a cluster because they necessarily appear to users as a collection of connected computers rather than a single computing system. “Users” may include both human users and application programs. Unless expressly indicated otherwise, “programs” includes programs, tasks, threads, processes, routines, and other interpreted or compiled software.
Although every node in a cluster might be the same type of computer, a major advantage of clusters is their support for heterogeneous nodes. As an unusual but nonetheless possible example, one could form a cluster by interconnecting a graphics workstation, a diskless computer, a laptop, a symmetric multiprocessor, a new server, and an older version of the server. Advantages of heterogeneity are discussed below.
To qualify as a cluster, the interconnected computers must present a unified interface. That is, it must be possible to run an application program on the cluster without requiring the application program to distribute itself between the nodes. This is accomplished in part by providing cluster system software which manages use of the nodes by application programs.
In addition, the cluster typically provides rapid communication between nodes. Communication over a local area network is sometimes used, but faster interconnections are much preferred. Compared to a local area network, a cluster system area network has much lower latency and much higher bandwidth. In that respect, system area networks resemble a bus. But unlike a bus, a cluster interconnection can be plugged into computers without adding signal lines to a backplane or motherboard.
Clustering Goals
Clusters may improve performance in several ways. For instance, clusters may improve computing system availability. “Availability” refers to the availability of the overall cluster for use by application programs, as opposed to the status of individual cluster nodes. Of course, one way to improve cluster availability is to improve the reliability of the individual nodes.
However, at some point it becomes cost-effective to use less reliable nodes and swap nodes out when they fail. A node failure should not interfere significantly with an application program unless every node fails; if it must degrade, then cluster performance should degrade gracefully. Clusters should also be flexible with respect to node addition, so that applications benefit when a node is restored or a new node is added. Ideally, the application should run faster when nodes are added, and it should not halt when a node crashes or is removed for maintenance or upgrades.
Adaptation to changes in node presence provides benefits in the form of increased heterogeneity, improved scalability, and better access to upgrades. Heterogeneity allows special purpose computers such as digital signal processors, massively parallel processors, or graphics engines to be added to a cluster when their special abilities will most benefit a particular application, with the option of removing the special purpose node for later standalone use or use in another cluster. Heterogeneity also allows clusters to be formed using presently owned or leased computers, thereby increasing cluster availability by reducing cost and delay. Scalability allows cluster performance to be incrementally improved by adding new nodes as one's budget permits. The ability to add heterogeneous nodes also makes it possible to add improved hardware and software incrementally.
Clusters may also be flexible concerning the use of whatever nodes are present. For instance, some applications will benefit from special purpose nodes such as digital signal processors or graphics engines. Ideally, clusters support three types of application software: applications that take advantage of special purpose nodes, applications that view all nodes as more or less interchangeable but are nonetheless aware of individual nodes, and applications that view the cluster as a single unified system. “Cluster-aware” applications include distributed database programs that expect to run on a cluster rather than a single computer. Cluster-aware programs often influence the assignment of tasks to individual nodes, and typically control the integration of computational results from different nodes.
The following situations illustrate the importance of availability and other cluster performance goals. The events described are either so frequent or so threatening (or both) that they should not be ignored when designing or implementing a cluster architecture.
Software Node Crash
Software errors, omissions, or incompatibilities may bring to a halt any useful processing on a node. The goal of maintaining cluster availability dictates rapid detection of the crash and rapid compensation by either restoring the node or proceeding without it. Detection and compensation may be performed by cluster system software or by a cluster-aware application. Debuggers may also be used by programmers to identify the source of certain problems. Sometimes a software problem is “fixed” by simply rebooting the node. At other times, it is necessary to install different software or change the node's software configuration before returning the node to the cluster. It will often be necessary to restart the interrupted task on the restored node or on another node, and to avoid sending further work to the node until the problem has been fixed.
Hardware Node Crash
Hardware errors or incompatibilities may also prevent useful processing on a node. Once again, availability dictates rapid detection of the crash and rapid compensation, but in this case compensation often means proceeding without the node.
In many clusters, working nodes send out a periodic “heartbeat” signal. Problems with a node are detected by noticing that regular heartbeats are no longer coming from the node. Although heartbeats are relatively easy to implement, they continually consume processing cycles and bandwidth. Moreover, the mere lack of a heartbeat signal does not indicate why the silent node failed; the problem could be caused by node hardware, node software, or even by an interconnect failure.
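The heartbeat scheme described above can be sketched as follows. This is an illustrative sketch only, not the patent's implementation; the class, the timeout constant, and all other names are assumptions. Note that the monitor can report *that* a node's heartbeats stopped, but not *why* — exactly the diagnostic gap the text identifies.

```python
import time

HEARTBEAT_TIMEOUT = 3.0  # assumed silence threshold before a node is suspected


class HeartbeatMonitor:
    """Tracks the last heartbeat seen from each node."""

    def __init__(self):
        self.last_seen = {}

    def record_heartbeat(self, node_id, now=None):
        """Called whenever a heartbeat message arrives from node_id."""
        self.last_seen[node_id] = time.monotonic() if now is None else now

    def suspected_failures(self, now=None):
        """Return nodes whose heartbeat has been missing too long.

        A missing heartbeat alone cannot distinguish a software crash,
        a hardware crash, or an interconnect/switch failure.
        """
        now = time.monotonic() if now is None else now
        return [n for n, t in self.last_seen.items()
                if now - t > HEARTBEAT_TIMEOUT]
```

A node that last checked in more than `HEARTBEAT_TIMEOUT` seconds ago is flagged as suspect; the caller must then decide how to compensate.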
Interconnect Failure
If the interconnection between a node and the rest of the cluster is unplugged or fails for some other reason, the node itself may continue running. If the node might still access a shared disk or other sharable resource, the cluster must block that access to prevent “split brain” problems (also known as “cluster partitioning” or “sundered network” problems). Unless access to the shared resource is coordinated, the disconnected node may destroy data placed on the resource by the rest of the cluster.
Accordingly, many clusters connect nodes both through a high-bandwidth low-latency system area network and through a cheaper and less powerful backup link such as a local area network or a set of RS-232 serial lines. The system area network is used for regular node communications; the backup link is used when the system area network interconnection fails. Unfortunately, adding a local area network that is rarely used reduces the cluster's cost-effectiveness. Moreover, serial line protocols used by different nodes are sometimes inconsistent with one another, making the backup link difficult to implement.
Sharable Resource Reallocation
Sharable resources may take different forms. For instance, shared memory may be divided into buffers which are allocated to different nodes as needed, with the unallocated buffers kept in a reserve “pool”. In some clusters, credits that can be redeemed for bandwidth, processing cycles, priority upgrades, or other resources are also allocated from a common pool.
Nodes typically have varying needs for sharable resources over time. In particular, when a node crashes or is intentionally cut off from the cluster to prevent split brain problems, the shared buffers, credits, and other resources that were allocated to the node are no longer needed; they should be put back in the pool or reallocated to working nodes. Many clusters do this by locking the pool, reallocating the resources, and then unlocking the pool. Locking the pool prevents all nodes except the allocation manager from accessing the allocation lists while they are being modified, thereby preserving the consistency of the lists. Locking is implemented using a mutex or semaphore. Unfortunately, locking reduces cluster performance because it may block processing by all nodes.
SUMMARY
In short, improvements to cluster resource management are needed. For instance, it would be an advance in the art to distinguish further between different causes of cluster node failure. It would also be an advance to provide a way to coordinate shared resource access when an interconnect fails without relying on a local area network or a serial link. In addition, it would be an advance to reallocate sharable resources without interrupting work on all nodes. Such improved systems and methods are disclosed and claimed herein.
BRIEF SUMMARY OF THE INVENTION
The present invention provides methods, systems, and devices for resource management in clustered computing systems. The invention aids rapid, detailed diagnosis of communication problems, thereby promoting rapid and correct compensation by the cluster when a communication failure occurs.
When a node or part of a system area network becomes inoperative, remote probing retrieves either a value identifying the problem or an indication that the remote memory is inaccessible; verifying inaccessibility also aids in problem diagnosis. In various embodiments the retrieved value may include a counter, a validation value, a status summary, an epoch which is incremented (or decremented) by each restart or each reboot, a root pointer that bootstraps higher level communication with other cluster nodes, and a message area that provides additional diagnostic information.
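As an illustration only, the retrieved value described above might be laid out as a structure such as the following; the field names, ordering, and validation constant are assumptions, not the patent's definitions.

```python
from dataclasses import dataclass

MAGIC = 0x51AB51AB  # assumed validation constant


@dataclass
class ProbeStruct:
    """Fields a remote probe might retrieve (names are illustrative)."""
    counter: int     # regularly updated while the probed device runs normally
    validation: int  # fixed value distinguishing a live structure from garbage
    epoch: int       # incremented (or decremented) on each restart or reboot
    status: int      # summary of the node's view of its own health
    root_ptr: int    # bootstraps higher-level communication with other nodes
    message: str     # free-form additional diagnostic information


def probe_is_valid(p):
    """A probe result without the validation value is treated as garbage."""
    return p.validation == MAGIC
```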
Remote memory probing allows the system to more effectively select between different compensating steps when an error condition occurs. One of the most potentially damaging problems is a “split brain,” which occurs when two or more nodes cannot communicate to coordinate access to shared storage. A significant risk then arises that the nodes will corrupt data in their shared storage area. In some embodiments, the invention uses an emergency message location on a shared disk to remove the failed node from the cluster while allowing the failed node to be made aware of its status and thus prevent data corruption. The remaining active nodes may also coordinate their behavior through the emergency message location. When a node is disconnected from a cluster, the invention provides methods that make reduced use of locks by coordinating locking with interrupt handling to release the global resources that were previously allocated to the node. These methods also provide an improved system to reallocate resources throughout the cluster. Other features and advantages of the present invention will become more fully apparent through the following description.
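The emergency message location on the shared disk might be sketched as below. This is a hedged illustration, not the patent's layout: a JSON file stands in for a reserved disk block, and all names are assumptions. The surviving nodes post the new membership; a node that has lost its interconnect reads the block and learns whether it has been evicted before touching shared data again.

```python
import json


def post_eviction_notice(path, evicted_node, surviving_nodes):
    """Surviving nodes write the new cluster membership to the
    emergency message location on the shared disk."""
    with open(path, "w") as f:
        json.dump({"evicted": evicted_node, "members": surviving_nodes}, f)


def check_own_status(path, my_node_id):
    """A node that cannot reach the cluster over the system area network
    reads the emergency message location to learn whether it has been
    removed, and must stop writing shared data if so."""
    try:
        with open(path) as f:
            notice = json.load(f)
    except FileNotFoundError:
        return "member"  # no notice posted; assume still a member
    return "evicted" if notice.get("evicted") == my_node_id else "member"
```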
BRIEF DESCRIPTION OF THE DRAWINGS
To illustrate the manner in which the advantages and features of the invention are obtained, a more particular description of the invention will be given with reference to the attached drawings. These drawings only illustrate selected aspects of the invention and thus do not limit the invention's scope. In the drawings:
FIG. 1 is a diagram illustrating one of many clustered computer systems suitable for use according to the present invention.
FIG. 2 is a diagram further illustrating two nodes in a cluster according to the invention.
FIG. 3 is a diagram illustrating method steps performed and results obtained for failure detection and diagnosis according to the invention.
FIG. 4 is a diagram relating the method of FIG. 3 to the nodes in FIG. 2.
FIG. 5 is a diagram illustrating structures used by the method of FIG. 4.
FIG. 6 is a diagram illustrating structures for using a shared disk as an alternative communication path according to the invention.
FIG. 7 is a diagram illustrating queues and related components for managing allocation of resources according to the invention.
FIG. 8 is a flowchart illustrating a method for managing resource allocation according to the invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
The present invention relates to methods, devices, and systems for managing resources in a clustered computing system. Before detailing the architectures of the invention, the meaning of several important terms is clarified. Specific examples are given to illustrate aspects of the invention, but those of skill in the art will understand that other examples may also fall within the meaning of the terms used. Some terms are also defined, either explicitly or implicitly, elsewhere herein. In particular, all portions of U.S. patent application Ser. No. 60/038,251 filed Feb. 21, 1997, which were not incorporated by reference into the technical background above are hereby incorporated by reference into this detailed description. In the event of a conflict, the text herein which is not incorporated by reference shall govern.
Some Terminology
As used here, “cluster” means a group of at least two interconnected computers (“nodes”) which can present a unified system image. Note that the cluster may also support execution of cluster-aware applications which pierce the unified system image to directly influence or control the division of labor between nodes. In many cases, but not all, the cluster will also include a shared disk or shared disk array or other shared nonvolatile storage subsystem which is directly accessible to more than one of the nodes.
The interconnected cluster nodes form a “system area network” which differs from legacy networks in that system area networks support presentation of a unified system image while legacy networks do not. Toward this end, system area networks generally have much greater bandwidth and much lower latency than legacy networks. Bandwidth and latency are thus measured with respect to local area networks and other legacy networks, and the numbers will change as the technologies of both system area networks and legacy networks advance.
As used here, “legacy network” includes many local area networks, wide area networks, metropolitan area networks, and/or various “Internet” networks such as the World Wide Web, a private Internet, a secure Internet, a value-added network, a virtual private network, an extranet, or an intranet. Clusters may be standalone, or they may be connected to one or more legacy networks; discussions of the cluster as a “node” on a legacy network should not be confused with discussions of intra-cluster nodes. Clusters may also use a legacy network as a backup link, as discussed in connection with FIG. 2, for instance.
Clusters Generally
One of many possible clusters suitable for use according to the invention is shown in FIG. 1, as indicated by the arrow labeled 100. The cluster 100 includes several servers 102 and a workstation node 104; other suitable clusters may contain other combinations of servers, workstations, diskless computers, laptops, multiprocessors, mainframes, so-called “network computers” or “lean clients”, personal digital assistants, and/or other computers as nodes 106.
The illustrated cluster 100 includes a special-purpose node 108; other clusters may contain additional such nodes 108 or omit such nodes 108. The special-purpose node 108 is a computer tailored, by special-purpose hardware and/or software (usually both), to perform particular tasks more efficiently than general purpose servers 102 or workstations 104. To give but a few of the many possible examples, the node 108 may be a graphics engine designed for rendering computer-generated images, a digital signal processor designed for enhancing visual or audio signals, a parallel processor designed for query or transaction processing, a symmetric multiprocessor designed for molecular modeling or other numeric simulations, or some other special-purpose computer or computer system (the node 108 could itself be a cluster which is presently dedicated to a specific application).
Although clusters are typically formed using standalone computers as nodes 106, embedded computer systems such as those used in automated manufacturing, process control, real-time sensing, and other facilities and devices may also serve as nodes 106. Clusters may also include I/O systems, such as printers, process controllers, sensors, numerically controlled manufacturing or rapid prototyping devices, robots, other data or control ports, or other interfaces with the world outside the cluster.
The nodes 106 communicate through a system area network 110 using interconnects 112. Suitable interconnects 112 include Scalable Coherent Interface (LAMP) interconnects, serial express (SciLite), asynchronous transfer mode, HiPPI, Super HiPPI, FibreChannel, Myrinet, Tandem ServerNet, and SerialBus (IEEE 1394/“FireWire”) interconnects (marks of their respective owners). The system area network 110 includes software for routing, switching, transport, and other networking functions. Software implementing the claimed invention may be integrated with the pre-existing system area network 110 functionality or it may be implemented separately.
The illustrated cluster also includes a shared disk array 114, such as a redundant array of disks. Other cluster embodiments include other shared nonvolatile storage such as uninterruptible-power-supply-backed random access memory or magnetic tapes. At least two servers 102 have access to the shared disks 114 through a channel 116 which does not rely on the interconnects 112 to operate.
One or more servers 102 may connect the cluster to a network 118 of workstations or mobile clients 120 and/or connect the cluster to other networks 122. The networks 118 and 122 are legacy networks (as opposed to system area networks) which may include communications or networking software such as the software available from Novell, Microsoft, Artisoft, and other vendors, and may operate using TCP/IP, SPX, IPX, and other protocols over twisted pair, coaxial, or optical fiber cables, telephone lines, satellites, microwave relays, modulated AC power lines, and/or other data transmission “wires” known to those of skill in the art. The networks 118 and 122 may encompass smaller networks and/or be connectable to other networks through a gateway or similar mechanism.
As suggested by FIG. 1, at least one of the nodes 106 is capable of using a floppy drive, tape drive, optical drive, magneto-optical drive, or other means to read a storage medium 124. A suitable storage medium 124 includes a magnetic, optical, or other computer-readable storage device having a specific physical configuration. Suitable storage devices include floppy disks, hard disks, tape, CD-ROMs, PROMs, random access memory, and other computer system storage devices. The physical configuration represents data and instructions which cause the cluster and/or its nodes to operate in a specific and predefined manner as described herein. Thus, the medium 124 tangibly embodies a program, functions, and/or instructions that are executable by computer(s) to assist cluster resource management substantially as described herein.
Suitable software for implementing the invention is readily provided by those of skill in the art using the teachings presented here and programming languages and tools such as Java, Pascal, C++, C, CGI, Perl, SQL, APIs, SDKs, assembly, firmware, microcode, and/or other languages and tools.
Cluster Nodes
An overview of two cluster nodes 200, 202 and their immediate environment is now given with reference to FIG. 2. The nodes 200, 202 are interconnected by interconnects 112 and one or more system area network switches 204. Suitable interconnects 112 and switches 204 include commercially available devices from Dolphin, Tandem, Myricom, and other suppliers, including without limitation devices described in materials filed with the Patent Office in connection with this application.
In the illustrated cluster, the nodes 200 and 202 are also connected by a backup link 206 such as an RS-232 link, an Ethernet, or another local area network. The relatively low bandwidth and/or high latency of the backup link 206 in comparison to the system area network 112, 204 requires that use of the backup link be infrequent; the backup link 206 is typically used only in emergencies such as a failure of the system area network interconnection. In such emergencies, familiar protocols are used to avoid “split-brain” problems that damage or destroy data on the shared disk 114.
Other clusters do not include the backup link 206. Indeed, as explained below, the present invention provides a substitute for the backup link 206 in the form of an emergency communication channel using the shared disk 114. However, the inventive emergency communication channel may also be used to advantage in clusters 100 that include a backup link 206, to provide additional redundancy in communication paths.
As discussed below, each of the illustrated nodes 200, 202 includes software, hardware in the form of processors and memory, and sharable resources which have been allocated to the node. Node A 200 also contains a pool 212 of resources which are not presently allocated.
The node 106 software includes a local (to the node) operating system 208 such as Novell NetWare, Microsoft Windows NT, UNIX, IBM AIX, Linux, or another operating system (NETWARE is a mark of Novell; WINDOWS NT is a mark of Microsoft; other marks belong to their respective owners). Interrupt handlers and vectors 210 are part of the operating system 208 and/or provided in loadable modules, drivers, exception handlers, or similar low-level routines. Many of the interrupt handlers 210 are standard, commercially available components. However, the interrupt handlers 210 may also include routines implemented according to the present invention for managing a pool 212 of sharable resources such as memory buffers or bandwidth credits.
The illustrated node 106 software also includes a debugger 214. Cluster 100 debuggers will generally be more complex than debuggers on standalone computers. For instance, it may be desirable to kick every node 106 into debugging mode when one node 106 enters that mode. For this reason, and for convenience, the debuggers 214 on separate nodes 106 preferably communicate with one another, either through the system area network switch 204, the backup link 206, or the emergency communication channel of the present invention.
Each node 106 includes one or more processors 216. Suitable processors include commercially available processors such as Intel processors, Motorola processors, Digital Equipment processors, and others. For purposes of the present invention, the processors 216 may include PALs, ASICs, microcoded engines, numeric or graphics coprocessors, processor cache, associated logic, and other processing hardware and firmware.
Each node 106 also includes local memory 218 for storing data and instructions used and manipulated by the processors, including data and instructions for the software described above or elsewhere herein. The local memory may include RAM, ROM, flash memory, or other memory devices. The illustrated nodes 200, 202 also include shared memory 220 which is accessible by other nodes 106. Other cluster 100 configurations place all shared memory on a single node 106, or in a separate device which supports memory transfers but lacks a processor 216.
Each of the illustrated nodes 106 also contains resources 222 which have been allocated to the node 106 from the resource pool 212. As noted, the allocated resources may be memory buffers (residing in shared memory 220); credits toward bandwidth, priority, or other scarce cluster 100 resources; or any other computational resource which it is more cost-effective to share among nodes than it is to dedicate permanently to each node. By contrast, the processors 216 and interconnects 112 are typically dedicated rather than pooled. At other times during execution of instructions by the nodes 106, one or both of the illustrated nodes 106 might have returned the resources to the pool 212. In other clusters 100, the pool 212 and/or associated structures that manage the allocation could also be distributed among several nodes 106 instead of residing on a single node 200.
Resource Management Generally
The processors 216, memories 218 and 220, sharable resources 212 and 222, shared disk 114, backup link 206 (if any), and other cluster components are resources that must be efficiently managed to make clusters cost-effective. Good cluster resource management includes methods and tools for (a) detecting failures, (b) compensating for failures, and (c) reallocating sharable resources between nodes 106 when cluster membership or other circumstances change significantly.
For instance, maximizing availability of the cluster's resources to application software requires (a) rapid detection of inter-node communication problems, (b) rapid and accurate diagnosis of the source of such a problem, and (c) rapid compensation steps to either restore the system area network or else remove a node when it can no longer be reached through the network. When a node is removed from working membership in the cluster, the node's access to the shared disk 114 must be blocked to prevent the removed node from destroying data. Sharable resources 222 allocated to the removed node should also be returned to the pool 212.
Likewise, when a node 106 is restored to membership in the working cluster 100, or when a node 106 is first added to the cluster 100, resources must be managed appropriately. The rest of the cluster 100 must be notified of the new node 106 so the other nodes 106 can detect any subsequent failure of the new node 106. The new node 106 must typically be given access to the shared disk 114 and a chance to request sharable resources 222 from the pool 212.
Moreover, during the course of normal operation, both new nodes 106 and other nodes 106 must be capable of obtaining or returning sharable resources 222 as needed to perform their assigned tasks and allow the other nodes 106 to perform their assigned tasks. For instance, memory buffers 222 that are no longer needed should be promptly returned to the pool 212, without interfering with nodes 106 that are busy on tasks that don't use buffers 222.
Various aspects of resource management are discussed in greater detail below, including failure detection and diagnosis, compensation for inter-node communication failures, and reallocation of sharable resources. Embodiments and processes according to the present invention may include any or all of the novel improvements presented here.
Failure Detection and Diagnosis
One conventional approach to failure detection includes broadcasting a heartbeat signal; in effect, each node continually tells the other nodes (or a cluster manager node) "I am still running." When a predetermined time passes without another heartbeat signal arriving, the node whose heartbeat is missing is presumed to have failed. Another known approach monitors a remote interconnect register; during normal operation the register's value is regularly changed. When a predetermined time passes without a change in the register value, the software on the associated remote node is presumed to have failed.
Unfortunately, these conventional methods provide little or no helpful information with which to diagnose the nature and cause of communication problems. The heartbeat signal may not arrive because the sending node suffered a software failure, because it suffered a hardware failure, because it was placed in debugging mode (which slows or temporarily stops execution), or because one or more of the interconnects or system area network switches failed. More than one of these causes may also be present.
FIGS. 3 through 5 illustrate an approach to failure detection and diagnosis provided by the present invention. The invention makes specific diagnosis of problems easier and more accurate, thereby promoting rapid and correct compensation by the cluster 100 when a communication failure occurs.
During an initial probing step 300, a first node 106, 400 (denoted K) probes remote memory located in a second node 106 (denoted J) in an attempt to obtain initial values from a probe structure. Suitable probe structures, which are discussed below, include without limitation a register 402, word, byte, or other addressable memory location and/or a structure 502 residing in several addressable memory locations. The probing step 300 generally supplies the probing node 400 with a copy of the value stored in the remote memory location(s) probed, such as remote registers 402 or memory pages 404. In one embodiment, the retrieved value merely contains a counter value 508 or other value which is regularly updated by the remote node 106, interconnect 112, or other probed device so long as that probed device is operating normally.
However, in other embodiments the retrieved value contains more than just the counter value 508. For instance, the retrieved value may include a validation component 510. The validation 510 is used during a validating step 302 to reduce the risk of relying on an invalid counter value 508. For instance, in devices whose memory on startup contains random values, the validation may be set to an unusual value (such as all zero bits or all one bits) after the counter value 508 is properly set by the device being probed. In devices whose memory is initialized on startup (by being zeroed, for instance), the validation 510 may be set to a value other than the initial value. Alternatively, the validation 510 may be a checksum computed from the counter value 508 and/or based on the value of other components of the probe structure 502.
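The checksum variant of the validation component 510 can be sketched in C. The struct layout, field names, and the one's-complement checksum below are illustrative assumptions, not a layout mandated by the patent:

```c
#include <stdint.h>

/* Hypothetical layout of the probe structure 502: a counter that the
   probed device updates while healthy, plus a validation checksum.
   Field names are illustrative. */
typedef struct {
    uint32_t counter;     /* counter value 508, updated periodically */
    uint32_t validation;  /* validation component 510 */
} probe_t;

/* One validation scheme described in the text: a checksum computed
   from the counter value. Here, a simple one's-complement checksum,
   chosen so that zero-initialized memory never validates. */
static uint32_t probe_checksum(uint32_t counter)
{
    return ~counter;
}

/* Called by the probed device each time it updates its counter. */
static void probe_update(probe_t *p)
{
    p->counter++;
    p->validation = probe_checksum(p->counter);
}

/* Called by the probing node on the copy it read from remote memory;
   returns nonzero only if the counter can be trusted. */
static int probe_is_valid(const probe_t *p)
{
    return p->validation == probe_checksum(p->counter);
}
```

Note that a freshly zeroed probe structure fails validation, matching the text's point that memory initialized on startup must not accidentally look valid.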
If the validating step 302 does not find a valid checksum or other validation of the counter 508, then the probing node 400 proceeds on the assumption that the probed device is presently unavailable; retry loops can then be made. Conclusions can be drawn about the cause of the unavailability using a process similar to that described below in connection with a normal operating step 304.
During the step 304, the probing node 400 performs tasks which may require communication with the probed node. For clarity of illustration, only those aspects of the tasks that involve detecting and diagnosing failures to communicate with node J are shown. Two basic approaches to failure detection are possible, as indicated by steps 306 and 308, respectively.
As indicated by step 306, the probing node 400 may closely monitor node J or another device such as an interconnect 112 or system area network switch 204, regardless of whether the probing node 400 and the probed device need to send computational results back and forth. That is, the probing node K may serve as a "watchdog" to detect failures as rapidly as possible. The probe structure update interval and the monitoring interval should be staggered, such as being twice and thrice some interval T, to avoid false conclusions. One suitable T is 0.5 seconds. Such a watchdog approach could be used, for example, in a real-time sensory data gathering cluster 100 when communications between the two nodes 106 are critical but also relatively infrequent, allowing time for most problems to be fixed if they are detected quickly enough.
On the other hand, the probing node 400 may take the approach indicated by step 308 and probe the device to determine its status only when the probing node 400 is ready for data or control information to move between it and the probed device. This approach reduces use of the system area network 110 by remote memory probes, freeing bandwidth and possibly also processors 216 to perform other work.
Regardless of whether step 306, step 308, or some mixture of the two steps is used, assume now that the probing node 400 needs to determine whether it can still communicate with the remote device. In one embodiment, the probing node 400 assumes during a step 310 that communication is still possible if the probing node 400 communicated with the device not long ago. That is, the cluster 100 includes resource management means for remotely probing memory in a device (such as a remote node 106, an interconnect 112, or a switch 204) when the most recent communication with the device occurred more than a predetermined period of time in the past.
The length of the predetermined period necessarily varies between clusters 100, and may vary within a given cluster 100 in response to changing circumstances. Using a longer duration increases the risk of a "false positive," that is, of concluding that communication is still possible when it actually is not. The duration used will normally be orders of magnitude less than the mean time between failures of the communications path in question. In general, the duration used should also be less than the time needed to reroute the data to another destination or recapture the data that was lost because the communications failed. It may also be appropriate to reduce the duration used based on the size of the remote device's buffers and the rate at which it receives or produces data to be sent to the probing node 400.
If communication between the probing node 400 (or another probing device) and the remote device is not recent enough, then the probing node 400 tries during step 312 to probe the device's memory to obtain a copy of at least the counter 508, and to receive copies of any other probe structure 502 components present in the embodiment. Attempts to probe remote memory during steps 300 and 312 may invoke different routines for different devices, but a uniform interface such as an application program interface ("API") call is also possible. One suitable API includes two functions which return results from a predefined set of outcomes, as shown in the following pseudocode:
ProbeGet (LONG RemoteDeviceId, PROBE* ProbePtr) returns ProbeResult;
ProbeSet (LONG RemoteDeviceId, PROBE* ProbePtr) returns ProbeResult;
Enumerated type ProbeResult is {
RESULT_SUCCESS, // successful call
RESULT_BADARG, // bad argument
RESULT_NOMEM, // no memory for operation
RESULT_INUSE, // port or item already in use
RESULT_UNKNOWN, // reference to unknown item
RESULT_UNREACHABLE, // target node unreachable
RESULT_LINKDOWN, // interconnect link is down
RESULT_FAILURE // general failure
};
As indicated by results 314 through 328, the present invention provides detailed information regarding the cause of communication failures. For instance, if the ProbeGet() call or other remote memory read is successful, the counter 508 is validated by the validation field 510, and the counter 508 value read differs from the last value read (during step 300 or a previous step 312), then the likelihood is high that both the remote device or node and the intervening interconnect(s) are working. That is, condition 314 holds.
However, it may happen that the remote memory read is successful but the counter value 508 is not valid. This could indicate either condition 320 (node software has crashed) or condition 316 (node operating system 208 is rebooting and/or node applications software is restarting). To distinguish between these conditions, one embodiment uses a bitflag or other status values in a status summary 506. The bitflag is set when the software is about to restart/reboot, and is cleared otherwise.
Some embodiments also include an epoch value 504 which is incremented (or decremented) by each restart/reboot. This allows the probing node 400 to distinguish between conditions 314 and 318, that is, between a valid counter 508 set during the previous software execution on the remote device and a valid counter 508 set during the current execution. Overly frequent reboots or restarts may be worth investigating even if communication is eventually possible, because they tend to reduce cluster 100 availability and efficiency.
In some embodiments, the debugger 214 sets status bits 506 when it is invoked. This allows the probing node 400 to detect condition 322 (remote device in debugging mode) by determining that the remote memory read succeeded, the probe structure 502 was validated by the field 510, and the debugger flag 506 is set. This condition may then be propagated, so that when one node 106 is forced into the debugger by an overflow, illegal address, abend, or similar problem, that fact is rapidly detected and the other cluster nodes are asked (or forced) to also yield control to their respective debuggers 214.
In each of the preceding examples, the attempt to read remote memory succeeded in retrieving a value from that memory. However, if one or more of the interconnects 112, system area network switches 204, or hardware within the remote device fails, then the remote memory will often be inaccessible, making the remote memory's contents unavailable. Some embodiments include hardware that allows the ProbeGet() call or other remote memory read to distinguish between reading a value from memory and failing to read a value from memory. Thus, the probing node 400 may detect conditions 324 through 328 (some type of hardware failure).
To localize the hardware failure, additional attempts may be made to read remote memory from different devices in the communication path, as illustrated in FIG. 4. For instance, if a register 402 (containing a counter 508 or containing some other value) can be read but a page 404 of shared memory 220 in the remote node 106 cannot be read, then condition 324 (node hardware crashed but interconnect works) is likely. If the interconnect register 402 cannot be read, then either condition 326 (interconnect failed) or condition 328 (interconnect and/or node failed) is present. By checking for continued activity by the remote node 106 through a different communication channel, such as the backup link 206 or the shared disk 114, the probing node 400 may determine either that the interconnect 112 and remote node 106 have both crashed or that the interconnect 112 is down but the remote node 106 is still running.
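The diagnosis of conditions 314 through 328 amounts to a small decision procedure over the probe outcomes. The following C sketch is illustrative only; the condition names, status bitflags, and function signature are assumptions, not part of the patent's pseudocode:

```c
/* Sketch of the diagnosis logic behind conditions 314-328.
   All names and flag values below are hypothetical. */
typedef enum {
    COND_ALL_WORKING,       /* 314: node and interconnect up */
    COND_NODE_RESTARTING,   /* 316: OS rebooting / app restarting */
    COND_NODE_SW_CRASHED,   /* 320: node software crashed */
    COND_NODE_IN_DEBUGGER,  /* 322: remote device in debugging mode */
    COND_NODE_HW_CRASHED,   /* 324: node hardware down, interconnect works */
    COND_INTERCONNECT_DOWN  /* 326/328: localize via backup link or shared disk */
} condition_t;

#define STATUS_REBOOTING 0x1u  /* bitflag in status summary 506 */
#define STATUS_DEBUGGER  0x2u  /* set by the debugger 214 when invoked */

condition_t diagnose(int node_mem_read_ok, int interconnect_reg_read_ok,
                     int counter_valid, unsigned status_flags)
{
    if (node_mem_read_ok) {
        if (counter_valid) {
            if (status_flags & STATUS_DEBUGGER)
                return COND_NODE_IN_DEBUGGER;
            return COND_ALL_WORKING;
        }
        /* read succeeded but counter invalid: a software problem */
        return (status_flags & STATUS_REBOOTING)
             ? COND_NODE_RESTARTING : COND_NODE_SW_CRASHED;
    }
    /* node memory unreachable: localize the hardware failure by
       checking whether the interconnect register 402 is readable */
    return interconnect_reg_read_ok
         ? COND_NODE_HW_CRASHED : COND_INTERCONNECT_DOWN;
}
```

In the last case a second communication channel (backup link 206 or shared disk 114) would be consulted, as the text describes, to separate condition 326 from 328.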
In addition to the information already discussed, a remote memory read may provide additional data, as shown in FIG. 5. A root pointer 512 may direct the probing node 400 to bootstrapping information to allow a remote reboot of the failed node 106 or failed device, with the reboot being aided or initiated by the probing node 400. A root pointer 512 may also allow a booting node to locate communications buffers in a remote node in order to establish higher level communication. A root pointer 512 may also be used to allow a booting node to download code from another node 106 that is already running. More generally, the root pointer 512 may point to boot code or to a communications buffer.
The status summary 506 and/or a separate message area 514 may contain diagnostic information such as debugging traces, the call chain, the ID of the last thread or task invoked before the remote device or remote node 106 failed (in effect, identifying the node's "killer"), error messages, load module maps, system usage statistics, or communication logs. This information may prove very helpful in determining the cause of failures (especially software failures) and in selecting steps to compensate for the failure. Possible compensating steps include cutting the node out of the cluster until an administrator puts it back in, rebooting the node, restarting a particular task or thread, creating a certain file or directory or loading certain code and then retrying the operation, and so on; which steps should be tried depends on the likely cause of the failure.
Although specific examples are given, those of skill will appreciate that various combinations of the illustrated elements are also possible. For instance, the method steps illustrated and discussed here may be performed in various orders, except in those cases in which the results of one step are required as input to another step. Likewise, steps may be omitted unless called for in the claims, regardless of whether they are expressly described as optional in this Detailed Description. Steps may also be repeated, combined, or named differently. As a few of the many possible examples, some embodiments omit step 310; some have every node probe every other node, while others have only designated monitor nodes do the probing.
Likewise, some embodiments group the conditions differently. For instance, one tracks restarts using epoch values 504 but does not distinguish interconnect 112 hardware failures from remote node 106 hardware failures. Another embodiment reads hardware status registers to obtain more detail regarding hardware failures, such as distinguishing between a loss of power and a loss of signal connection.
As shown in FIGS. 4 and 5, different embodiments also organize the remote memory probe structures in different ways. Some use a read-only register or two, while others use RAM that is both remotely readable and remotely writable. Some read the counter 508 directly, while others follow an address pointer 500 or additional levels of indirection. Some use only a few bytes or words of memory, while others dedicate an entire block or page (probably one having identical physical and logical addresses). Some use all the fields shown in FIG. 5, while others use only a counter 508, or only a counter 508 and validation checksum 510, or some other subset of the fields shown, or supplement the subset with additional information. Some embodiments probe both the interconnect 112 and the remote node 106, while others probe only the interconnect 112 or only the remote node 106; yet others also probe the system area network switches 204.
In each embodiment, however, the remote memory probe provides useful information about the nature and/or location of a cluster 100 component failure, which can be used to select between different compensating steps. This in turn promotes cluster availability and effectiveness.
Failure Management by Node Removal
A “split brain” occurs when one or more interconnect 112 and/or switch 204 failures prevent regular communication with one or more nodes 106 and there is a significant risk that the silent nodes 106 will corrupt or damage data on the shared storage 114. Determining whether it is necessary to “freeze out” or “fence off” (temporarily remove) the silent node(s) 106 and/or block their access to the shared storage 114 is faster and easier if an alternative communication path to the silent node(s) is available. Many clusters use the backup network or serial/parallel link 206 as such a path.
To avoid the expense, complexity, and maintenance requirements of using the backup link 206, some embodiments according to the present invention use the shared disk 114 as an alternative communication path during possible split brain episodes and/or other situations in which the system area network 110 is unavailable (e.g., interconnects 112 or switches 204 are down) or inappropriate (e.g., nonvolatile storage is desired). In addition, some embodiments use both the backup link 206 and the shared disk 114 as communication paths, since redundancy increases overall cluster 100 reliability.
The nodes 106 in question will already have access to the shared disk 114 through channels 116. Implementing the shared disk communication path according to the invention involves selecting an emergency message location 224 on the disk 114. The location 224 may be made known to all nodes 106 by hard-coding it in node software such as the operating system 208 or interrupt handlers 210. Alternatively, the location may be dependent on some event, such as the last file written by the node 106 or the most recent log entry written. Or the location may be specified in a boot sector on the disk 114.
Although the location 224 may be a partition reserved for emergency communications, this uses an entry in a partition table that may be limited to very few entries. It is therefore preferred that the location 224 be specified as a particular disk sector, a particular file, or another fixed address relative to an addressing scheme that allows at least dozens or hundreds of entries.
The messages stored at the location 224 may include information organized in a structure such as that indicated generally at 600 in FIG. 6. The emergency communication structure 600 may also serve as a cluster node registry 600 which is maintained during operation of the cluster 100 as nodes 106 are added, removed, or assigned to different roles. The structure 600 may be implemented as an array, linked list, doubly linked list, balanced tree, or other data structure.
The illustrated structure includes a header 602 and a collection of two or more node records 604. The header 602 includes a field specifying the number of currently active nodes 606; active nodes are those running and in normal communication with the rest of the cluster 100. Another field specifies the number of total nodes 608, that is, the maximum number of active nodes in the current hardware configuration.
A cluster master field 610 identifies the node that is currently responsible for coordinating node removal in the event of a split brain event. The cluster master node 106 may also be responsible for monitoring the other nodes using remote memory probes as discussed above, or using conventional heartbeat monitoring. Alternatively, all nodes may monitor one another, or each node may monitor only the nodes it communicates with.
Each of the illustrated node records 604 includes a node ID 612, such as a node system area network address, node table index, node name, or other identifier. An epoch field 614 indicates the number of times the node 106 in question has rebooted since the cluster 100 started running; the epoch 614 may also track transaction rollbacks, application program restarts, or other retry indicators. A node role field 616 indicates whether the node 106 in question is suitable for service as a cluster master, whether the node 106 includes special purpose features such as a graphics engine, and/or whether the node 106 serves as the primary interface to users or I/O devices. A node status field 618 may contain status and diagnostic information of the type discussed in connection with FIGS. 3 through 5.
In other embodiments, the communication structure 600 may omit some of the illustrated fields and/or contain other fields. For instance, a semaphore or mutex may be present to synchronize updates to the structure 600; a checksum or other validation field may be present; and bootstrapping information of the kind discussed in connection with FIG. 5 may be present.
In operation, the structure 600 at the emergency message location 224 is used by the cluster master and the other nodes 106 to coordinate their actions when communication through the system area network 110 is prevented. The coordination may include setting status 618 flags that order a silent node to stop accessing the application area on the shared disk 114, to shut a certain task down, to enter the debugger 214, and/or to shut itself down (node "poison pill"). Coordination may include reassigning the role of cluster master if the cluster master goes silent.
The structure 600 may also be used by the cluster master and/or debuggers 214 to maintain a persistent store of diagnostic information, such as epoch counts 614, task IDs, stack snapshots, memory dumps, and the like, in a location that is accessible to the other nodes 106 in the cluster. Indeed, the remote memory probes discussed above in connection with FIGS. 3 through 5 may be mirrored or initially performed on embodiments of the structure 600 which include fields such as those shown in FIG. 5.
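As a concrete illustration, the emergency communication structure 600 might be laid out in C as below, together with a sketch of reassigning the cluster master when the current master goes silent. All field names, flag values, and the election policy here are hypothetical; the patent specifies only the fields 602 through 618, not their encoding:

```c
#include <stdint.h>
#include <stddef.h>

#define MAX_NODES 8  /* illustrative capacity */

/* One node record 604 */
typedef struct {
    uint32_t node_id;  /* node ID 612 */
    uint32_t epoch;    /* epoch field 614: reboots since cluster start */
    uint32_t role;     /* node role field 616 */
    uint32_t status;   /* node status field 618 */
} node_record_t;

/* Structure 600: header 602 followed by node records 604 */
typedef struct {
    uint32_t active_nodes;    /* field 606 */
    uint32_t total_nodes;     /* field 608 */
    uint32_t cluster_master;  /* field 610: node ID of current master */
    node_record_t nodes[MAX_NODES];
} cluster_registry_t;

#define ROLE_MASTER_CAPABLE 0x1u  /* hypothetical role 616 bit */
#define STATUS_SILENT       0x1u  /* hypothetical status 618 bit */

/* Pick the first master-capable node that is not silent and record it
   in the cluster master field 610; returns 0 if none is eligible. */
uint32_t elect_master(cluster_registry_t *reg)
{
    for (size_t i = 0; i < reg->total_nodes; i++) {
        node_record_t *n = &reg->nodes[i];
        if ((n->role & ROLE_MASTER_CAPABLE) && !(n->status & STATUS_SILENT)) {
            reg->cluster_master = n->node_id;
            return n->node_id;
        }
    }
    return 0;
}
```

In an actual embodiment this structure would live at the emergency message location 224 on the shared disk 114, with updates serialized by a semaphore or mutex as the text notes.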
Resource Reallocation
Methods and tools for mediating requests for sharable resources 222 from a global pool 212 are well-known; the available tools and methods include those for preventing deadlock, for load-balancing, for scheduling, and for cache management, among others. The present invention provides a new approach to performing reallocation once the cluster 100 has determined where the resources 212, 222 should be placed. One illustration of reallocation according to the invention is shown in FIGS. 7 and 8.
As shown in FIG. 7, the cluster 100 includes a global queue 700 and several local queues 702 identifying free resources 704; "free" in this context means "available for allocation" rather than "without cost." A queue and lock management means 706 controls access to the global queue 700 using a head pointer 708 pointing to a linked list of resources 704 and a lock 710. Of course, a global group and corresponding local groups of arrays, doubly-linked lists, trees, and other structures may be used in place of the linked lists shown to manage the sharable resources 704. The queue and lock management means 706 controls access to the local queues using head pointers 712 and interrupt handlers 210. Suitable locks 710 include mutexes, semaphores, and other concurrent process synchronization tools, including many which are familiar to those of skill in the art. One implementation of the queue and lock management means 706 using a mutex and interrupts is described by the pseudo-code below.
FIG. 8 further illustrates a portion 800 of the queue and lock management means 706, corresponding to the routine Getbuffer() in the pseudo-code. During an interrupt disabling step 802, interrupts on a node 106 are disabled and the processor 216 state is saved. This is accomplished using push status word and clear interrupt or similar assembly language instructions. If the local queue 702 from which a resource 704 is being requested is empty, as will be the case the first time through the routine 800 and thereafter on occasion, then a step 804 attempts to distribute resources 704 to this (and possibly other) local queues 702 from the global queue 700. Distribution includes obtaining the lock 710 during a step 806, parceling out the resources 704 during a step 808, and then releasing the global queue lock 710 during a step 810.
If resources 704 are available in the local queue 702 in question, then one or more resources 704 are removed from the local queue 702 during a step 812. The resource(s) 704 are then given to the calling application or other process after the processor state is restored and interrupts are re-enabled, during a step 814. This is accomplished using pop status word, set interrupt, and/or similar assembly language instructions, and by passing a pointer to the released resource(s) 704 to the caller as a return value on the stack. Of course, the pointer could also be placed in shared memory or returned in another manner. Resources 704 which are represented compactly, such as bandwidth credits in some clusters 100, may be returned directly rather than through a pointer.
As used herein, “interrupt handler” means code that runs while an interrupt is disabled. Interrupt handlers in this sense are not limited to device drivers. Interrupts are not necessarily re-enabled when processor state is restored, because they may have been disabled when the interrupt handler took control.
Prior to initialization and after resource demands are placed, the global queue 700 may be empty. In this event, an optional step 816 makes room for more resources 704 in the global queue 700 by allocating memory, for example, or by negotiating with a bandwidth credit allocation manager. If there is room in the global queue 700, resources 704 are added to the global queue 700 during a step 818. The new resources 704 may be effectively created in place, as during allocation of memory buffers 704, or they may be moved into the global queue 700 from another location. In particular, resources 704 may on occasion be moved into the global queue 700 from one or more of the local queues 702.
The ReturnBuffer() routine in the pseudo-code, and similar portions of other embodiments of the queue and lock management means 706, operate in a manner similar to Getbuffer() and the Get Resource step 800. However, ReturnBuffer() and its variations return resources 704 to the local queue 702 after the resources 704 are no longer needed by an application program or other software on a node 106. In particular, resources 704 are preferably returned to the local queue 702 when the application dies or is killed. Resources 704 are preferably returned to the global queue 700 when a node 106 is removed from the cluster 100 to avoid split brain problems or to free the node 106 for use in another cluster 100 or as a standalone computer. In such cases, access to the local queue 702 is through the interrupt handler 210 and access to the global queue 700 is controlled by the lock 710.
One advantage of the present invention is reduced use of locks, which in turn reduces the frequency and extent of processor 216 execution delays. Only the global queue 700 requires a mutex or similar global lock 710. The local queues 702 are manipulated inside interrupt handlers 210 that are local to the node 106 to which the local queue 702 in question belongs. Thus, operations which can alter the local queues 702 (such as the addition or removal of resources 704, the reordering of the queue 702, or updates to timestamps on resources 704) only prevent other processes from working on the node 106 in question; the other nodes 106 can continue application or other tasks without delay. Only when the global queue 700 is being modified is access globally blocked. Reducing lock usage improves cluster 100 throughput. Allocation and return are also independent. That is, a resource 704 allocated by one processor 216 may be returned for subsequent use by another processor 216.
Although one embodiment of the invention provides each processor 216 with its own local resource queue 702, in other embodiments some processors 216 have no resource queue 702. In some embodiments, a local queue 702 is associated with a set of processors 216 rather than a single processor 216, with a set of processes or tasks or threads, and/or with a set of one or more cluster nodes 106.
Heuristics are also used during the parceling out step 808 and/or the resource creation step 818 to determine actual and expected resource 704 allocation. One approach uses thresholds such as the number of resources 704 and/or the number of processors 216. For instance, any local queue 702 containing more than twice its per capita share of the available resources 704 may be required to return resources to the global queue 700 for subsequent redistribution to other local queues 702. Time thresholds may also be used. For instance, resources 704 not allocated to an application program within a minute of being created may be freed from the global queue 700 back to the operating system 208.
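The per capita threshold heuristic described above can be sketched as a small C helper. The policy of returning the excess down to one fair share is an assumption for illustration; the patent states only the "twice its per capita share" trigger:

```c
/* Sketch of the parceling-out heuristic for step 808: a local queue
   holding more than twice its per capita share of the free resources
   704 must return buffers to the global queue 700. Returning down to
   one per capita share is an illustrative policy choice. */
long excess_to_return(long local_count, long total_free, long num_queues)
{
    long per_capita = total_free / num_queues;
    long threshold  = 2 * per_capita;

    if (local_count > threshold)
        return local_count - per_capita;  /* give back the surplus */
    return 0;                             /* within its fair share */
}
```

For example, with 100 free resources spread over 4 queues, a queue holding 60 exceeds the threshold of 50 and returns 35, leaving it the per capita share of 25.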
In one embodiment, the cluster 100 uses means and methods described in the following pseudo-code as part of the queue and lock management code 706 to manage resources 704 in the form of dynamic buffers:
/*-------- DATA DECLARATIONS --------*/
/* Maximum number of processors in this machine */
#define MAXIMUM_NUMBER_OF_PROCESSORS 4
/* Maximum number of Buffers allowed ever */
#define MAXIMUM_NUMBER_OF_BUFFERS_POSSIBLE 5000
/* Number of buffers you want to leave in global queue
when allocating all the buffers among the Local Queues. */
#define RSVD_NUMBER_BUFFERS 10
/*  Number of buffers to add to a queue at one time */
#define NUMBER_ADD_BUFFERS 10
/* Generic Buffer */
typedef struct _buff_t {
struct _buff_t *nextLink;
struct _buff_t *prevLink;
uint8  buffer[1024];
} buff_t;
/* Generic Mutual Exclusion Variable */
typedef _mutex_t mutex_t;
/* MUTEX which controls access to Global Free Queue */
mutex_t *buff_FreeQueue_Lock = NULL;
LONG buff_FreeQueue_TotalCount = 0;    /* Total # of buffers allocated (global) */
LONG buff_FreeQueue_Count = 0;         /* Current # of free buffers (global) */
LONG buff_FreeQueue_MaxLocalCount = 0; /* Max # of buffers per local queue */
/* Global Buffer Queue, Head / Tail Pointers */
buff_t *buff_FreeQueue_Head = NULL;
buff_t *buff_FreeQueue_Tail = (buff_t*) &buff_FreeQueue_Head;
/* Local Buffer Queues, Head/Tail Pointers indexed by number of Processors */
buff_t *buff_FreeLocQueue_Head[MAXIMUM_NUMBER_OF_PROCESSORS];
buff_t *buff_FreeLocQueue_Tail[MAXIMUM_NUMBER_OF_PROCESSORS];
/* Local Buffer Queues/processor, Current Count and Maximum Count.
   A Count of -1 implies buffers still need to be assigned to the local queue.
   MaxCount varies with the number of Processors; when
   buff_FreeLocQueue_Count[i] exceeds buff_FreeLocQueue_MaxCount[i], the
   limit for this Processor has been reached and RSVD_NUMBER_BUFFERS are
   returned to the Global Queue for re-distribution. */
LONG buff_FreeLocQueue_Count[MAXIMUM_NUMBER_OF_PROCESSORS];
LONG buff_FreeLocQueue_MaxCount[MAXIMUM_NUMBER_OF_PROCESSORS];
extern void initLock(mutex_t *pmutex);  /* Function, initialize a mutex */
extern void lock(mutex_t *pmutex);      /* Function, obtain a LOCK on a mutex */
extern void unlock(mutex_t *pmutex);    /* Function, release a LOCK on a mutex */
/*-------- INITIALIZATION OF DATA QUEUES AND VARIABLES --------*/
buff_t *buffp;
LONG i = 0, j = 0, NumProcs = 0, CPUsActiveMask = 0, CPUMask = 1;
for (i = 0; i < (MAXIMUM_NUMBER_OF_BUFFERS_POSSIBLE / 2); i++)
{
buffp = (buff_t*) Alloc (sizeof (buff_t));
/* initialize buffer fields */
buffp->nextLink = NULL;
buffp->prevLink = NULL;
/* append the buffer to the Global Free Queue */
buff_FreeQueue_Tail->nextLink = buffp;
buff_FreeQueue_Tail = buffp;
buff_FreeQueue_Count++;
/* keep count of Total number of buffers allocated */
buff_FreeQueue_TotalCount++;
}
initLock (buff_FreeQueue_Lock); /*initialize Mutex */
/* Initialize Local buff Free Queues */
NumProcs = MAXIMUM_NUMBER_OF_PROCESSORS;
/* Calculate the maximum number of buffers available for a local queue */
buff_FreeQueue_MaxLocalCount = (buff_FreeQueue_TotalCount -
RSVD_NUMBER_BUFFERS) / NumProcs;
for (i = 0; i < MAXIMUM_NUMBER_OF_PROCESSORS; i++)
{
buff_FreeLocQueue_Tail[i] = (buff_t*) &buff_FreeLocQueue_Head[i];
buff_FreeLocQueue_Count[i] = 0;
/* Set minimum value in case take Interrupt before get an
Event that a processor has come on line */
buff_FreeLocQueue_MaxCount[i] = RSVD_NUMBER_BUFFERS * 2;
}
/* Now having allocated the buffers, let's parcel them out to the
Local buffer Free Queues. */
/* get bit mask of current processors OnLine */
GetActiveCPUMap (&CPUsActiveMask);
for (i=0; i < MAXIMUM_NUMBER_OF_PROCESSORS; i++)
{
if (CPUsActiveMask & CPUMask) /* increase max allowed on other local queues */
{
buff_FreeLocQueue_MaxCount[i] = buff_FreeQueue_MaxLocalCount;
ReDistBuffersToLocalQ (buff_FreeQueue_MaxLocalCount, i);/* parcel out */
}
CPUMask = CPUMask << 1;
}
RegisterForEventProcessorComesOnLine (ProcStatusOnLine);
RegisterForEventProcessorGoesOffLine (ProcStatusOffLine);
/*--------END OF INITIALIZATION--------*/
/*-------- OPERATIONAL LIBRARY ROUTINES ------------*/
;
; LONG DisableProcessorAndSaveState (void); “C” Language syntax
;
; Disable the current processor and return its state.
; Example using X86 instructions,
; assumes normal X86 Assembly to C calling convention
;
DisableProcessorAndSaveState proc
        pushfd                  ; place current processor state on stack
        pop     eax             ; now get it in register EAX
        cli                     ; disable interrupts for current processor
        ret                     ; return to caller, processor state in EAX
DisableProcessorAndSaveState endp
;
; void RestoreProcessorState (LONG state); “C” Language syntax
;
; Restore the current processor to the input state.
; Example using X86 instructions.
; assumes normal X86 Assembly to C calling convention
;
RestoreProcessorState proc
        mov     eax, [esp + 4]  ; get input state from the stack
        push    eax             ; place it on the stack
        popfd                   ; now get it into current processor's state
        ret                     ; return to caller
RestoreProcessorState endp
/*
*Name:
*buff_t *Getbuffer (void)
*
*Description:
*This routine returns a buffer to be used for any of several purposes.
*
*Values returned:
*a pointer to a buffer, NULL reports an error
*/
buff_t *Getbuffer (void)
{
buff_t *buffp;
LONG flags, CPUNumber = 0, j=0;
LONG NumLocalbuff = 0, CPUNumberOnEntry = -1, CurrAvail = 0;
flags = DisableProcessorAndSaveState( );
CPUNumber = GetCPUNumber( );  /* get the CPU running on */
CPUNumberOnEntry = CPUNumber;
if (buff_FreeLocQueue_Count[CPUNumber] == -1) /* test if need to alloc buffers for
1st time */
goto DistributeLocalbuffQ;  /* re-distribute buffers to Local Queue */
GettheBuffer:
if ( (buffp = buff_FreeLocQueue_Head[CPUNumber]) != NULL)
{
/* take it out of local free queue */
buff_FreeLocQueue_Count[CPUNumber]--;
buff_FreeLocQueue_Head[CPUNumber] = buffp->nextLink;
if (buff_FreeLocQueue_Head[CPUNumber] == NULL)  /* reset head & tail */
buff_FreeLocQueue_Tail[CPUNumber] =
(buff_t*) &buff_FreeLocQueue_Head[CPUNumber];
RestoreProcessorState (flags);
return(buffp);
}
/* Out of Local buffer Free Queue buffers,
need to allocate more to the Global Queue and from there
disperse them to the local buffer Free Queues
*/
if ( (buff_FreeQueue_Count+1) > NUMBER_ADD_BUFFERS)
CurrAvail = 1;
else
CurrAvail = 0;
if (CurrAvail)
{
/* buff_FreeQueue_Head/Tail has spare buffers,
get them from that Queue */
lock(buff_FreeQueue_Lock); /* get lock */
/* test again with the LOCK in case somebody came in ahead of us */
if ((buff_FreeQueue_Count+1) > NUMBER_ADD_BUFFERS)
{
ReDistBuffersToLocalQ (NUMBER_ADD_BUFFERS, CPUNumber);
unlock(buff_FreeQueue_Lock);  /* free lock */
goto GettheBuffer;/* run through the allocation code */
}
else
unlock(buff_FreeQueue_Lock); /* free lock */
}
/* buff_FreeQueue_Head/Tail is out of spare buffers,
try to allocate some more.
Add one extra so the NULL terminator case is never hit during
redistribution, since we may be here on a uniprocessor due to an empty queue. */
if ((buff_FreeQueue_TotalCount + (NUMBER_ADD_BUFFERS+1))
> MAXIMUM_NUMBER_OF_BUFFERS_POSSIBLE)
{
RestoreProcessorState (flags);
return (NULL);
}
/* Can now attempt to add NUMBER_ADD_BUFFERS buffers to the Global List */
lock(buff_FreeQueue_Lock); /* get lock */
for (j=1; j < (NUMBER_ADD_BUFFERS+1); j++)
{
buffp = (buff_t*) Alloc (sizeof (buff_t));
if(!buffp)  /* Out of memory */
{
unlock(buff_FreeQueue_Lock); /* free lock */
RestoreProcessorState (flags);
return (NULL);
}
/* initialize buffer fields */
buffp->nextLink = NULL;
buffp->prevLink = NULL;
buff_FreeQueue_Tail->nextLink = buffp; /* link onto the Global Free Queue */
buff_FreeQueue_Tail = buffp;
buff_FreeQueue_Count++;
/* keep count of Total number of buffers allocated */
buff_FreeQueue_TotalCount++;
}
/* Have added NUMBER_ADD_BUFFERS to the Global List,
must now distribute them to the Local buffer Free Queue
and adjust the MAX COUNT for the Local buffer Free Queue.
*/
buff_FreeQueue_MaxLocalCount =
(buff_FreeQueue_TotalCount - RSVD_NUMBER_BUFFERS) /
MAXIMUM_NUMBER_OF_PROCESSORS;
buff_FreeLocQueue_MaxCount[CPUNumber] += NUMBER_ADD_BUFFERS;
/* Now distribute the buffers amongst the local queue */
ReDistBuffersToLocalQ (NUMBER_ADD_BUFFERS, CPUNumber);
unlock(buff_FreeQueue_Lock);  /* free lock */
goto GettheBuffer;/* run through the allocation code */
/* distribute buffers to Local Queue for 1st time */
DistributeLocalbuffQ:
lock(buff_FreeQueue_Lock); /* get lock */
if (buff_FreeQueue_Count > buff_FreeLocQueue_MaxCount[CPUNumber])
NumLocalbuff = buff_FreeLocQueue_MaxCount[CPUNumber];
else
NumLocalbuff = buff_FreeQueue_Count / 2;/* take half of what's left */
buff_FreeLocQueue_Count[CPUNumber] = 0;/* set to zero */
if (NumLocalbuff)/* parcel out buffers */
ReDistBuffersToLocalQ (NumLocalbuff, CPUNumber);
unlock(buff_FreeQueue_Lock); /* free lock */
goto GettheBuffer;/* run through the allocation code */
} /* end Getbuffer */
/*
*Name:
*void  ReturnBuffer (buff_t *pbuff)
*
*Description:
*This routine returns a previously allocated buff_t buffer
*to the current processor's buffer pool.
*
*Parameters in:
*pbuff - a pointer to a buffer to return to the queue
*
*/
void ReturnBuffer (buff_t *pbuff)
{
LONG  flags, CPUNumber = 0, j = 0, NumLocalbuff = 0;
buff_t *buff_tmp1 = NULL, *buff_tmp2 = NULL;
flags = DisableProcessorAndSaveState( );
CPUNumber = GetCPUNumber( );  /* Get Processor running on */
if (buff_FreeLocQueue_Count[CPUNumber] == -1) /* 1st time through, need setup */
{
lock(buff_FreeQueue_Lock); /* get lock */
if (buff_FreeQueue_Count > buff_FreeLocQueue_MaxCount[CPUNumber])
NumLocalbuff = buff_FreeLocQueue_MaxCount[CPUNumber];
else
NumLocalbuff = buff_FreeQueue_Count / 2;/* take half of what's left */
buff_FreeLocQueue_Count[CPUNumber] = 0;/* set to zero */
if (NumLocalbuff)/* parcel out buffers */
ReDistBuffersToLocalQ (NumLocalbuff, CPUNumber);
unlock(buff_FreeQueue_Lock); /* free lock */
}
pbuff->nextLink = NULL;
pbuff->prevLink = NULL;
if (buff_FreeLocQueue_Tail[CPUNumber] ==
(buff_t*) &buff_FreeLocQueue_Head[CPUNumber])
{
/* queue empty, place buffer as first and only element */
buff_FreeLocQueue_Head[CPUNumber] = pbuff;
buff_FreeLocQueue_Tail[CPUNumber] = pbuff;
}
else
{
/* push buffer onto the head of the local free queue */
pbuff->nextLink = buff_FreeLocQueue_Head[CPUNumber];
buff_FreeLocQueue_Head[CPUNumber] = pbuff;
}
buff_FreeLocQueue_Count[CPUNumber]++;
/* Check if have too many buffers on Local Queue,
if so return specific number to Global Queue. */
if (buff_FreeLocQueue_Count[CPUNumber] >
buff_FreeLocQueue_MaxCount[CPUNumber])
{
/* Need to shed buffers to Global Queue */
if ((buff_FreeLocQueue_Count[CPUNumber] - NUMBER_ADD_BUFFERS) > 0)
{
if (buff_FreeLocQueue_Head[CPUNumber] != NULL) /* get 1st in link */
{
lock(buff_FreeQueue_Lock); /* get lock */
buff_tmp1 = buff_FreeLocQueue_Head[CPUNumber]; /* get 1st in link */
buff_tmp2 = buff_tmp1;/* and keep it */
for (j = 1;j < NUMBER_ADD_BUFFERS;j++)
buff_tmp1 = buff_tmp1->nextLink; /* move down the link */
/* remove Number of extra elements from Link */
buff_FreeLocQueue_Head[CPUNumber] = buff_tmp1->nextLink;
buff_FreeLocQueue_Count[CPUNumber] =
buff_FreeLocQueue_Count[CPUNumber] - NUMBER_ADD_BUFFERS;
buff_tmp1->nextLink = NULL;  /* terminate the Link */
/* add removed elements to the Global Queue */
buff_FreeQueue_Tail->nextLink = buff_tmp2;
buff_FreeQueue_Tail = buff_tmp1; /* tmp1 is the last element of the moved run */
buff_FreeQueue_Count += NUMBER_ADD_BUFFERS;
if (buff_FreeLocQueue_Head[CPUNumber] == NULL) /* reset Head / Tail */
buff_FreeLocQueue_Tail[CPUNumber] =
(buff_t*) &buff_FreeLocQueue_Head[CPUNumber];
unlock(buff_FreeQueue_Lock); /* free lock */
}
}
}
RestoreProcessorState (flags);
} /* end ReturnBuffer */
/*
*Name:
*void ProcStatusOnLine (LONG  CPUNumber);
*
*Description:
*Function is notified when a Processor comes ONLINE
*which in turn calls functions to redistribute the buffers among the
*Local buffer Free Queues based on the number of processors
*
*Parameters in:
*CPUNumber Number that identifies CPU that went ONLINE
*
*Values returned:none
*
*/
void ProcStatusOnLine (LONG CPUNumber)
{
LONG flags = 0, i = 0, CPUMask = 1;
LONG NumProcs = 0, CPUsActiveMask = 0;
flags = DisableProcessorAndSaveState( );
lock(buff_FreeQueue_Lock); /* get lock */
/* Set Local Queue Parameters */
buff_FreeLocQueue_Tail[CPUNumber] =
(buff_t*) &buff_FreeLocQueue_Head[CPUNumber];
buff_FreeLocQueue_Head[CPUNumber] = NULL;
/* set counter to no buffers alloc'd for Local Queue yet */
buff_FreeLocQueue_Count[CPUNumber] = -1;
/* Now update max allowed for Local Free buffer's queue based on
equal share of all buffers allocated for Global queue
buff_FreeQueue_Head/Tail so far. */
NumProcs = MAXIMUM_NUMBER_OF_PROCESSORS;
buff_FreeQueue_MaxLocalCount =
(buff_FreeQueue_TotalCount - RSVD_NUMBER_BUFFERS) / NumProcs;
GetActiveCPUMap (&CPUsActiveMask);
for (i=0; i < MAXIMUM_NUMBER_OF_PROCESSORS; i++)
{
if (CPUsActiveMask & CPUMask)  /* set max allowed on local queues */
buff_FreeLocQueue_MaxCount[i] = buff_FreeQueue_MaxLocalCount;
CPUMask = CPUMask << 1;
}
unlock(buff_FreeQueue_Lock); /* free lock */
RestoreProcessorState (flags);
} /* end ProcStatusOnLine */
/*
*Name:
*void ProcStatusOffLine (LONG CPUNumber);
*
*Description:
*Function is notified when a Processor goes OFFLINE,
*which in turn calls functions to redistribute the buffers among the
*Local buffer Free Queues based on the number of processors
*
*Parameters in:
*CPUNumber Number that identifies CPU that went OFFLINE
*
*Values returned:  none
*
*/
void ProcStatusOffLine (LONG CPUNumber)
{
LONG flags = 0, i = 0, CPUMask = 1;
LONG NumProcs = 0, NumLocalbuff = 0, CPUsActiveMask = 0, NumExtra = 0;
buff_t *pbuff = NULL, *pbuff_tail = NULL;
flags = DisableProcessorAndSaveState( );
lock(buff_FreeQueue_Lock); /* get lock */
/* return all Free local buffers to Global list */
if (buff_FreeLocQueue_Head[CPUNumber])  /* test if have any */
{
/* get links and add to global links */
pbuff = buff_FreeLocQueue_Head[CPUNumber];
pbuff_tail = buff_FreeLocQueue_Tail[CPUNumber];
/*  Reset Head / Tail pointers  */
buff_FreeLocQueue_Tail[CPUNumber] =
(buff_t*) &buff_FreeLocQueue_Head[CPUNumber];
buff_FreeLocQueue_Head[CPUNumber] = NULL;
NumExtra = buff_FreeLocQueue_Count[CPUNumber];
buff_FreeLocQueue_Count[CPUNumber] = 0; /* reset counter */
/*  Add buffers removed from Local Queue to Global Queue */
buff_FreeQueue_Tail->nextLink = pbuff;
buff_FreeQueue_Tail = pbuff_tail;
buff_FreeQueue_Count += NumExtra;
}
/* Set minimum value in case take Interrupt before get an
Event that a processor has come on line  */
buff_FreeLocQueue_MaxCount[CPUNumber] = RSVD_NUMBER_BUFFERS * 2;
/* Now update other Local Free buffer's queues that more buffers are
available from Global queue buff_FreeQueue_Head/Tail/Count */
NumProcs = MAXIMUM_NUMBER_OF_PROCESSORS;
buff_FreeQueue_MaxLocalCount =
(buff_FreeQueue_TotalCount - RSVD_NUMBER_BUFFERS) / NumProcs;
NumLocalbuff = NumExtra / NumProcs;
GetActiveCPUMap (&CPUsActiveMask);
for (i=0; i < MAXIMUM_NUMBER_OF_PROCESSORS; i++)
{
if (CPUsActiveMask & CPUMask) /* increase max allowed on other local queues */
buff_FreeLocQueue_MaxCount[i] += NumLocalbuff;
CPUMask = CPUMask << 1;
}
unlock(buff_FreeQueue_Lock); /* free lock */
RestoreProcessorState (flags);
} /* end ProcStatusOffLine */
/*
*Name:
*LONG ReDistBuffersToLocalQ (LONG NumXtra, LONG CPU);
*
*Description:
*Function redistributes the buffers from the Global Free Queue
*to the Local buffer Free Queue based on the processor's number input
*
*Parameters in:
*NumXtra  Number of buffers to place on the Local Free Queue
*CPU      CPU number to add buffers to
*
*Values returned:  0  Success
*                  1  Failure
*
*Implied parameters:  buff_FreeQueue_Head, buff_FreeQueue_Tail
*buff_FreeQueue_Count, buff_FreeQueue_TotalCount
*buff_FreeLocQueue_Head[ ], buff_FreeLocQueue_Tail[ ]
*
*Assumes protected by Mutex.
*
*/
LONG ReDistBuffersToLocalQ (LONG NumXtra, LONG CPU)
{
buff_t *buff = NULL, *buff_tmp = NULL;
LONG  j = 0;
if (NumXtra && ((buff = buff_FreeQueue_Head) != NULL))
{
for (j = 1; j < NumXtra; j++)
buff = buff->nextLink;
buff_tmp = buff_FreeQueue_Head;
buff_FreeQueue_Head = buff->nextLink;
buff_FreeQueue_Count = buff_FreeQueue_Count - NumXtra;
buff->nextLink = NULL;
buff_FreeLocQueue_Tail[CPU]->nextLink = buff_tmp;
buff_FreeLocQueue_Tail[CPU] = buff;
buff_FreeLocQueue_Count[CPU] += NumXtra;
if (buff_FreeQueue_Head == NULL)
buff_FreeQueue_Tail = (buff_t*) &buff_FreeQueue_Head;
}
return (0);
} /* end ReDistBuffersToLocalQ */
SUMMARY
In summary, the present invention provides a novel system and method for managing resources in a cluster. Remote memory probes and emergency messages passed through a shared disk can be used to manage the nodes themselves, as well as the interconnects and the system area network switches. Minimal locking, in concert with careful use of interrupts, can be used to manage sharable resources such as memory buffers when a node or processor is taken down, comes up, or otherwise needs to obtain or release those resources.
Although particular methods embodying the present invention are expressly illustrated and described herein, it will be appreciated that apparatus and article embodiments may be formed according to methods of the present invention. Unless otherwise expressly indicated, the description herein of methods of the present invention therefore extends to corresponding apparatus and articles, and the description of apparatus and articles of the present invention extends likewise to corresponding methods.
The invention may be embodied in other specific forms without departing from its essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. Any explanations provided herein of the scientific principles employed in the present invention are illustrative only. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims (15)

What is claimed and desired to be secured by patent is:
1. A method for managing resources in a cluster, the method comprising the steps of:
determining that reliable communications with a cluster node over a system area network has failed, the cluster node including a memory; and
updating a node record stored at an emergency message location on a shared non-volatile storage device to remove the node from the cluster.
2. The method of claim 1, wherein the determining step uses a value resulting from an attempt to remotely read the node's memory.
3. A computer system comprising:
at least two interconnected nodes capable of presenting a uniform system image such that an application program views the interconnected nodes as a single computing platform, the nodes including respective memories;
a management means for managing computational resources for use by the nodes; and
a shared nonvolatile storage device, wherein the management means comprises an accessing means for accessing an emergency message location on the shared nonvolatile storage device in response to detection of a possible node failure.
4. The system of claim 3, wherein the accessing means comprises a channel to a shared disk.
5. The system of claim 3, wherein at least one of the nodes is a special purpose graphics node.
6. The system of claim 3, wherein at least one of the nodes is a special purpose signal processing node.
7. A computer system comprising:
at least two interconnected nodes capable of presenting a uniform system image such that an application program views the interconnected nodes as a single computing platform;
a management means for managing computational resources for use by the nodes; and
a shared nonvolatile storage device, wherein the management means comprises an accessing means for accessing an emergency message location on the shared nonvolatile storage device in response to detection of a possible node failure, wherein the emergency message location is specified as a predetermined sector location on a disk.
8. A computer system comprising:
at least two interconnected nodes capable of presenting a uniform system image such that an application program views the interconnected nodes as a single computing platform;
a management means for managing computational resources for use by the nodes; and
a shared nonvolatile storage device, wherein the management means comprises an accessing means for accessing an emergency message location on the shared nonvolatile storage device in response to detection of a possible node failure, wherein the emergency message location is specified as a predetermined file.
9. A computer system comprising:
at least two interconnected nodes capable of presenting a uniform system image such that an application program views the interconnected nodes as a single computing platform;
a management means for managing computational resources for use by the nodes; and
a shared nonvolatile storage device, wherein the management means comprises an accessing means for accessing an emergency message location on the shared nonvolatile storage device in response to detection of a possible node failure, wherein the emergency message location stores an emergency communication structure that identifies node epochs.
10. A computer system comprising:
at least two interconnected nodes capable of presenting a uniform system image such that an application program views the interconnected nodes as a single computing platform;
a management means for managing computational resources for use by the nodes; and
a shared nonvolatile storage device, wherein the management means comprises an accessing means for accessing an emergency message location on the shared nonvolatile storage device in response to detection of a possible node failure, wherein the emergency message location stores an emergency communication structure that identifies node roles.
11. A computer system comprising:
at least two interconnected nodes capable of presenting a uniform system image such that an application program views the interconnected nodes as a single computing platform;
a management means for managing computational resources for use by the nodes; and
a shared nonvolatile storage device, wherein the management means comprises an accessing means for accessing an emergency message location on the shared nonvolatile storage device in response to detection of a possible node failure, wherein the emergency message location stores an emergency communication structure that identifies a cluster master node.
12. A computer system comprising:
at least two interconnected nodes capable of presenting a uniform system image such that an application program views the interconnected nodes as a single computing platform;
a management means for managing computational resources for use by the nodes; and
a shared nonvolatile storage device, wherein the management means comprises an accessing means for accessing an emergency message location on the shared nonvolatile storage device in response to detection of a possible node failure, wherein the emergency message location stores an emergency communication structure that contains a status value indicating that a particular node should shut down a particular task.
13. A computer system comprising:
at least two interconnected nodes capable of presenting a uniform system image such that an application program views the interconnected nodes as a single computing platform;
a management means for managing computational resources for use by the nodes; and
a shared nonvolatile storage device, wherein the management means comprises an accessing means for accessing an emergency message location on the shared nonvolatile storage device in response to detection of a possible node failure, wherein the emergency message location stores an emergency communication structure that contains a status value indicating that a particular node should shut down all tasks.
14. A computer system comprising:
at least two interconnected nodes capable of presenting a uniform system image such that an application program views the interconnected nodes as a single computing platform;
a management means for managing computational resources for use by the nodes; and
a shared nonvolatile storage device, wherein the management means comprises an accessing means for accessing an emergency message location on the shared nonvolatile storage device in response to detection of a possible node failure, wherein the emergency message location stores an emergency communication structure that contains a status value indicating that a particular node should yield control to a debugger.
15. A computer storage medium having a configuration that represents data and instructions which will cause at least a portion of a computer system to perform method steps for managing resources in a cluster computing system, the method steps comprising the steps of determining that reliable communication with a cluster node over a system area network has failed, said cluster node including a memory, and updating a node record stored at an emergency message location on a shared nonvolatile storage device to remove the node from the cluster.
US09/574,094 | 1997-02-21 | 2000-05-18 | Resource management in a clustered computer system | Expired - Lifetime | US6353898B1 (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
US09/574,094 | US6353898B1 (en) | 1997-02-21 | 2000-05-18 | Resource management in a clustered computer system

Applications Claiming Priority (3)

Application Number | Priority Date | Filing Date | Title
US3825197P | 1997-02-21 | 1997-02-21 |
US09/024,011 | US6151688A (en) | 1997-02-21 | 1998-02-14 | Resource management in a clustered computer system
US09/574,094 | US6353898B1 (en) | 1997-02-21 | 2000-05-18 | Resource management in a clustered computer system

Related Parent Applications (1)

Application Number | Title | Priority Date | Filing Date
US09/024,011 | Division | US6151688A (en) | 1997-02-21 | 1998-02-14

Publications (1)

Publication Number | Publication Date
US6353898B1 | 2002-03-05

Family

ID=26697912

Family Applications (3)

Application Number | Title | Priority Date | Filing Date
US09/024,011 | Expired - Lifetime | US6151688A (en) | 1997-02-21 | 1998-02-14 | Resource management in a clustered computer system
US09/574,094 | Expired - Lifetime | US6353898B1 (en) | 1997-02-21 | 2000-05-18 | Resource management in a clustered computer system
US09/574,093 | Expired - Lifetime | US6338112B1 (en) | 1997-02-21 | 2000-05-18 | Resource management in a clustered computer system

Family Applications Before (1)

Application Number | Title | Priority Date | Filing Date
US09/024,011 | Expired - Lifetime | US6151688A (en) | 1997-02-21 | 1998-02-14 | Resource management in a clustered computer system

Family Applications After (1)

Application Number | Title | Priority Date | Filing Date
US09/574,093 | Expired - Lifetime | US6338112B1 (en) | 1997-02-21 | 2000-05-18 | Resource management in a clustered computer system

Country Status (1)

Country | Link
US (3) | US6151688A (en)

Cited By (160)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
US20010029519A1 (en)*1999-12-032001-10-11Michael HallinanResource allocation in data processing systems
US20020040450A1 (en)*2000-10-032002-04-04Harris Jeremy GrahamMultiple trap avoidance mechanism
US20020040451A1 (en)*2000-10-032002-04-04Harris Jeremy GrahamResource access control
US20020040422A1 (en)*2000-10-032002-04-04Harris Jeremy GrahamResource access control for a processor
US20020042895A1 (en)*2000-10-032002-04-11Harris Jeremy GrahamMemory access control
US20020055989A1 (en)*2000-11-082002-05-09Stringer-Calvert David W.J.Methods and apparatus for scalable, distributed management of virtual private networks
US20020083036A1 (en)*2000-12-212002-06-27Price Daniel M.Method of improving the availability of a computer clustering system through the use of a network medium link state function
US6449641B1 (en)*1997-10-212002-09-10Sun Microsystems, Inc.Determining cluster membership in a distributed computer system
US20020133655A1 (en)*2001-03-162002-09-19Ohad FalikSharing of functions between an embedded controller and a host processor
US20020133675A1 (en)*2001-03-142002-09-19Kabushiki Kaisha ToshibaCluster system, memory access control method, and recording medium
US20020184241A1 (en)*2001-05-312002-12-05Yu-Fu WuSystem and method for shared directory management
US20020194370A1 (en)*2001-05-042002-12-19Voge Brendan AlexanderReliable links for high performance network protocols
US20030014507A1 (en)*2001-03-132003-01-16International Business Machines CorporationMethod and system for providing performance analysis for clusters
WO2003012646A1 (en)*2001-08-012003-02-13Valaran CorporationMethod and system for multimode garbage collection
US20030050993A1 (en)*2001-09-132003-03-13International Business Machines CorporationEntity self-clustering and host-entity communication as via shared memory
US20030112232A1 (en)*2000-03-312003-06-19Nektarios GeorgalasResource creation method and tool
US6654801B2 (en)*1999-01-042003-11-25Cisco Technology, Inc.Remote system administration and seamless service integration of a data communication network management system
US6658417B1 (en)*1997-12-312003-12-02International Business Machines CorporationTerm-based methods and apparatus for access to files on shared storage devices
US20030226013A1 (en)*2002-05-312003-12-04Sri InternationalMethods and apparatus for scalable distributed management of wireless virtual private networks
US6665587B2 (en)*2000-11-292003-12-16Xerox CorporationProduct template for a personalized printed product incorporating workflow sequence information
US6681390B2 (en)*1999-07-282004-01-20Emc CorporationUpgrade of a program
US6697970B1 (en)*2000-07-142004-02-24Nortel Networks LimitedGeneric fault management method and system
US20040039816A1 (en)*2002-08-232004-02-26International Business Machines CorporationMonitoring method of the remotely accessible resources to provide the persistent and consistent resource states
US20040068677A1 (en)*2002-10-032004-04-08International Business Machines CorporationDiagnostic probe management in data processing systems
US6725311B1 (en)2000-09-142004-04-20Microsoft CorporationMethod and apparatus for providing a connection-oriented network over a serial bus
US6757715B1 (en)*1998-09-112004-06-29L.V. Partners, L.P.Bar code scanner and software interface interlock for performing encrypted handshaking and for disabling the scanner in case of handshaking operation failure
US20040139371A1 (en)*2003-01-092004-07-15Wilson Craig Murray MansellPath commissioning analysis and diagnostic tool
US20040139365A1 (en)*2002-12-272004-07-15Hitachi, Ltd.High-availability disk control device and failure processing method thereof and high-availability disk subsystem
US20040153496A1 (en)*2003-01-312004-08-05Smith Peter AshwoodMethod for computing a backup path for protecting a working path in a data transport network
US20040158687A1 (en)*2002-05-012004-08-12The Board Of Governors For Higher Education, State Of Rhode Island And Providence PlantationsDistributed raid and location independence caching system
US20040158777A1 (en)*2003-02-122004-08-12International Business Machines CorporationScalable method of continuous monitoring the remotely accessible resources against the node failures for very large clusters
US20040187047A1 (en)*2003-03-192004-09-23Rathunde Dale FrankMethod and apparatus for high availability distributed processing across independent networked computer fault groups
US20040193388A1 (en)*2003-03-062004-09-30Geoffrey OuthredDesign time validation of systems
US20040194098A1 (en)*2003-03-312004-09-30International Business Machines CorporationApplication-based control of hardware resource allocation
US20040199806A1 (en)*2003-03-192004-10-07Rathunde Dale FrankMethod and apparatus for high availability distributed processing across independent networked computer fault groups
US20040199811A1 (en)*2003-03-192004-10-07Rathunde Dale FrankMethod and apparatus for high availability distributed processing across independent networked computer fault groups
US20040199804A1 (en)*2003-03-192004-10-07Rathunde Dale FrankMethod and apparatus for high availability distributed processing across independent networked computer fault groups
US20040220931A1 (en)*2003-04-292004-11-04Guthridge D. ScottDiscipline for lock reassertion in a distributed file system
US6820150B1 (en)2001-04-112004-11-16Microsoft CorporationMethod and apparatus for providing quality-of-service delivery facilities over a bus
US20040230762A1 (en)*2003-05-152004-11-18International Business Machines CorporationMethods, systems, and media for managing dynamic storage
US20040268358A1 (en)*2003-06-302004-12-30Microsoft CorporationNetwork load balancing with host status information
US20040267920A1 (en)*2003-06-302004-12-30Aamer HydrieFlexible network load balancing
US20050021696A1 (en)*2000-10-242005-01-27Hunt Galen C.System and method providing automatic policy enforcement in a multi-computer service application
US20050038833A1 (en)*2003-08-142005-02-17Oracle International CorporationManaging workload by service
US20050038772A1 (en)*2003-08-142005-02-17Oracle International CorporationFast application notification in a clustered computing system
US20050038801A1 (en)*2003-08-142005-02-17Oracle International CorporationFast reorganization of connections in response to an event in a clustered computing system
US20050055435A1 (en)*2003-06-302005-03-10Abolade GbadegesinNetwork load balancing with connection manipulation
US20050055418A1 (en)*2001-10-292005-03-10Sun Microsystems IncMethod to manage high availability equipments
US6871222B1 (en)1999-05-282005-03-22Oracle International CorporationQuorumless cluster using disk-based messaging
US20050071126A1 (en)*2003-09-252005-03-31Hitachi Global Storage Technologies Netherlands B. V.Computer program product for performing testing of a simulated storage device within a testing simulation environment
US20050071125A1 (en)*2003-09-252005-03-31Hitachi Global Storage Technologies Netherlands B.V.Method for performing testing of a simulated storage device within a testing simulation environment
US20050080963A1 (en)*2003-09-252005-04-14International Business Machines CorporationMethod and system for autonomically adaptive mutexes
US20050091078A1 (en)*2000-10-242005-04-28Microsoft CorporationSystem and method for distributed management of shared computers
US20050102388A1 (en)*2000-10-242005-05-12Microsoft CorporationSystem and method for restricting data transfers and managing software components of distributed computers
US20050102538A1 (en)*2000-10-242005-05-12Microsoft CorporationSystem and method for designing a logical model of a distributed computer system and deploying physical resources according to the logical model
US20050132379A1 (en)*2003-12-112005-06-16Dell Products L.P.Method, system and software for allocating information handling system resources in response to high availability cluster fail-over events
US20050138479A1 (en)*2003-11-202005-06-23International Business Machines CorporationMethod and apparatus for device error log persistence in a logical partitioned data processing system
US20050160431A1 (en)*2002-07-292005-07-21Oracle CorporationMethod and mechanism for debugging a series of related events within a computer system
US20050193259A1 (en)*2004-02-172005-09-01Martinez Juan I.System and method for reboot reporting
US20050210469A1 (en)*2004-03-042005-09-22International Business Machines CorporationMechanism for dynamic workload rebalancing in a multi-nodal computer system
US20050210470A1 (en)*2004-03-042005-09-22International Business Machines CorporationMechanism for enabling the distribution of operating system resources in a multi-node computer system
US20050229028A1 (en)*1997-05-132005-10-13Bruce FindlaySystem and method for communicating a software-generated pulse waveform between two servers in a network
US20050235289A1 (en)*2004-03-312005-10-20Fabio BarillariMethod for allocating resources in a hierarchical data processing system
US20050256971A1 (en)*2003-08-142005-11-17Oracle International CorporationRuntime load balancing of work across a clustered computing system using current service performance levels
US20050262183A1 (en)*2003-08-142005-11-24Oracle International CorporationConnection pool use of runtime load balancing service performance advisories
US20050268154A1 (en)*2000-12-062005-12-01Novell, Inc.Method for detecting and resolving a partition condition in a cluster
US6973484B1 (en)*2000-12-292005-12-063Pardata, Inc.Method of communicating data in an interconnect system
US20060059565A1 (en)*2004-08-262006-03-16Novell, Inc.Allocation of network resources
US7020695B1 (en)*1999-05-282006-03-28Oracle International CorporationUsing a cluster-wide shared repository to provide the latest consistent definition of the cluster (avoiding the partition-in time problem)
US20060080367A1 (en)*2004-10-072006-04-13Microsoft CorporationMethod and system for limiting resource usage of a version store
US7055172B2 (en)2002-08-082006-05-30International Business Machines CorporationProblem determination method suitable for use when a filter blocks SNMP access to network components
US20060149994A1 (en)*2000-09-062006-07-06Srikrishna KurapatiData replication for redundant network components
US7076783B1 (en)1999-05-282006-07-11Oracle International CorporationProviding figure of merit vote from application executing on a partitioned cluster
US7093288B1 (en)2000-10-242006-08-15Microsoft CorporationUsing packet filters and network virtualization to restrict network communications
US7124320B1 (en)2002-08-062006-10-17Novell, Inc.Cluster failover via distributed configuration repository
US20060271341A1 (en)*2003-03-062006-11-30Microsoft CorporationArchitecture for distributed computing system and automated design, deployment, and management of distributed applications
US20070006218A1 (en)*2005-06-292007-01-04Microsoft CorporationModel-based virtual system provisioning
US7165190B1 (en)2002-07-292007-01-16Oracle International CorporationMethod and mechanism for managing traces within a computer system
US7165097B1 (en)*2000-09-222007-01-16Oracle International CorporationSystem for distributed error reporting and user interaction
US20070016393A1 (en)*2005-06-292007-01-18Microsoft CorporationModel-based propagation of attributes
CN1296850C (en)*2003-12-102007-01-24中国科学院计算技术研究所Partition lease method for cluster system resource management
US7200588B1 (en)2002-07-292007-04-03Oracle International CorporationMethod and mechanism for analyzing trace data using a database management system
US20070083641A1 (en)*2005-10-072007-04-12Oracle International CorporationUsing a standby data storage system to detect the health of a cluster of data storage servers
US20070101191A1 (en)*2005-10-312007-05-03Nec CorporationMemory dump method, computer system, and memory dump program
US20070112847A1 (en)*2005-11-022007-05-17Microsoft CorporationModeling IT operations/policies
US7243374B2 (en)2001-08-082007-07-10Microsoft CorporationRapid application security threat analysis
US20070168740A1 (en)*2006-01-102007-07-19Telefonaktiebolaget Lm Ericsson (Publ)Method and apparatus for dumping a process memory space
US20070250695A1 (en)*1998-09-112007-10-25Lv Partners, L.P.Automatic configuration of equipment software
US20070255757A1 (en)*2003-08-142007-11-01Oracle International CorporationMethods, systems and software for identifying and managing database work
US7317734B2 (en)2001-01-262008-01-08Microsoft CorporationMethod and apparatus for emulating ethernet functionality over a serial bus
US7343441B1 (en)1999-12-082008-03-11Microsoft CorporationMethod and apparatus of remote computer management
US7346811B1 (en)2004-08-132008-03-18Novell, Inc.System and method for detecting and isolating faults in a computer collaboration environment
AU2004239607B2 (en)*2003-01-172008-03-20Insitu, Inc.Compensation for overflight velocity when stabilizing an airborne camera
US7376937B1 (en)2001-05-312008-05-20Oracle International CorporationMethod and mechanism for using a meta-language to define and analyze traces
US7380239B1 (en)2001-05-312008-05-27Oracle International CorporationMethod and mechanism for diagnosing computer applications using traces
US20080134210A1 (en)*2004-03-302008-06-05Nektarios GeorgalasDistributed Computer
US20080183961A1 (en)*2001-05-012008-07-31The Board Of Governors For Higher Education, State Of Rhode Island And Providence PlantationsDistributed raid and location independent caching system
US20080222642A1 (en)*2007-03-082008-09-11Oracle International CorporationDynamic resource profiles for clusterware-managed resources
US20080228923A1 (en)*2007-03-132008-09-18Oracle International CorporationServer-Side Connection Resource Pooling
US20080275998A1 (en)*1998-09-112008-11-06Lv Partners, L.P.Software downloading using a television broadcast channel
US7451359B1 (en)*2002-11-272008-11-11Oracle International Corp.Heartbeat mechanism for cluster systems
US20080288622A1 (en)*2007-05-182008-11-20Microsoft CorporationManaging Server Farms
US20090037585A1 (en)*2003-12-302009-02-05Vladimir MiloushevApparatus, method and system for aggregrating computing resources
US7490089B1 (en)*2004-06-012009-02-10Sanbolic, Inc.Methods and apparatus facilitating access to shared storage among multiple computers
US7536478B2 (en)1998-09-112009-05-19Rpx-Lv Acquisition LlcMethod and apparatus for opening and launching a web browser in response to an audible signal
US7546323B1 (en)*2004-09-302009-06-09Emc CorporationSystem and methods for managing backup status reports
US7567504B2 (en)2003-06-302009-07-28Microsoft CorporationNetwork load balancing with traffic routing
US7571439B1 (en)*2002-05-312009-08-04Teradata Us, Inc.Synchronizing access to global resources
US7574343B2 (en)2000-10-242009-08-11Microsoft CorporationSystem and method for logical modeling of distributed computer systems
US7596786B2 (en)1998-09-112009-09-29Rpx-Lv Acquisition LlcMethod and apparatus for utilizing an existing product code to issue a match to a predetermined location on a global network
US7613822B2 (en)2003-06-302009-11-03Microsoft CorporationNetwork load balancing with session information
US7636788B2 (en)1998-09-112009-12-22Rpx-Lv Acquisition LlcMethod and apparatus for matching a user's use profile in commerce with a broadcast
US20090323640A1 (en)*2008-06-262009-12-31Qualcomm IncorporatedFair resource sharing in wireless communications
US20090327798A1 (en)*2008-06-272009-12-31Microsoft CorporationCluster Shared Volumes
US7644161B1 (en)*2005-01-282010-01-05Hewlett-Packard Development Company, L.P.Topology for a hierarchy of control plug-ins used in a control system
US7669235B2 (en)2004-04-302010-02-23Microsoft CorporationSecure domain join for computing devices
US7684964B2 (en)2003-03-062010-03-23Microsoft CorporationModel and system state synchronization
US7689676B2 (en)2003-03-062010-03-30Microsoft CorporationModel-based policy application
US7739353B2 (en)1998-09-112010-06-15Rpx-Lv Acquisition LlcLaunching a web site using a personal device
US7778422B2 (en)2004-02-272010-08-17Microsoft CorporationSecurity associations for devices
US20100229177A1 (en)*2004-03-042010-09-09International Business Machines CorporationReducing Remote Memory Accesses to Shared Data in a Multi-Nodal Computer System
US7797147B2 (en)2005-04-152010-09-14Microsoft CorporationModel-based system monitoring
US7802144B2 (en)2005-04-152010-09-21Microsoft CorporationModel-based system monitoring
US7822829B2 (en)1998-09-112010-10-26Rpx-Lv Acquisition LlcMethod for interfacing scanned product information with a source for the product over a global network
US7819316B2 (en)1998-09-112010-10-26Lv Partners, L.P.Portable scanner for enabling automatic commerce transactions
US7836329B1 (en)2000-12-292010-11-163Par, Inc.Communication link protocol optimized for storage architectures
US7835896B1 (en)*1998-04-062010-11-16Rode Consulting, Inc.Apparatus for evaluating and demonstrating electronic circuits and components
US7853960B1 (en)*2005-02-252010-12-14Vmware, Inc.Efficient virtualization of input/output completions for a virtual device
US7870189B2 (en)1998-09-112011-01-11Rpx-Lv Acquisition LlcInput device having positional and scanning capabilities
US7904344B2 (en)1998-09-112011-03-08Rpx-Lv Acquisition LlcAccessing a vendor web site using personal account information retrieved from a credit card company web site
US7913105B1 (en)2006-09-292011-03-22Symantec Operating CorporationHigh availability cluster with notification of resource state changes
US7925780B2 (en)1998-09-112011-04-12Rpx-Lv Acquisition LlcMethod for connecting a wireless device to a remote location on a network
US7979576B2 (en)1998-09-112011-07-12Rpx-Lv Acquisition LlcMethod and apparatus for connecting a user location to one of a plurality of destination locations on a network
US8005985B2 (en)1998-09-112011-08-23RPX-LV Acquisition LLCMethod and apparatus for utilizing an audibly coded signal to conduct commerce over the internet
US8108715B1 (en)*2010-07-022012-01-31Symantec CorporationSystems and methods for resolving split-brain scenarios in computer clusters
US20120151265A1 (en)*2010-12-092012-06-14Ibm CorporationSupporting cluster level system dumps in a cluster environment
US8296440B2 (en)1998-09-112012-10-23Rpx CorporationMethod and apparatus for accessing a remote location with an optical reader having a programmable memory system
US20120331019A1 (en)*2011-06-272012-12-27Ivan SchreterReplacement policy for resource container
US8346719B2 (en)2007-05-172013-01-01Novell, Inc.Multi-node replication systems, devices and methods
US8385718B1 (en)*1998-06-082013-02-26Thomson LicensingProcess for programming actions of resources in a domestic communication network
US8458515B1 (en)2009-11-162013-06-04Symantec CorporationRaid5 recovery in a high availability object based file system
US8489728B2 (en)2005-04-152013-07-16Microsoft CorporationModel-based system monitoring
US8495323B1 (en)2010-12-072013-07-23Symantec CorporationMethod and system of providing exclusive and secure access to virtual storage objects in a virtual machine cluster
US8560273B2 (en)2008-12-232013-10-15Novell, Inc.Techniques for distributed testing
US20140059265A1 (en)*2012-08-232014-02-27Dell Products, LpFabric Independent PCIe Cluster Manager
US8788465B2 (en)2010-12-012014-07-22International Business Machines CorporationNotification of configuration updates in a cluster system
US8938062B2 (en)1995-12-112015-01-20Comcast Ip Holdings I, LlcMethod for accessing service resource items that are for use in a telecommunications system
US8943082B2 (en)2010-12-012015-01-27International Business Machines CorporationSelf-assignment of node identifier in a cluster system
US9069571B2 (en)2010-12-012015-06-30International Business Machines CorporationPropagation of unique device names in a cluster system
US9146791B2 (en)2013-03-112015-09-29International Business Machines CorporationCommunication failure source isolation in a distributed computing system
US9183148B2 (en)2013-12-122015-11-10International Business Machines CorporationEfficient distributed cache consistency
US9191505B2 (en)2009-05-282015-11-17Comcast Cable Communications, LlcStateful home phone service
US9454444B1 (en)2009-03-192016-09-27Veritas Technologies LlcUsing location tracking of cluster nodes to avoid single points of failure
US20170116315A1 (en)*2015-10-212017-04-27International Business Machines CorporationFast path traversal in a relational database-based graph structure
US20170318092A1 (en)*2016-04-292017-11-02Netapp, Inc.Location-Based Resource Availability Management in a Partitioned Distributed Storage Environment
US10069745B2 (en)2016-09-122018-09-04Hewlett Packard Enterprise Development LpLossy fabric transmitting device
US10380041B2 (en)2012-08-232019-08-13Dell Products, LpFabric independent PCIe cluster manager
US10474653B2 (en)2016-09-302019-11-12Oracle International CorporationFlexible in-memory column store placement
US11985076B1 (en)2022-12-142024-05-14Red Hat, Inc.Configuring cluster nodes for sharing network resources
US12242733B1 (en)2023-10-232025-03-04International Business Machines CorporationDetermining a memory contention state of a node

Families Citing this family (232)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
DE69817192D1 (en)*1997-09-052003-09-18Sun Microsystems Inc MORE PROCESSOR COMPUTER SYSTEM USING A GROUP PROTECTION MECHANISM
US20020152289A1 (en)*1997-09-102002-10-17Schneider Automation Inc.System and method for accessing devices in a factory automation network
US6044471A (en)1998-06-042000-03-28Z4 Technologies, Inc.Method and apparatus for securing software to reduce unauthorized use
US6311217B1 (en)*1998-06-042001-10-30Compaq Computer CorporationMethod and apparatus for improved cluster administration
US20040117664A1 (en)*1998-06-042004-06-17Z4 Technologies, Inc.Apparatus for establishing a connectivity platform for digital rights management
US20040107368A1 (en)*1998-06-042004-06-03Z4 Technologies, Inc.Method for digital rights management including self activating/self authentication software
US20040117631A1 (en)*1998-06-042004-06-17Z4 Technologies, Inc.Method for digital rights management including user/publisher connectivity interface
US20040117644A1 (en)*1998-06-042004-06-17Z4 Technologies, Inc.Method for reducing unauthorized use of software/digital content including self-activating/self-authenticating software/digital content
US6986063B2 (en)*1998-06-042006-01-10Z4 Technologies, Inc.Method for monitoring software using encryption including digital signatures/certificates
US20040117628A1 (en)*1998-06-042004-06-17Z4 Technologies, Inc.Computer readable storage medium for enhancing license compliance of software/digital content including self-activating/self-authenticating software/digital content
US6654881B2 (en)*1998-06-122003-11-25Microsoft CorporationLogical volume mount manager
EP0969377B1 (en)*1998-06-302009-01-07International Business Machines CorporationMethod of replication-based garbage collection in a multiprocessor system
US6370583B1 (en)*1998-08-172002-04-09Compaq Information Technologies Group, L.P.Method and apparatus for portraying a cluster of computer systems as having a single internet protocol image
US7099943B1 (en)*1998-08-262006-08-29Intel CorporationRegulating usage of computer resources
US6219805B1 (en)*1998-09-152001-04-17Nortel Networks LimitedMethod and system for dynamic risk assessment of software systems
IL126552A (en)*1998-10-132007-06-03Nds LtdRemote administration of smart cards for secure access systems
US6401110B1 (en)*1998-11-302002-06-04International Business Machines CorporationMethod for managing concurrent processes using dual locking
US6438497B1 (en)1998-12-112002-08-20Symyx TechnologiesMethod for conducting sensor array-based rapid materials characterization
EP1055121A1 (en)*1998-12-112000-11-29Symyx Technologies, Inc.Sensor array-based system and method for rapid materials characterization
US6477479B1 (en)1998-12-112002-11-05Symyx TechnologiesSensor array for rapid materials characterization
US6338068B1 (en)*1998-12-142002-01-08International Business Machines CorporationMethod to demonstrate software that performs database queries
US6665304B2 (en)*1998-12-312003-12-16Hewlett-Packard Development Company, L.P.Method and apparatus for providing an integrated cluster alias address
US6718382B1 (en)*1999-02-112004-04-06Yunzhou LiTechnique for detecting leaky points within a network protocol domain
US6789118B1 (en)*1999-02-232004-09-07AlcatelMulti-service network switch with policy based routing
US6980515B1 (en)1999-02-232005-12-27AlcatelMulti-service network switch with quality of access
US6674756B1 (en)1999-02-232004-01-06AlcatelMulti-service network switch with multiple virtual routers
US6717913B1 (en)*1999-02-232004-04-06AlcatelMulti-service network switch with modem pool management
US6628608B1 (en)*1999-03-262003-09-30Cisco Technology, Inc.Method and apparatus of handling data that is sent to non-existent destinations
US7774469B2 (en)*1999-03-262010-08-10Massa Michael TConsistent cluster operational data in a server cluster using a quorum of replicas
US6442713B1 (en)*1999-03-302002-08-27International Business Machines CorporationCluster node distress signal
US6748381B1 (en)*1999-03-312004-06-08International Business Machines CorporationApparatus and method for maintaining consistency of shared data resources in a cluster environment
US7756830B1 (en)1999-03-312010-07-13International Business Machines CorporationError detection protocol
US7401112B1 (en)*1999-05-262008-07-15Aspect Communication CorporationMethods and apparatus for executing a transaction task within a transaction processing system employing symmetric multiprocessors
US6634008B1 (en)*1999-06-202003-10-14Fujitsu LimitedMethodology server based integrated circuit design
CA2378088A1 (en)*1999-06-252001-01-04Massively Parallel Computing, Inc.Massive collective network processing system and methods
US6523065B1 (en)*1999-08-032003-02-18Worldcom, Inc.Method and system for maintenance of global network information in a distributed network-based resource allocation system
US7272649B1 (en)1999-09-302007-09-18Cisco Technology, Inc.Automatic hardware failure detection and recovery for distributed max sessions server
US6553387B1 (en)1999-11-292003-04-22Microsoft CorporationLogical volume configuration data management determines whether to expose the logical volume on-line, off-line request based on comparison of volume epoch numbers on each extents of the volume identifiers
US6684231B1 (en)1999-11-292004-01-27Microsoft CorporationMigration of friendly volumes
US6938256B2 (en)2000-01-182005-08-30Galactic Computing CorporationSystem for balance distribution of requests across multiple servers using dynamic metrics
US6438737B1 (en)*2000-02-152002-08-20Intel CorporationReconfigurable logic for a computer
US6766470B1 (en)*2000-03-292004-07-20Intel CorporationEnhancing reliability and robustness of a cluster
WO2001084338A2 (en)*2000-05-022001-11-08Sun Microsystems, Inc.Cluster configuration repository
US7934232B1 (en)2000-05-042011-04-26Jerding Dean FNavigation paradigm for access to television services
US7185076B1 (en)*2000-05-312007-02-27International Business Machines CorporationMethod, system and program products for managing a clustered computing environment
US7133891B1 (en)*2000-05-312006-11-07International Business Machines CorporationMethod, system and program products for automatically connecting a client to a server of a replicated group of servers
US6725261B1 (en)*2000-05-312004-04-20International Business Machines CorporationMethod, system and program products for automatically configuring clusters of a computing environment
US7487152B1 (en)2000-05-312009-02-03International Business Machines CorporationMethod for efficiently locking resources of a global data repository
US6801937B1 (en)*2000-05-312004-10-05International Business Machines CorporationMethod, system and program products for defining nodes to a cluster
US6847993B1 (en)*2000-05-312005-01-25International Business Machines CorporationMethod, system and program products for managing cluster configurations
US8538843B2 (en)2000-07-172013-09-17Galactic Computing Corporation Bvi/BcMethod and system for operating an E-commerce service provider
US6816905B1 (en)2000-11-102004-11-09Galactic Computing Corporation Bvi/BcMethod and system for providing dynamic hosted service management across disparate accounts/sites
DE60135165D1 (en)*2000-08-152008-09-11Nortel Networks Ltd Optical service agent for managing communication services in an optical communication system
WO2002017034A2 (en)*2000-08-242002-02-28Voltaire Advanced Data Security Ltd.System and method for highly scalable high-speed content-based filtering and load balancing in interconnected fabrics
US20020049859A1 (en)*2000-08-252002-04-25William BruckertClustered computer system and a method of forming and controlling the clustered computer system
US6981244B1 (en)*2000-09-082005-12-27Cisco Technology, Inc.System and method for inheriting memory management policies in a data processing systems
US6725401B1 (en)*2000-10-262004-04-20Nortel Networks LimitedOptimized fault notification in an overlay mesh network via network knowledge correlation
US7500036B2 (en)*2000-12-282009-03-03International Business Machines CorporationQuad aware locking primitive
US7577701B1 (en)2001-01-222009-08-18Insightete CorporationSystem and method for continuous monitoring and measurement of performance of computers on network
US7024479B2 (en)*2001-01-222006-04-04Intel CorporationFiltering calls in system area networks
US7389265B2 (en)*2001-01-302008-06-17Goldman Sachs & Co.Systems and methods for automated political risk management
US7181428B2 (en)*2001-01-302007-02-20Goldman, Sachs & Co.Automated political risk management
US9411532B2 (en)*2001-09-072016-08-09Pact Xpp Technologies AgMethods and systems for transferring data between a processing device and external devices
US6795895B2 (en)*2001-03-072004-09-21Canopy GroupDual axis RAID systems for enhanced bandwidth and reliability
US6952766B2 (en)*2001-03-152005-10-04International Business Machines CorporationAutomated node restart in clustered computer system
US8121937B2 (en)2001-03-202012-02-21Goldman Sachs & Co.Gaming industry risk management clearinghouse
US6918051B2 (en)*2001-04-062005-07-12International Business Machines CorporationNode shutdown in clustered computer system
US7058858B2 (en)*2001-04-232006-06-06Hewlett-Packard Development Company, L.P.Systems and methods for providing automated diagnostic services for a cluster computer system
US7197536B2 (en)*2001-04-302007-03-27International Business Machines CorporationPrimitive communication mechanism for adjacent nodes in a clustered computer system
US6675264B2 (en)*2001-05-072004-01-06International Business Machines CorporationMethod and apparatus for improving write performance in a cluster-based file system
US20030018696A1 (en)*2001-05-102003-01-23Sanchez Humberto A.Method for executing multi-system aware applications
US20020169738A1 (en)*2001-05-102002-11-14Giel Peter VanMethod and system for auditing an enterprise configuration
US20020184327A1 (en)2001-05-112002-12-05Major Robert DrewSystem and method for partitioning address space in a proxy cache server cluster
US7640582B2 (en)2003-04-162009-12-29Silicon Graphics InternationalClustered filesystem for mix of trusted and untrusted nodes
US7617292B2 (en)2001-06-052009-11-10Silicon Graphics InternationalMulti-class heterogeneous clients in a clustered filesystem
US8010558B2 (en)*2001-06-052011-08-30Silicon Graphics InternationalRelocation of metadata server with outstanding DMAPI requests
US20040139125A1 (en)*2001-06-052004-07-15Roger StrassburgSnapshot copy of data volume during data access
US6990609B2 (en)*2001-06-122006-01-24Sun Microsystems, Inc.System and method for isolating faults in a network
US7073083B2 (en)*2001-07-182006-07-04Thomas LicensingMethod and system for providing emergency shutdown of a malfunctioning device
US6925582B2 (en)*2001-08-012005-08-02International Business Machines CorporationForwarding of diagnostic messages in a group
US7055152B1 (en)2001-08-152006-05-30Microsoft CorporationMethod and system for maintaining buffer registrations in a system area network
US6823382B2 (en)*2001-08-202004-11-23Altaworks CorporationMonitoring and control engine for multi-tiered service-level management of distributed web-application servers
US7000016B1 (en)2001-10-192006-02-14Data Return LlcSystem and method for multi-site clustering in a network
US6938031B1 (en)2001-10-192005-08-30Data Return LlcSystem and method for accessing information in a replicated database
US20030101160A1 (en)*2001-11-262003-05-29International Business Machines CorporationMethod for safely accessing shared storage
WO2003048934A2 (en)*2001-11-302003-06-12Oracle International CorporationReal composite objects for providing high availability of resources on networked systems
US7194563B2 (en)*2001-12-052007-03-20Scientific-Atlanta, Inc.Disk driver cluster management of time shift buffer with file allocation table structure
US8565578B2 (en)2001-12-062013-10-22Harold J. Plourde, Jr.Dividing and managing time-shift buffering into program specific segments based on defined durations
US7962011B2 (en)2001-12-062011-06-14Plourde Jr Harold JControlling substantially constant buffer capacity for personal video recording with consistent user interface of available disk space
US6990603B2 (en)*2002-01-022006-01-24Exanet Inc.Method and apparatus for securing volatile data in power failure in systems having redundancy
AU2002237272A1 (en)*2002-01-092003-07-24Telefonaktiebolaget Lm Ericsson (Publ)Method of and equipment for credit management for access in a telecommunications network
US7120782B2 (en)*2002-01-162006-10-10Telefonaktiebolaget Lm Ericsson (Publ)Methods, systems and computer program products for accessing descriptive information associated with a TDMA/GSM switch
US7996517B2 (en)*2002-01-232011-08-09Novell, Inc.Transparent network connection takeover
US7076555B1 (en)2002-01-232006-07-11Novell, Inc.System and method for transparent takeover of TCP connections between servers
US7047291B2 (en)*2002-04-112006-05-16International Business Machines CorporationSystem for correlating events generated by application and component probes when performance problems are identified
US7043549B2 (en)*2002-01-312006-05-09International Business Machines CorporationMethod and system for probing in a network environment
US8086720B2 (en)2002-01-312011-12-27International Business Machines CorporationPerformance reporting in a network environment
US7412502B2 (en)2002-04-182008-08-12International Business Machines CorporationGraphics for end to end component mapping and problem-solving in a network environment
US8527620B2 (en)2003-03-062013-09-03International Business Machines CorporationE-business competitive measurements
US7269651B2 (en)*2002-09-262007-09-11International Business Machines CorporationE-business operations measurements
FI20020210A7 (en)*2002-02-042003-08-05Nokia Corp Hardware-based signal for multiprocessor environments
CA2377649C (en)*2002-03-202009-02-03Ibm Canada Limited-Ibm Canada LimiteeDynamic cluster database architecture
US7237026B1 (en)*2002-03-222007-06-26Cisco Technology, Inc.Sharing gateway resources across multi-pop networks
US7590740B1 (en)2002-03-222009-09-15Cisco Technology, Inc.Expediting port release in distributed networks
US7376742B1 (en)2002-03-222008-05-20Cisco Technology, Inc.Resource and AAA service device
US7529249B1 (en)2002-03-222009-05-05Cisco Technology, IncVoice and dial service level agreement enforcement on universal gateway
US7631066B1 (en)*2002-03-252009-12-08Symantec Operating CorporationSystem and method for preventing data corruption in computer system clusters
US6691370B2 (en)*2002-04-152004-02-17Markar Aritectural Products, Inc.Continuous door hinge with multi-plastic bearings
US8103748B2 (en)*2002-05-202012-01-24International Business Machines CorporationRule-based method and system for managing heterogenous computer clusters
US7620678B1 (en)*2002-06-122009-11-17Nvidia CorporationMethod and system for reducing the time-to-market concerns for embedded system design
EP1372075B1 (en)*2002-06-132004-08-25Fujitsu Siemens Computers, LLCMethod for eliminating a computer from a cluster
US7810133B2 (en)2002-08-232010-10-05Exit-Cube, Inc.Encrypting operating system
FI119407B (en)*2002-08-282008-10-31Sap Ag A high-quality software-based contact server
US7765299B2 (en)*2002-09-162010-07-27Hewlett-Packard Development Company, L.P.Dynamic adaptive server provisioning for blade architectures
US8181205B2 (en)2002-09-242012-05-15Russ Samuel HPVR channel and PVR IPG information
US20040085908A1 (en)*2002-10-312004-05-06Brocade Communications Systems, Inc.Method and apparatus for managing locking of resources in a cluster by use of a network fabric
US8145759B2 (en)*2002-11-042012-03-27Oracle America, Inc.Dynamically configurable resource pool
US6950913B2 (en)*2002-11-082005-09-27Newisys, Inc.Methods and apparatus for multiple cluster locking
JP2006511870A (en)*2002-12-182006-04-06イー・エム・シー・コーポレイシヨン Create resource allocation aware queue for requests related to media resources
US7003645B2 (en)*2002-12-182006-02-21International Business Machines CorporationUse of a storage medium as a communications network for liveness determination in a high-availability cluster
JP3944449B2 (en)*2002-12-192007-07-11株式会社日立製作所 Computer system, magnetic disk device, and disk cache control method
US7228351B2 (en)*2002-12-312007-06-05International Business Machines CorporationMethod and apparatus for managing resource contention in a multisystem cluster
US7030739B2 (en)*2003-01-272006-04-18Audiovox CorporationVehicle security system and method for programming an arming delay
US7379444B2 (en)*2003-01-272008-05-27International Business Machines CorporationMethod to recover from node failure/recovery incidents in distributed systems in which notification does not occur
US20040205184A1 (en)*2003-03-062004-10-14International Business Machines CorporationE-business operations measurements reporting
US7302609B2 (en)*2003-03-122007-11-27Vladimir MatenaMethod and apparatus for executing applications on a distributed computer system
US7451183B2 (en)*2003-03-212008-11-11Hewlett-Packard Development Company, L.P.Assembly and method for balancing processors in a partitioned server
EP1465342A1 (en)*2003-04-012004-10-06STMicroelectronics S.r.l.Multichannel electronic ignition device with high voltage controller
US7610305B2 (en)2003-04-242009-10-27Sun Microsystems, Inc.Simultaneous global transaction and local transaction management in an application server
US7743083B2 (en)2003-04-242010-06-22Oracle America, Inc.Common transaction manager interface for local and global transactions
US7376744B2 (en)*2003-05-092008-05-20Oracle International CorporationUsing local locks for global synchronization in multi-node systems
US7724671B2 (en)2003-05-132010-05-25Intel-Tel, Inc.Architecture for resource management in a telecommunications network
US7069392B2 (en)*2003-06-122006-06-27Newisys, Inc.Methods and apparatus for extended packet communications between multiprocessor clusters
EP1634176B1 (en)*2003-06-182014-07-02Fujitsu Technology Solutions Intellectual Property GmbHCluster arrangement
US20040267910A1 (en)*2003-06-242004-12-30Nokia Inc.Single-point management system for devices in a cluster
US7739252B2 (en)*2003-07-142010-06-15Oracle America, Inc.Read/write lock transaction manager freezing
US7640545B2 (en)*2003-07-142009-12-29Sun Microsytems, Inc.Transaction manager freezing
US7739541B1 (en)*2003-07-252010-06-15Symantec Operating CorporationSystem and method for resolving cluster partitions in out-of-band storage virtualization environments
US20050044226A1 (en)*2003-07-312005-02-24International Business Machines CorporationMethod and apparatus for validating and ranking resources for geographic mirroring
US20050033822A1 (en)*2003-08-052005-02-10Grayson George DaleMethod and apparatus for information distribution and retrieval
JP4590841B2 (en)*2003-08-072010-12-01富士ゼロックス株式会社 Image forming apparatus
US7302607B2 (en)*2003-08-292007-11-27International Business Machines CorporationTwo node virtual shared disk cluster recovery
US8521875B2 (en)*2003-09-042013-08-27Oracle America, Inc.Identity for data sources
US7689685B2 (en)*2003-09-262010-03-30International Business Machines CorporationAutonomic monitoring for web high availability
US8892702B2 (en)2003-09-302014-11-18International Business Machines CorporationPolicy driven autonomic computing-programmatic policy definitions
US7533173B2 (en)*2003-09-302009-05-12International Business Machines CorporationPolicy driven automation - specifying equivalent resources
US7451201B2 (en)*2003-09-302008-11-11International Business Machines CorporationPolicy driven autonomic computing-specifying relationships
US7730501B2 (en)*2003-11-192010-06-01Intel CorporationMethod for parallel processing of events within multiple event contexts maintaining ordered mutual exclusion
US7426578B2 (en)2003-12-122008-09-16Intercall, Inc.Systems and methods for synchronizing data between communication devices in a networked environment
US8316110B1 (en)*2003-12-182012-11-20Symantec Operating CorporationSystem and method for clustering standalone server applications and extending cluster functionality
FR2864658B1 (en)*2003-12-302006-02-24Trusted Logic DATA ACCESS CONTROL THROUGH DYNAMIC VERIFICATION OF LICENSED REFERENCES
US8161388B2 (en)2004-01-212012-04-17Rodriguez Arturo AInteractive discovery of display device characteristics
US8151103B2 (en)2004-03-132012-04-03Adaptive Computing Enterprises, Inc.System and method for providing object triggers
US8782654B2 (en)2004-03-132014-07-15Adaptive Computing Enterprises, Inc.Co-allocating a reservation spanning different compute resources types
US8977651B2 (en)*2004-04-142015-03-10Hewlett-Packard Development Company, L.P.Method and apparatus for multi-process access to a linked-list
US7962453B2 (en)*2004-04-262011-06-14Oracle International CorporationDynamic redistribution of a distributed memory index when individual nodes have different lookup indexes
US7461179B1 (en)*2004-04-302008-12-02Cisco Technology, Inc.Universal SFP support
US8347145B2 (en)*2004-05-042013-01-01Northrop Grumman Systems CorporationSystem and method for providing a mission based management system
US20070266388A1 (en)2004-06-182007-11-15Cluster Resources, Inc.System and method for providing advanced reservations in a compute environment
US7577959B2 (en)*2004-06-242009-08-18International Business Machines CorporationProviding on-demand capabilities using virtual machines and clustering processes
US8996481B2 (en)2004-07-022015-03-31Goldman, Sachs & Co.Method, system, apparatus, program code and means for identifying and extracting information
US8510300B2 (en)*2004-07-022013-08-13Goldman, Sachs & Co.Systems and methods for managing information associated with legal, compliance and regulatory risk
US8442953B2 (en)2004-07-022013-05-14Goldman, Sachs & Co.Method, system, apparatus, program code and means for determining a redundancy of information
US8762191B2 (en)2004-07-022014-06-24Goldman, Sachs & Co.Systems, methods, apparatus, and schema for storing, managing and retrieving information
US7590737B1 (en)2004-07-162009-09-15Symantec Operating CorporationSystem and method for customized I/O fencing for preventing data corruption in computer system clusters
US8898246B2 (en)*2004-07-292014-11-25Hewlett-Packard Development Company, L.P.Communication among partitioned devices
US8176490B1 (en)2004-08-202012-05-08Adaptive Computing Enterprises, Inc.System and method of interfacing a workload manager and scheduler with an identity manager
US20060098790A1 (en)*2004-11-052006-05-11Mendonca John JAutomatically configuring remote monitoring of a provisionable resource
WO2006053093A2 (en)2004-11-082006-05-18Cluster Resources, Inc.System and method of providing system jobs within a compute environment
JP4191672B2 (en)*2004-12-142008-12-03Ziosoft, Inc. Image processing system such as volume rendering
US8219823B2 (en)2005-03-042012-07-10Carter Ernst BSystem for and method of managing access to a system using combinations of user information
US8863143B2 (en)2006-03-162014-10-14Adaptive Computing Enterprises, Inc.System and method for managing a hybrid compute environment
US9413687B2 (en)2005-03-162016-08-09Adaptive Computing Enterprises, Inc.Automatic workload transfer to an on-demand center
US9231886B2 (en)2005-03-162016-01-05Adaptive Computing Enterprises, Inc.Simple integration of an on-demand compute environment
US9015324B2 (en)2005-03-162015-04-21Adaptive Computing Enterprises, Inc.System and method of brokering cloud computing resources
US8782120B2 (en)2005-04-072014-07-15Adaptive Computing Enterprises, Inc.Elastic management of compute resources between a web server and an on-demand compute environment
ES2614751T3 (en)2005-04-072017-06-01Iii Holdings 12, Llc Access on demand to computer resources
US20060248371A1 (en)*2005-04-282006-11-02International Business Machines CorporationMethod and apparatus for a common cluster model for configuring, managing, and operating different clustering technologies in a data center
US7747763B2 (en)*2005-07-262010-06-29Novell, Inc.System and method for ensuring a device uses the correct instance of a network service
US8055725B2 (en)*2006-01-122011-11-08International Business Machines CorporationMethod, apparatus and program product for remotely restoring a non-responsive computing system
US8209162B2 (en)2006-05-012012-06-26Microsoft CorporationMachine translation split between front end and back end processors
US7685476B2 (en)*2006-09-122010-03-23International Business Machines CorporationEarly notification of error via software interrupt and shared memory write
US20080147915A1 (en)*2006-09-292008-06-19Alexander KleymenovManagement of memory buffers for computer programs
US8359495B2 (en)*2007-03-272013-01-22Teradata Us, Inc.System and method for using failure casting to manage failures in computer systems
US8145819B2 (en)*2007-06-042012-03-27International Business Machines CorporationMethod and system for stealing interrupt vectors
US8041773B2 (en)2007-09-242011-10-18The Research Foundation Of State University Of New YorkAutomatic clustering for self-organizing grids
US20090158299A1 (en)*2007-10-312009-06-18Carter Ernst BSystem for and method of uniform synchronization between multiple kernels running on single computer systems with multiple CPUs installed
US9167034B2 (en)*2007-11-122015-10-20International Business Machines CorporationOptimized peer-to-peer file transfers on a multi-node computer system
US8010917B2 (en)*2007-12-262011-08-30Cadence Design Systems, Inc.Method and system for implementing efficient locking to facilitate parallel processing of IC designs
TWI356301B (en)*2007-12-272012-01-11Ind Tech Res InstMemory management system and method for open platform
US20100274385A1 (en)*2008-01-182010-10-28Abb Technology AbControl system for controlling an industrial robot
US8069228B2 (en)*2009-05-082011-11-29Hewlett-Packard Development Company, L.P.Preventing access of a network facility in response to an operation
US11720290B2 (en)2009-10-302023-08-08Iii Holdings 2, LlcMemcached server functionality in a cluster of data processing nodes
US10877695B2 (en)2009-10-302020-12-29Iii Holdings 2, LlcMemcached server functionality in a cluster of data processing nodes
US8291136B2 (en)*2009-12-022012-10-16International Business Machines CorporationRing buffer
US8250398B2 (en)*2010-02-192012-08-21Coulomb Technologies, Inc.Event time management in an electric vehicle charging station without a battery-backed real time clock
US10051074B2 (en)*2010-03-292018-08-14Samsung Electronics Co, Ltd.Techniques for managing devices not directly accessible to device management server
US8868487B2 (en)2010-04-122014-10-21Sandisk Enterprise Ip LlcEvent processing in a flash memory-based object store
US9047351B2 (en)2010-04-122015-06-02Sandisk Enterprise Ip LlcCluster of processing nodes with distributed global flash memory using commodity server technology
US9164554B2 (en)2010-04-122015-10-20Sandisk Enterprise Ip LlcNon-volatile solid-state storage system supporting high bandwidth and random access
US8219769B1 (en)*2010-05-042012-07-10Symantec CorporationDiscovering cluster resources to efficiently perform cluster backups and restores
US8666939B2 (en)2010-06-282014-03-04Sandisk Enterprise Ip LlcApproaches for the replication of write sets
US8365008B2 (en)*2010-10-132013-01-29International Business Machines CorporationProviding unsolicited global disconnect requests to users of storage
US9081522B2 (en)*2010-11-232015-07-14Konica Minolta Laboratory U.S.A., Inc.Method and system for searching for missing resources
US8874515B2 (en)2011-04-112014-10-28Sandisk Enterprise Ip LlcLow level object version tracking using non-volatile memory write generations
US9135064B2 (en)*2012-03-072015-09-15Sandisk Enterprise Ip LlcFine grained adaptive throttling of background processes
US9063974B2 (en)2012-10-022015-06-23Oracle International CorporationHardware for table scan acceleration
US8874811B2 (en)*2012-10-152014-10-28Oracle International CorporationSystem and method for providing a flexible buffer management interface in a distributed data grid
US9679084B2 (en)*2013-03-142017-06-13Oracle International CorporationMemory sharing across distributed nodes
US9135293B1 (en)2013-05-202015-09-15Symantec CorporationDetermining model information of devices based on network device identifiers
WO2014186814A1 (en)*2013-05-212014-11-27Fts Computertechnik GmbhMethod for integration of calculations having a variable running time into a time-controlled architecture
CA2831134A1 (en)*2013-10-242015-04-24Ibm Canada Limited - Ibm Canada LimiteeIdentification of code synchronization points
US9898414B2 (en)2014-03-282018-02-20Oracle International CorporationMemory corruption detection support for distributed shared memory applications
JP6378584B2 (en)*2014-08-292018-08-22Canon Inc. Communication system, image processing apparatus, image processing apparatus control method, and program
CN104865938B (en)*2015-04-032017-08-22深圳市前海安测信息技术有限公司Applied to the node connection chip and its meshed network for assessing personal injury's situation
US9954958B2 (en)*2016-01-292018-04-24Red Hat, Inc.Shared resource management
US10394713B2 (en)2016-05-262019-08-27International Business Machines CorporationSelecting resources to make available in local queues for processors to use
US10209982B2 (en)2017-05-162019-02-19Bank Of America CorporationDistributed storage framework information server platform architecture
US10467139B2 (en)2017-12-292019-11-05Oracle International CorporationFault-tolerant cache coherence over a lossy network
US10452547B2 (en)2017-12-292019-10-22Oracle International CorporationFault-tolerant cache coherence over a lossy network
EP3543881B1 (en)*2018-01-292021-08-11Shenzhen Goodix Technology Co., Ltd.Chip access method, security control module, chip and debugging device
US11144354B2 (en)*2018-07-312021-10-12Vmware, Inc.Method for repointing resources between hosts
CN109614260B (en)*2018-11-282022-06-03北京小米移动软件有限公司Communication failure judgment method and device, electronic equipment and storage medium
US11936624B2 (en)*2020-07-232024-03-19Dell Products L.P.Method and system for optimizing access to data nodes of a data cluster using a data access gateway and bidding counters
US11895093B2 (en)2020-07-232024-02-06Dell Products L.P.Method and system for optimizing access to data nodes of a data cluster using a data access gateway
US11882098B2 (en)2020-07-232024-01-23Dell Products L.P.Method and system for optimizing access to data nodes of a data cluster using a data access gateway and metadata mapping based bidding
US11736447B2 (en)2020-07-232023-08-22Dell Products L.P.Method and system for optimizing access to data nodes of a data cluster using a data access gateway and metadata mapping based bidding in an accelerator pool
US11288005B2 (en)2020-08-142022-03-29Dell Products L.P.Method and system for generating compliance and sequence aware replication in a multiple data cluster system
US11526284B2 (en)*2020-08-142022-12-13Dell Products L.P.Method and system for storing data in a multiple data cluster system
CN112202617B (en)*2020-10-092024-02-23腾讯云计算(北京)有限责任公司Resource management system monitoring method, device, computer equipment and storage medium
US11809571B2 (en)*2021-06-142023-11-07Cisco Technology, Inc.Vulnerability analysis using continuous application attestation
US12124335B1 (en)*2023-07-112024-10-22GM Global Technology Operations LLCFault tolerant distributed computing system based on dynamic reconfiguration

Citations (21)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US3825902A (en)1973-04-301974-07-23IbmInterlevel communication in multilevel priority interrupt system
US4377852A (en)1980-03-311983-03-22Texas Instruments IncorporatedTerminal emulator
US4980820A (en)1985-02-281990-12-25International Business Machines CorporationInterrupt driven prioritized queue
US5067107A (en)1988-08-051991-11-19Hewlett-Packard CompanyContinuous computer performance measurement tool that reduces operating system produced performance data for logging into global, process, and workload files
US5125093A (en)1990-08-141992-06-23Nexgen MicrosystemsInterrupt control for multiprocessor computer system
US5197130A (en)1989-12-291993-03-23Supercomputer Systems Limited PartnershipCluster architecture for a highly parallel scalar/vector multiprocessor system
US5247517A (en)1989-10-201993-09-21Novell, Inc.Method and apparatus for analyzing networks
US5353412A (en)1990-10-031994-10-04Thinking Machines CorporationPartition control circuit for separately controlling message sending of nodes of tree-shaped routing network to divide the network into a number of partitions
US5392446A (en)1990-02-281995-02-21Hughes Aircraft CompanyMultiple cluster signal processor architecture
US5455932A (en)1990-09-241995-10-03Novell, Inc.Fault tolerant computer system
US5475860A (en)1992-06-151995-12-12Stratus Computer, Inc.Input/output control system and method for direct memory transfer according to location addresses provided by the source unit and destination addresses provided by the destination unit
US5664198A (en)1994-10-261997-09-02Intel CorporationHigh speed access to PC card memory using interrupts
US5666486A (en)1995-06-231997-09-09Data General CorporationMultiprocessor cluster membership manager framework
US5666532A (en)1994-07-261997-09-09Novell, Inc.Computer method and apparatus for asynchronous ordered operations
US5796939A (en)1997-03-101998-08-18Digital Equipment CorporationHigh frequency sampling of processor performance counters
US5878420A (en)*1995-08-311999-03-02Compuware CorporationNetwork monitoring and management system
US5886643A (en)1996-09-171999-03-23Concord Communications IncorporatedMethod and apparatus for discovering network topology
US5923840A (en)1997-04-081999-07-13International Business Machines CorporationMethod of reporting errors by a hardware element of a distributed computer system
US5958009A (en)1997-02-271999-09-28Hewlett-Packard CompanySystem and method for efficiently monitoring quality of service in a distributed processing environment
US5964891A (en)1997-08-271999-10-12Hewlett-Packard CompanyDiagnostic system for a distributed data access networked system
US5991893A (en)*1997-08-291999-11-23Hewlett-Packard CompanyVirtually reliable shared memory

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US5555430A (en)*1994-05-311996-09-10Advanced Micro DevicesInterrupt control architecture for symmetrical multiprocessing system
US5564060A (en)*1994-05-311996-10-08Advanced Micro DevicesInterrupt handling mechanism to prevent spurious interrupts in a symmetrical multiprocessing system
US5790780A (en)*1996-07-161998-08-04Electronic Data Systems CorporationAnalysis of failures in a computing environment
US5867483A (en)*1996-11-121999-02-02Visual Networks, Inc.Method and apparatus for measurement of peak throughput in packetized data networks
US6170033B1 (en)*1997-09-302001-01-02Intel CorporationForwarding causes of non-maskable interrupts to the interrupt handler
US6148361A (en)*1998-12-172000-11-14International Business Machines CorporationInterrupt architecture for a non-uniform memory access (NUMA) data processing system

Cited By (296)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US8938062B2 (en)1995-12-112015-01-20Comcast Ip Holdings I, LlcMethod for accessing service resource items that are for use in a telecommunications system
US20050229025A1 (en)*1997-05-132005-10-13Bruce FindlaySystem and method for communicating a software-generated pulse waveform between two servers in a network
US20050229024A1 (en)*1997-05-132005-10-13Bruce FindlaySystem and method for communicating a software-generated pulse waveform between two servers in a network
US20050229028A1 (en)*1997-05-132005-10-13Bruce FindlaySystem and method for communicating a software-generated pulse waveform between two servers in a network
US7444550B2 (en)*1997-05-132008-10-28Micron Technology, Inc.System and method for communicating a software-generated pulse waveform between two servers in a network
US7444537B2 (en)*1997-05-132008-10-28Micron Technology, Inc.System and method for communicating a software-generated pulse waveform between two servers in a network
US7451343B2 (en)*1997-05-132008-11-11Micron Technology, Inc.System and method for communicating a software-generated pulse waveform between two servers in a network
US6449641B1 (en)*1997-10-212002-09-10Sun Microsystems, Inc.Determining cluster membership in a distributed computer system
US6658417B1 (en)*1997-12-312003-12-02International Business Machines CorporationTerm-based methods and apparatus for access to files on shared storage devices
US7835896B1 (en)*1998-04-062010-11-16Rode Consulting, Inc.Apparatus for evaluating and demonstrating electronic circuits and components
US8385718B1 (en)*1998-06-082013-02-26Thomson LicensingProcess for programming actions of resources in a domestic communication network
US20070250695A1 (en)*1998-09-112007-10-25Lv Partners, L.P.Automatic configuration of equipment software
US7548988B2 (en)1998-09-112009-06-16Rpx-Lv Acquisition LlcSoftware downloading using a television broadcast channel
US7596786B2 (en)1998-09-112009-09-29Rpx-Lv Acquisition LlcMethod and apparatus for utilizing an existing product code to issue a match to a predetermined location on a global network
US7908467B2 (en)1998-09-112011-03-15RPX-LV Acquistion LLCAutomatic configuration of equipment software
US8296440B2 (en)1998-09-112012-10-23Rpx CorporationMethod and apparatus for accessing a remote location with an optical reader having a programmable memory system
US7636788B2 (en)1998-09-112009-12-22Rpx-Lv Acquisition LlcMethod and apparatus for matching a user's use profile in commerce with a broadcast
US7739353B2 (en)1998-09-112010-06-15Rpx-Lv Acquisition LlcLaunching a web site using a personal device
US20080275998A1 (en)*1998-09-112008-11-06Lv Partners, L.P.Software downloading using a television broadcast channel
US7819316B2 (en)1998-09-112010-10-26Lv Partners, L.P.Portable scanner for enabling automatic commerce transactions
US7822829B2 (en)1998-09-112010-10-26Rpx-Lv Acquisition LlcMethod for interfacing scanned product information with a source for the product over a global network
US8069098B2 (en)1998-09-112011-11-29Rpx-Lv Acquisition LlcInput device for allowing interface to a web site in association with a unique input code
US7870189B2 (en)1998-09-112011-01-11Rpx-Lv Acquisition LlcInput device having positional and scanning capabilities
US7536478B2 (en)1998-09-112009-05-19Rpx-Lv Acquisition LlcMethod and apparatus for opening and launching a web browser in response to an audible signal
US8005985B2 (en)1998-09-112011-08-23RPX—LV Acquisition LLCMethod and apparatus for utilizing an audibly coded signal to conduct commerce over the internet
US7904344B2 (en)1998-09-112011-03-08Rpx-Lv Acquisition LlcAccessing a vendor web site using personal account information retrieved from a credit card company web site
US6757715B1 (en)*1998-09-112004-06-29L.V. Partners, L.P.Bar code scanner and software interface interlock for performing encrypted handshaking and for disabling the scanner in case of handshaking operation failure
US7979576B2 (en)1998-09-112011-07-12Rpx-Lv Acquisition LlcMethod and apparatus for connecting a user location to one of a plurality of destination locations on a network
US7925780B2 (en)1998-09-112011-04-12Rpx-Lv Acquisition LlcMethod for connecting a wireless device to a remote location on a network
US7912760B2 (en)1998-09-112011-03-22Rpx-Lv Acquisition LlcMethod and apparatus for utilizing a unique transaction code to update a magazine subscription over the internet
US7912961B2 (en)1998-09-112011-03-22Rpx-Lv Acquisition LlcInput device for allowing input of unique digital code to a user's computer to control access thereof to a web site
US7580999B1 (en)1999-01-042009-08-25Cisco Technology, Inc.Remote system administration and seamless service integration of a data communication network management system
US6654801B2 (en)*1999-01-042003-11-25Cisco Technology, Inc.Remote system administration and seamless service integration of a data communication network management system
US7020695B1 (en)*1999-05-282006-03-28Oracle International CorporationUsing a cluster-wide shared repository to provide the latest consistent definition of the cluster (avoiding the partition-in time problem)
US7076783B1 (en)1999-05-282006-07-11Oracle International CorporationProviding figure of merit vote from application executing on a partitioned cluster
US6871222B1 (en)1999-05-282005-03-22Oracle International CorporationQuorumless cluster using disk-based messaging
US6681390B2 (en)*1999-07-282004-01-20Emc CorporationUpgrade of a program
US20010029519A1 (en)*1999-12-032001-10-11Michael HallinanResource allocation in data processing systems
US6996614B2 (en)*1999-12-032006-02-07International Business Machines CorporationResource allocation in data processing systems
US7343441B1 (en)1999-12-082008-03-11Microsoft CorporationMethod and apparatus of remote computer management
US20030112232A1 (en)*2000-03-312003-06-19Nektarios GeorgalasResource creation method and tool
US7019740B2 (en)*2000-03-312006-03-28British Telecommunications Public Limited CompanyResource creation method and tool
US6697970B1 (en)*2000-07-142004-02-24Nortel Networks LimitedGeneric fault management method and system
US20060149994A1 (en)*2000-09-062006-07-06Srikrishna KurapatiData replication for redundant network components
US7568125B2 (en)*2000-09-062009-07-28Cisco Technology, Inc.Data replication for redundant network components
US6725311B1 (en)2000-09-142004-04-20Microsoft CorporationMethod and apparatus for providing a connection-oriented network over a serial bus
US7165097B1 (en)*2000-09-222007-01-16Oracle International CorporationSystem for distributed error reporting and user interaction
US20020040450A1 (en)*2000-10-032002-04-04Harris Jeremy GrahamMultiple trap avoidance mechanism
US6795939B2 (en)*2000-10-032004-09-21Sun Microsystems, Inc.Processor resource access control with response faking
US6795937B2 (en)*2000-10-032004-09-21Sun Microsystems, Inc.Multiple traps after faulty access to a resource
US6795938B2 (en)*2000-10-032004-09-21Sun Microsystems, Inc.Memory access controller with response faking
US6795936B2 (en)*2000-10-032004-09-21Sun Microsystems, Inc.Bus bridge resource access controller
US20020040451A1 (en)*2000-10-032002-04-04Harris Jeremy GrahamResource access control
US20020040422A1 (en)*2000-10-032002-04-04Harris Jeremy GrahamResource access control for a processor
US20020042895A1 (en)*2000-10-032002-04-11Harris Jeremy GrahamMemory access control
US7370103B2 (en)2000-10-242008-05-06Hunt Galen CSystem and method for distributed management of shared computers
US20060069758A1 (en)*2000-10-242006-03-30Microsoft CorporationProviding automatic policy enforcement in a multi-computer service application
US7155380B2 (en)2000-10-242006-12-26Microsoft CorporationSystem and method for designing a logical model of a distributed computer system and deploying physical resources according to the logical model
US7096258B2 (en)2000-10-242006-08-22Microsoft CorporationSystem and method providing automatic policy enforcement in a multi-computer service application
US20050091078A1 (en)*2000-10-242005-04-28Microsoft CorporationSystem and method for distributed management of shared computers
US20050097097A1 (en)*2000-10-242005-05-05Microsoft CorporationSystem and method for distributed management of shared computers
US20050097147A1 (en)*2000-10-242005-05-05Microsoft CorporationSystem and method for distributed management of shared computers
US20050097058A1 (en)*2000-10-242005-05-05Microsoft CorporationSystem and method for distributed management of shared computers
US20050102388A1 (en)*2000-10-242005-05-12Microsoft CorporationSystem and method for restricting data transfers and managing software components of distributed computers
US20050102538A1 (en)*2000-10-242005-05-12Microsoft CorporationSystem and method for designing a logical model of a distributed computer system and deploying physical resources according to the logical model
US20050108381A1 (en)*2000-10-242005-05-19Microsoft CorporationSystem and method for distributed management of shared computers
US20050125212A1 (en)*2000-10-242005-06-09Microsoft CorporationSystem and method for designing a logical model of a distributed computer system and deploying physical resources according to the logical model
US7093288B1 (en)2000-10-242006-08-15Microsoft CorporationUsing packet filters and network virtualization to restrict network communications
US7080143B2 (en)2000-10-242006-07-18Microsoft CorporationSystem and method providing automatic policy enforcement in a multi-computer service application
US7200655B2 (en)2000-10-242007-04-03Microsoft CorporationSystem and method for distributed management of shared computers
US20050192971A1 (en)*2000-10-242005-09-01Microsoft CorporationSystem and method for restricting data transfers and managing software components of distributed computers
US7043545B2 (en)2000-10-242006-05-09Microsoft CorporationSystem and method for restricting data transfers and managing software components of distributed computers
US7574343B2 (en)2000-10-242009-08-11Microsoft CorporationSystem and method for logical modeling of distributed computer systems
US7739380B2 (en)2000-10-242010-06-15Microsoft CorporationSystem and method for distributed management of shared computers
US7711121B2 (en)2000-10-242010-05-04Microsoft CorporationSystem and method for distributed management of shared computers
US20060259609A1 (en)*2000-10-242006-11-16Microsoft CorporationSystem and Method for Distributed Management of Shared Computers
US20050021696A1 (en)*2000-10-242005-01-27Hunt Galen C.System and method providing automatic policy enforcement in a multi-computer service application
US7606898B1 (en)2000-10-242009-10-20Microsoft CorporationSystem and method for distributed management of shared computers
US20050021697A1 (en)*2000-10-242005-01-27Hunt Galen C.System and method providing automatic policy enforcement in a multi-computer service application
US7395320B2 (en)2000-10-242008-07-01Microsoft CorporationProviding automatic policy enforcement in a multi-computer service application
US7406517B2 (en)*2000-10-242008-07-29Microsoft CorporationSystem and method for distributed management of shared computers
US7403980B2 (en)*2000-11-082008-07-22Sri InternationalMethods and apparatus for scalable, distributed management of virtual private networks
US20020055989A1 (en)*2000-11-082002-05-09Stringer-Calvert David W.J.Methods and apparatus for scalable, distributed management of virtual private networks
US6665587B2 (en)*2000-11-292003-12-16Xerox CorporationProduct template for a personalized printed product incorporating workflow sequence information
US20050268154A1 (en)*2000-12-062005-12-01Novell, Inc.Method for detecting and resolving a partition condition in a cluster
US8239518B2 (en)*2000-12-062012-08-07Emc CorporationMethod for detecting and resolving a partition condition in a cluster
US6785678B2 (en)*2000-12-212004-08-31Emc CorporationMethod of improving the availability of a computer clustering system through the use of a network medium link state function
WO2002050678A1 (en)*2000-12-212002-06-27Legato Systems, Inc.Method of 'split-brain' prevention in computer cluster systems
AU2002231167B2 (en)*2000-12-212005-10-06Emc CorporationMethod of "split-brain" prevention in computer cluster systems
US7409395B2 (en)2000-12-212008-08-05Emc CorporationMethod of improving the availability of a computer clustering system through the use of a network medium link state function
US20020083036A1 (en)*2000-12-212002-06-27Price Daniel M.Method of improving the availability of a computer clustering system through the use of a network medium link state function
US6973484B1 (en)*2000-12-292005-12-063Pardata, Inc.Method of communicating data in an interconnect system
US7836329B1 (en)2000-12-292010-11-163Par, Inc.Communication link protocol optimized for storage architectures
USRE40877E1 (en)*2000-12-292009-08-183Par, Inc.Method of communicating data in an interconnect system
US7317734B2 (en)2001-01-262008-01-08Microsoft CorporationMethod and apparatus for emulating ethernet functionality over a serial bus
US20030014507A1 (en)*2001-03-132003-01-16International Business Machines CorporationMethod and system for providing performance analysis for clusters
US20020133675A1 (en)*2001-03-142002-09-19Kabushiki Kaisha ToshibaCluster system, memory access control method, and recording medium
US6961828B2 (en)*2001-03-142005-11-01Kabushiki Kaisha ToshibaCluster system, memory access control method, and recording medium
US20020133655A1 (en)*2001-03-162002-09-19Ohad FalikSharing of functions between an embedded controller and a host processor
US7089339B2 (en)*2001-03-162006-08-08National Semiconductor CorporationSharing of functions between an embedded controller and a host processor
US7865646B1 (en)2001-03-162011-01-04National Semiconductor CorporationSharing of functions between an embedded controller and a host processor
US6820150B1 (en)2001-04-112004-11-16Microsoft CorporationMethod and apparatus for providing quality-of-service delivery facilities over a bus
US7093044B2 (en)2001-04-112006-08-15Microsoft CorporationMethod and apparatus for providing quality-of-service delivery facilities over a bus
US20080183961A1 (en)*2001-05-012008-07-31The Board Of Governors For Higher Education, State Of Rhode Island And Providence PlantationsDistributed raid and location independent caching system
US20020194370A1 (en)*2001-05-042002-12-19Voge Brendan AlexanderReliable links for high performance network protocols
US7380239B1 (en)2001-05-312008-05-27Oracle International CorporationMethod and mechanism for diagnosing computer applications using traces
US7162497B2 (en)*2001-05-312007-01-09Taiwan Semiconductor Manufacturing Co., Ltd.System and method for shared directory management
US7376937B1 (en)2001-05-312008-05-20Oracle International CorporationMethod and mechanism for using a meta-language to define and analyze traces
US20020184241A1 (en)*2001-05-312002-12-05Yu-Fu WuSystem and method for shared directory management
WO2003012646A1 (en)*2001-08-012003-02-13Valaran CorporationMethod and system for multimode garbage collection
US6961740B2 (en)2001-08-012005-11-01Valaran CorporationMethod and system for multimode garbage collection
US7243374B2 (en)2001-08-082007-07-10Microsoft CorporationRapid application security threat analysis
US20030050993A1 (en)*2001-09-132003-03-13International Business Machines CorporationEntity self-clustering and host-entity communication as via shared memory
US6993566B2 (en)*2001-09-132006-01-31International Business Machines CorporationEntity self-clustering and host-entity communication such as via shared memory
US7975016B2 (en)*2001-10-292011-07-05Oracle America, Inc.Method to manage high availability equipments
US20050055418A1 (en)*2001-10-292005-03-10Sun Microsystems IncMethod to manage high availability equipments
US20040158687A1 (en)*2002-05-012004-08-12The Board Of Governors For Higher Education, State Of Rhode Island And Providence PlantationsDistributed raid and location independence caching system
US7246232B2 (en)2002-05-312007-07-17Sri InternationalMethods and apparatus for scalable distributed management of wireless virtual private networks
US7571439B1 (en)*2002-05-312009-08-04Teradata Us, Inc.Synchronizing access to global resources
US20030226013A1 (en)*2002-05-312003-12-04Sri InternationalMethods and apparatus for scalable distributed management of wireless virtual private networks
US7512954B2 (en)*2002-07-292009-03-31Oracle International CorporationMethod and mechanism for debugging a series of related events within a computer system
US7200588B1 (en)2002-07-292007-04-03Oracle International CorporationMethod and mechanism for analyzing trace data using a database management system
US7165190B1 (en)2002-07-292007-01-16Oracle International CorporationMethod and mechanism for managing traces within a computer system
US20050160431A1 (en)*2002-07-292005-07-21Oracle CorporationMethod and mechanism for debugging a series of related events within a computer system
US7124320B1 (en)2002-08-062006-10-17Novell, Inc.Cluster failover via distributed configuration repository
US7055172B2 (en)2002-08-082006-05-30International Business Machines CorporationProblem determination method suitable for use when a filter blocks SNMP access to network components
US20040039816A1 (en)*2002-08-232004-02-26International Business Machines CorporationMonitoring method of the remotely accessible resources to provide the persistent and consistent resource states
US20040068677A1 (en)*2002-10-032004-04-08International Business Machines CorporationDiagnostic probe management in data processing systems
US7254745B2 (en)2002-10-032007-08-07International Business Machines CorporationDiagnostic probe management in data processing systems
US20090043887A1 (en)*2002-11-272009-02-12Oracle International CorporationHeartbeat mechanism for cluster systems
US7451359B1 (en)*2002-11-272008-11-11Oracle International Corp.Heartbeat mechanism for cluster systems
US7590898B2 (en)2002-11-272009-09-15Oracle International Corp.Heartbeat mechanism for cluster systems
US6970972B2 (en)2002-12-272005-11-29Hitachi, Ltd.High-availability disk control device and failure processing method thereof and high-availability disk subsystem
US20040139365A1 (en)*2002-12-272004-07-15Hitachi, Ltd.High-availability disk control device and failure processing method thereof and high-availability disk subsystem
US20040139371A1 (en)*2003-01-092004-07-15Wilson Craig Murray MansellPath commissioning analysis and diagnostic tool
US7206972B2 (en)*2003-01-092007-04-17AlcatelPath commissioning analysis and diagnostic tool
AU2004239607B2 (en)*2003-01-172008-03-20Insitu, Inc.Compensation for overflight velocity when stabilizing an airborne camera
US20040153496A1 (en)*2003-01-312004-08-05Smith Peter AshwoodMethod for computing a backup path for protecting a working path in a data transport network
US20080313333A1 (en)*2003-02-122008-12-18International Business Machines CorporationScalable method of continuous monitoring the remotely accessible resources against node failures for very large clusters
US7401265B2 (en)2003-02-122008-07-15International Business Machines CorporationScalable method of continuous monitoring the remotely accessible resources against the node failures for very large clusters
US20060242454A1 (en)*2003-02-122006-10-26International Business Machines CorporationScalable method of continuous monitoring the remotely accessible resources against the node failures for very large clusters
US20070277058A1 (en)*2003-02-122007-11-29International Business Machines CorporationScalable method of continuous monitoring the remotely accessible resources against the node failures for very large clusters
US20040158777A1 (en)*2003-02-122004-08-12International Business Machines CorporationScalable method of continuous monitoring the remotely accessible resources against the node failures for very large clusters
US7296191B2 (en)2003-02-122007-11-13International Business Machines CorporationScalable method of continuous monitoring the remotely accessible resources against the node failures for very large clusters
US7814373B2 (en)2003-02-122010-10-12International Business Machines CorporationScalable method of continuous monitoring the remotely accessible resources against node failures for very large clusters
US7137040B2 (en)2003-02-122006-11-14International Business Machines CorporationScalable method of continuous monitoring the remotely accessible resources against the node failures for very large clusters
US20060037002A1 (en)*2003-03-062006-02-16Microsoft CorporationModel-based provisioning of test environments
US7792931B2 (en)2003-03-062010-09-07Microsoft CorporationModel-based system provisioning
US7689676B2 (en)2003-03-062010-03-30Microsoft CorporationModel-based policy application
US7684964B2 (en)2003-03-062010-03-23Microsoft CorporationModel and system state synchronization
US7890951B2 (en)2003-03-062011-02-15Microsoft CorporationModel-based provisioning of test environments
US7890543B2 (en)2003-03-062011-02-15Microsoft CorporationArchitecture for distributed computing system and automated design, deployment, and management of distributed applications
US7630877B2 (en)2003-03-062009-12-08Microsoft CorporationArchitecture for distributed computing system and automated design, deployment, and management of distributed applications
US7886041B2 (en)2003-03-062011-02-08Microsoft CorporationDesign time validation of systems
US20060271341A1 (en)*2003-03-062006-11-30Microsoft CorporationArchitecture for distributed computing system and automated design, deployment, and management of distributed applications
US20040193388A1 (en)*2003-03-062004-09-30Geoffrey OuthredDesign time validation of systems
US8122106B2 (en)2003-03-062012-02-21Microsoft CorporationIntegrating design, deployment, and management phases for systems
US7219254B2 (en)2003-03-192007-05-15Lucent Technologies Inc.Method and apparatus for high availability distributed processing across independent networked computer fault groups
US20040199806A1 (en)*2003-03-192004-10-07Rathunde Dale FrankMethod and apparatus for high availability distributed processing across independent networked computer fault groups
US20040199811A1 (en)*2003-03-192004-10-07Rathunde Dale FrankMethod and apparatus for high availability distributed processing across independent networked computer fault groups
US20040199804A1 (en)*2003-03-192004-10-07Rathunde Dale FrankMethod and apparatus for high availability distributed processing across independent networked computer fault groups
US7127637B2 (en)*2003-03-192006-10-24Lucent Technologies Inc.Method and apparatus for high availability distributed processing across independent networked computer fault groups
US7149918B2 (en)*2003-03-192006-12-12Lucent Technologies Inc.Method and apparatus for high availability distributed processing across independent networked computer fault groups
US20040187047A1 (en)*2003-03-192004-09-23Rathunde Dale FrankMethod and apparatus for high availability distributed processing across independent networked computer fault groups
US7134046B2 (en)*2003-03-192006-11-07Lucent Technologies Inc.Method and apparatus for high availability distributed processing across independent networked computer fault groups
US20080092138A1 (en)*2003-03-312008-04-17International Business Machines CorporationResource allocation in a numa architecture based on separate application specified resource and strength preferences for processor and memory resources
US8141091B2 (en)2003-03-312012-03-20International Business Machines CorporationResource allocation in a NUMA architecture based on application specified resource and strength preferences for processor and memory resources
US7334230B2 (en)2003-03-312008-02-19International Business Machines CorporationResource allocation in a NUMA architecture based on separate application specified resource and strength preferences for processor and memory resources
US20080022286A1 (en)*2003-03-312008-01-24International Business Machines CorporationResource allocation in a numa architecture based on application specified resource and strength preferences for processor and memory resources
US20040194098A1 (en)*2003-03-312004-09-30International Business Machines CorporationApplication-based control of hardware resource allocation
US8042114B2 (en)2003-03-312011-10-18International Business Machines CorporationResource allocation in a NUMA architecture based on separate application specified resource and strength preferences for processor and memory resources
US20040220931A1 (en)*2003-04-292004-11-04Guthridge D. ScottDiscipline for lock reassertion in a distributed file system
US7124131B2 (en)*2003-04-292006-10-17International Business Machines CorporationDiscipline for lock reassertion in a distributed file system
US20040230762A1 (en)*2003-05-152004-11-18International Business Machines CorporationMethods, systems, and media for managing dynamic storage
US7743222B2 (en)2003-05-152010-06-22International Business Machines CorporationMethods, systems, and media for managing dynamic storage
US20080215845A1 (en)*2003-05-152008-09-04Kenneth Roger AllenMethods, Systems, and Media for Managing Dynamic Storage
US7356655B2 (en)*2003-05-152008-04-08International Business Machines CorporationMethods, systems, and media for managing dynamic storage
US20040267920A1 (en)*2003-06-302004-12-30Aamer HydrieFlexible network load balancing
US7606929B2 (en)2003-06-302009-10-20Microsoft CorporationNetwork load balancing with connection manipulation
US20040268358A1 (en)*2003-06-302004-12-30Microsoft CorporationNetwork load balancing with host status information
US7567504B2 (en)2003-06-302009-07-28Microsoft CorporationNetwork load balancing with traffic routing
US20050055435A1 (en)*2003-06-302005-03-10Abolade GbadegesinNetwork load balancing with connection manipulation
US7636917B2 (en)2003-06-302009-12-22Microsoft CorporationNetwork load balancing with host status information
US7590736B2 (en)2003-06-302009-09-15Microsoft CorporationFlexible network load balancing
US7613822B2 (en)2003-06-302009-11-03Microsoft CorporationNetwork load balancing with session information
US20110055368A1 (en)*2003-08-142011-03-03Oracle International CorporationConnection Pool Use of Runtime Load Balancing Service Performance Advisories
US20050262183A1 (en)*2003-08-142005-11-24Oracle International CorporationConnection pool use of runtime load balancing service performance advisories
US20050038801A1 (en)*2003-08-142005-02-17Oracle International CorporationFast reorganization of connections in response to an event in a clustered computing system
US7953860B2 (en)*2003-08-142011-05-31Oracle International CorporationFast reorganization of connections in response to an event in a clustered computing system
US20050038833A1 (en)*2003-08-142005-02-17Oracle International CorporationManaging workload by service
US7853579B2 (en)2003-08-142010-12-14Oracle International CorporationMethods, systems and software for identifying and managing database work
US7747717B2 (en)*2003-08-142010-06-29Oracle International CorporationFast application notification in a clustered computing system
US7937493B2 (en)2003-08-142011-05-03Oracle International CorporationConnection pool use of runtime load balancing service performance advisories
US8626890B2 (en)2003-08-142014-01-07Oracle International CorporationConnection pool use of runtime load balancing service performance advisories
US7664847B2 (en)2003-08-142010-02-16Oracle International CorporationManaging workload by service
US20070255757A1 (en)*2003-08-142007-11-01Oracle International CorporationMethods, systems and software for identifying and managing database work
US20050256971A1 (en)*2003-08-142005-11-17Oracle International CorporationRuntime load balancing of work across a clustered computing system using current service performance levels
US20050038772A1 (en)*2003-08-142005-02-17Oracle International CorporationFast application notification in a clustered computing system
US20050071125A1 (en)*2003-09-252005-03-31Hitachi Global Storage Technologies Netherlands B.V.Method for performing testing of a simulated storage device within a testing simulation environment
US7340661B2 (en)2003-09-252008-03-04Hitachi Global Storage Technologies Netherlands B.V.Computer program product for performing testing of a simulated storage device within a testing simulation environment
US20050080963A1 (en)*2003-09-252005-04-14International Business Machines CorporationMethod and system for autonomically adaptive mutexes
US7383368B2 (en)*2003-09-252008-06-03Dell Products L.P.Method and system for autonomically adaptive mutexes by considering acquisition cost value
US20050071126A1 (en)*2003-09-252005-03-31Hitachi Global Storage Technologies Netherlands B. V.Computer program product for performing testing of a simulated storage device within a testing simulation environment
US7165201B2 (en)2003-09-252007-01-16Hitachi Global Storage Technologies Netherlands B.V.Method for performing testing of a simulated storage device within a testing simulation environment
US7275185B2 (en)*2003-11-202007-09-25International Business Machines CorporationMethod and apparatus for device error log persistence in a logical partitioned data processing system
US20050138479A1 (en)*2003-11-202005-06-23International Business Machines CorporationMethod and apparatus for device error log persistence in a logical partitioned data processing system
CN1296850C (en)*2003-12-102007-01-24中国科学院计算技术研究所Partition lease method for cluster system resource management
US20050132379A1 (en)*2003-12-112005-06-16Dell Products L.P.Method, system and software for allocating information handling system resources in response to high availability cluster fail-over events
US20110202927A1 (en)*2003-12-302011-08-18Computer Associates Think, Inc.Apparatus, Method and System for Aggregating Computing Resources
US20090037585A1 (en)*2003-12-302009-02-05Vladimir MiloushevApparatus, method and system for aggregrating computing resources
US8656077B2 (en)2003-12-302014-02-18Ca, Inc.Apparatus, method and system for aggregating computing resources
US7934035B2 (en)*2003-12-302011-04-26Computer Associates Think, Inc.Apparatus, method and system for aggregating computing resources
US9497264B2 (en)2003-12-302016-11-15Ca, Inc.Apparatus, method and system for aggregating computing resources
US20050193259A1 (en)*2004-02-172005-09-01Martinez Juan I.System and method for reboot reporting
US7778422B2 (en)2004-02-272010-08-17Microsoft CorporationSecurity associations for devices
US20100229177A1 (en)*2004-03-042010-09-09International Business Machines CorporationReducing Remote Memory Accesses to Shared Data in a Multi-Nodal Computer System
US7574708B2 (en)2004-03-042009-08-11International Business Machines CorporationMechanism for enabling the distribution of operating system resources in a multi-node computer system
US20050210470A1 (en)*2004-03-042005-09-22International Business Machines CorporationMechanism for enabling the distribution of operating system resources in a multi-node computer system
US20050210469A1 (en)*2004-03-042005-09-22International Business Machines CorporationMechanism for dynamic workload rebalancing in a multi-nodal computer system
US7266540B2 (en)2004-03-042007-09-04International Business Machines CorporationMechanism for dynamic workload rebalancing in a multi-nodal computer system
US8312462B2 (en)2004-03-042012-11-13International Business Machines CorporationReducing remote memory accesses to shared data in a multi-nodal computer system
US20080134210A1 (en)*2004-03-302008-06-05Nektarios GeorgalasDistributed Computer
US20050235289A1 (en)*2004-03-312005-10-20Fabio BarillariMethod for allocating resources in a hierarchical data processing system
US7810098B2 (en)*2004-03-312010-10-05International Business Machines CorporationAllocating resources across multiple nodes in a hierarchical data processing system according to a decentralized policy
US7669235B2 (en)2004-04-302010-02-23Microsoft CorporationSecure domain join for computing devices
US9165157B2 (en)2004-06-012015-10-20Citrix Systems, Inc.Methods and apparatus facilitating access to storage among multiple computers
US7490089B1 (en)*2004-06-012009-02-10Sanbolic, Inc.Methods and apparatus facilitating access to shared storage among multiple computers
US7346811B1 (en)2004-08-132008-03-18Novell, Inc.System and method for detecting and isolating faults in a computer collaboration environment
US7681242B2 (en)2004-08-262010-03-16Novell, Inc.Allocation of network resources
US20060059565A1 (en)*2004-08-262006-03-16Novell, Inc.Allocation of network resources
US7546323B1 (en)*2004-09-302009-06-09Emc CorporationSystem and methods for managing backup status reports
US7567986B2 (en)*2004-10-072009-07-28Microsoft CorporationMethod and system for limiting resource usage of a version store
US20060080367A1 (en)*2004-10-072006-04-13Microsoft CorporationMethod and system for limiting resource usage of a version store
US7644161B1 (en)*2005-01-282010-01-05Hewlett-Packard Development Company, L.P.Topology for a hierarchy of control plug-ins used in a control system
US8875162B2 (en)2005-02-252014-10-28Vmware, Inc.Efficient virtualization of input/output completions for a virtual device
US7853960B1 (en)*2005-02-252010-12-14Vmware, Inc.Efficient virtualization of input/output completions for a virtual device
US7797147B2 (en)2005-04-152010-09-14Microsoft CorporationModel-based system monitoring
US8489728B2 (en)2005-04-152013-07-16Microsoft CorporationModel-based system monitoring
US7802144B2 (en)2005-04-152010-09-21Microsoft CorporationModel-based system monitoring
US20070006218A1 (en)*2005-06-292007-01-04Microsoft CorporationModel-based virtual system provisioning
US10540159B2 (en)2005-06-292020-01-21Microsoft Technology Licensing, LlcModel-based virtual system provisioning
US9317270B2 (en)2005-06-292016-04-19Microsoft Technology Licensing, LlcModel-based virtual system provisioning
US20070016393A1 (en)*2005-06-292007-01-18Microsoft CorporationModel-based propagation of attributes
US9811368B2 (en)2005-06-292017-11-07Microsoft Technology Licensing, LlcModel-based virtual system provisioning
US8549513B2 (en)2005-06-292013-10-01Microsoft CorporationModel-based virtual system provisioning
US20070083641A1 (en)*2005-10-072007-04-12Oracle International CorporationUsing a standby data storage system to detect the health of a cluster of data storage servers
US8615578B2 (en)*2005-10-072013-12-24Oracle International CorporationUsing a standby data storage system to detect the health of a cluster of data storage servers
US20070101191A1 (en)*2005-10-312007-05-03Nec CorporationMemory dump method, computer system, and memory dump program
US20070112847A1 (en)*2005-11-022007-05-17Microsoft CorporationModeling IT operations/policies
US7941309B2 (en)2005-11-022011-05-10Microsoft CorporationModeling IT operations/policies
US20070168740A1 (en)*2006-01-102007-07-19Telefonaktiebolaget Lm Ericsson (Publ)Method and apparatus for dumping a process memory space
US7913105B1 (en)2006-09-292011-03-22Symantec Operating CorporationHigh availability cluster with notification of resource state changes
US8209417B2 (en)*2007-03-082012-06-26Oracle International CorporationDynamic resource profiles for clusterware-managed resources
US20080222642A1 (en)*2007-03-082008-09-11Oracle International CorporationDynamic resource profiles for clusterware-managed resources
US20080228923A1 (en)*2007-03-132008-09-18Oracle International CorporationServer-Side Connection Resource Pooling
US8713186B2 (en)2007-03-132014-04-29Oracle International CorporationServer-side connection resource pooling
US9158779B2 (en)2007-05-172015-10-13Novell, Inc.Multi-node replication systems, devices and methods
US8346719B2 (en)2007-05-172013-01-01Novell, Inc.Multi-node replication systems, devices and methods
US20080288622A1 (en)*2007-05-182008-11-20Microsoft CorporationManaging Server Farms
US20090323640A1 (en)*2008-06-262009-12-31Qualcomm IncorporatedFair resource sharing in wireless communications
US8547910B2 (en)*2008-06-262013-10-01Qualcomm IncorporatedFair resource sharing in wireless communications
US8774119B2 (en)2008-06-262014-07-08Qualcomm IncorporatedFair resource sharing in wireless communication
US20090327798A1 (en)*2008-06-272009-12-31Microsoft CorporationCluster Shared Volumes
US7840730B2 (en)2008-06-272010-11-23Microsoft CorporationCluster shared volumes
US10235077B2 (en)2008-06-272019-03-19Microsoft Technology Licensing, LlcResource arbitration for shared-write access via persistent reservation
US8560273B2 (en)2008-12-232013-10-15Novell, Inc.Techniques for distributed testing
US9632903B2 (en)2008-12-232017-04-25Micro Focus Software Inc.Techniques for distributed testing
US9454444B1 (en)2009-03-192016-09-27Veritas Technologies LlcUsing location tracking of cluster nodes to avoid single points of failure
US9191505B2 (en)2009-05-282015-11-17Comcast Cable Communications, LlcStateful home phone service
US8458515B1 (en)2009-11-162013-06-04Symantec CorporationRaid5 recovery in a high availability object based file system
US8108715B1 (en)*2010-07-022012-01-31Symantec CorporationSystems and methods for resolving split-brain scenarios in computer clusters
US8943082B2 (en)2010-12-012015-01-27International Business Machines CorporationSelf-assignment of node identifier in a cluster system
US9069571B2 (en)2010-12-012015-06-30International Business Machines CorporationPropagation of unique device names in a cluster system
US8788465B2 (en)2010-12-012014-07-22International Business Machines CorporationNotification of configuration updates in a cluster system
US8495323B1 (en)2010-12-072013-07-23Symantec CorporationMethod and system of providing exclusive and secure access to virtual storage objects in a virtual machine cluster
US20120151265A1 (en)*2010-12-092012-06-14Ibm CorporationSupporting cluster level system dumps in a cluster environment
US8819074B2 (en)*2011-06-272014-08-26Sap AgReplacement policy for resource container
US20140059082A1 (en)*2011-06-272014-02-27Ivan SchreterReplacement policy for resource container
US8572130B2 (en)*2011-06-272013-10-29Sap AgReplacement policy for resource container
US20120331019A1 (en)*2011-06-272012-12-27Ivan SchreterReplacement policy for resource container
US20140059265A1 (en)*2012-08-232014-02-27Dell Products, LpFabric Independent PCIe Cluster Manager
US9086919B2 (en)*2012-08-232015-07-21Dell Products, LpFabric independent PCIe cluster manager
US10380041B2 (en)2012-08-232019-08-13Dell Products, LpFabric independent PCIe cluster manager
US9454415B2 (en)2013-03-112016-09-27International Business Machines CorporationCommunication failure source isolation in a distributed computing system
US9146791B2 (en)2013-03-112015-09-29International Business Machines CorporationCommunication failure source isolation in a distributed computing system
US9262324B2 (en)2013-12-122016-02-16International Business Machines CorporationEfficient distributed cache consistency
US9183148B2 (en)2013-12-122015-11-10International Business Machines CorporationEfficient distributed cache consistency
US10061841B2 (en)*2015-10-212018-08-28International Business Machines CorporationFast path traversal in a relational database-based graph structure
US20170116315A1 (en)*2015-10-212017-04-27International Business Machines CorporationFast path traversal in a relational database-based graph structure
US11113313B2 (en)2015-10-212021-09-07International Business Machines CorporationFast path traversal in a relational database-based graph structure
US10205782B2 (en)*2016-04-292019-02-12Netapp, Inc.Location-based resource availability management in a partitioned distributed storage environment
US20170318092A1 (en)*2016-04-292017-11-02Netapp, Inc.Location-Based Resource Availability Management in a Partitioned Distributed Storage Environment
US10069745B2 (en)2016-09-122018-09-04Hewlett Packard Enterprise Development LpLossy fabric transmitting device
US10474653B2 (en)2016-09-302019-11-12Oracle International CorporationFlexible in-memory column store placement
US11985076B1 (en)2022-12-142024-05-14Red Hat, Inc.Configuring cluster nodes for sharing network resources
US12242733B1 (en)2023-10-232025-03-04International Business Machines CorporationDetermining a memory contention state of a node

Also Published As

Publication number | Publication date
US6338112B1 (en)2002-01-08
US6151688A (en)2000-11-21

Similar Documents

Publication | Publication Date | Title
US6353898B1 (en)Resource management in a clustered computer system
US8239518B2 (en)Method for detecting and resolving a partition condition in a cluster
US12137029B2 (en)Dynamic reconfiguration of resilient logical modules in a software defined server
US6189111B1 (en)Resource harvesting in scalable, fault tolerant, single system image clusters
JP5214105B2 (en) Virtual machine monitoring
US5828876A (en)File system for a clustered processing system
US6192514B1 (en)Multicomputer system
US5220668A (en)Digital data processor with maintenance and diagnostic system
US6718486B1 (en)Fault monitor for restarting failed instances of the fault monitor
US5748882A (en)Apparatus and method for fault-tolerant computing
CN100485676C (en)Apparatus, system, and method for file system serialization reinitialization
US4819159A (en)Distributed multiprocess transaction processing system and method
US7631214B2 (en)Failover processing in multi-tier distributed data-handling systems
CN107077358B (en)System and method for supporting dynamic deployment of executable code in a distributed computing environment
US20020016891A1 (en)Method and apparatus for reconfiguring memory in a multiprcessor system with shared memory
KR20000028685A (en)Method and apparatus for managing clustered computer system
JPH0823835B2 (en) Faulty software component detection method and apparatus
US6424988B2 (en)Multicomputer system
JPH11504141A (en) Enhanced instrumentation software for fault-tolerant systems
US20240152286A1 (en)Fast restart of large memory systems
US20020116506A1 (en)Cross-MVS system serialized device control
Deconinck et al.A framework backbone for software fault tolerance in embedded parallel applications
Cardoza et al.Overview of digital UNIX cluster system architecture
US20250086076A1 (en)System wear leveling
CN100472457C (en)Method and system to recover from control block hangs in a heterogenous multiprocessor environment

Legal Events

Date | Code | Title | Description
STCFInformation on status: patent grant

Free format text:PATENTED CASE

FEPPFee payment procedure

Free format text:PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAYFee payment

Year of fee payment:4

FPAYFee payment

Year of fee payment:8

ASAssignment

Owner name:CPTN HOLDINGS LLC, WASHINGTON

Free format text:ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NOVELL, INC.;REEL/FRAME:027157/0583

Effective date:20110427

ASAssignment

Owner name:NOVELL INTELLECTUAL PROPERTY HOLDINGS INC., WASHINGTON

Free format text:ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CPTN HOLDINGS LLC;REEL/FRAME:027162/0342

Effective date:20110909

ASAssignment

Owner name:CPTN HOLDINGS LLC, WASHINGTON

Free format text:ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NOVELL,INC.;REEL/FRAME:027465/0227

Effective date:20110427

Owner name:NOVELL INTELLECTUAL PROPERTY HOLDINGS, INC., WASHINGTON

Free format text:ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CPTN HOLDINGS LLC;REEL/FRAME:027465/0206

Effective date:20110909

ASAssignment

Owner name:NOVELL INTELLECTUAL PROPERTY HOLDING, INC., WASHINGTON

Free format text:ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CPTN HOLDINGS LLC;REEL/FRAME:027325/0131

Effective date:20110909

FPAYFee payment

Year of fee payment:12

ASAssignment

Owner name:RPX CORPORATION, CALIFORNIA

Free format text:ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NOVELL INTELLECTUAL PROPERTY HOLDINGS, INC.;REEL/FRAME:037809/0057

Effective date:20160208

ASAssignment

Owner name:JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT, ILLINOIS

Free format text:SECURITY AGREEMENT;ASSIGNORS:RPX CORPORATION;RPX CLEARINGHOUSE LLC;REEL/FRAME:038041/0001

Effective date:20160226

ASAssignment

Owner name:RPX CORPORATION, CALIFORNIA

Free format text:RELEASE (REEL 038041 / FRAME 0001);ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:044970/0030

Effective date:20171222

Owner name:RPX CLEARINGHOUSE LLC, CALIFORNIA

Free format text:RELEASE (REEL 038041 / FRAME 0001);ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:044970/0030

Effective date:20171222
