The present application claims the domestic benefit under 35 U.S.C. § 119(e) of: U.S. Provisional Patent Application Ser. No. 62/295,492, entitled "Dynamic Location Tracking for Digital Asset Policy," filed on February 15, 2016; U.S. Provisional Patent Application Ser. No. 62/295,485, entitled "Data Analytics as a Policy Information Point," filed on February 15, 2016; U.S. Provisional Patent Application Ser. No. 62/295,487, entitled "Method to Enable Rich Policy on Status Clients," filed on February 15, 2016; U.S. Provisional Patent Application Ser. No. 62/295,495, entitled "Digital Asset Protection Policy Using Dynamic Network Attributes," filed on February 16, 2016; and U.S. Patent Application Ser. No. 15/387,123, entitled "Digital Asset Protection Policy Using Dynamic Network Attributes," filed on December 21, 2016. Each of the above-mentioned provisional and non-provisional patent applications is incorporated by reference herein in its entirety and for all purposes, as if completely and fully set forth herein.
Detailed Description
Overview
Aspects of the invention are set out in the independent claims, and preferred features are set out in the dependent claims. Features of one aspect may be applied to any aspect, alone or in combination with other aspects.
The present disclosure provides examples of methods and systems for data protection that employ dynamic networks and other information to control access to digital data assets (including physical assets, e.g., physical devices; and data assets, e.g., computer files and other forms of data, as discussed in more detail herein). Methods and systems such as those described herein utilize information sources (e.g., network and security information) and couple that information with rich policy information to control access to such digital data assets (e.g., by controlling which data assets can be decrypted, by whom, and subject to what constraints). Systems and apparatus for implementing the methods described herein are also described, including network nodes, computer programs, computer program products, computer readable media, and logic encoded on tangible media for implementing the methods.
For example, using methods and systems such as those described herein, an administrator or other entity may express and enforce policies that consider location information at one or more levels of granularity to limit access to data assets such as documents. As one example, a physician may have access to a patient record only while he or she is in the patient's room. However, when the physician moves to a less secure location (e.g., a cafeteria), he or she should not have permission to view the document at that location. As another example, a laptop or tablet computer may access a document only when an associated cellular device (e.g., a cellular phone) is located within a predetermined distance (e.g., 10 feet), which may prevent continued access to the document when the cellular device (possibly carried by the user) is separated from the laptop or tablet computer used to access the document. In one embodiment, the predetermined distance may be determined, for example, by considering the geographic distance between the devices, by basing the determination on the availability of a wireless connection between the devices, by requiring the two devices to be on the same Wireless Access Point (WAP), or by some other mechanism.
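A minimal sketch of how such a proximity condition might be checked is shown below (in Python). The helper callables (same_wap, distance_feet) and the 10-foot threshold are hypothetical illustrations and are not part of the disclosed platform.

```python
# Sketch of a device-proximity check supporting the example above. The helper
# callables and the threshold are illustrative assumptions only.
MAX_DISTANCE_FEET = 10   # example predetermined distance


def proximity_satisfied(laptop_id, phone_id, same_wap, distance_feet):
    """Return True if the associated cellular device is close enough to the
    device requesting access, under either example mechanism."""
    # Mechanism 1: both devices are associated with the same Wireless Access Point.
    if same_wap(laptop_id, phone_id):
        return True
    # Mechanism 2: estimated geographic distance between the devices is in bounds.
    distance = distance_feet(laptop_id, phone_id)
    return distance is not None and distance <= MAX_DISTANCE_FEET
```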
The ability to access a document is controlled in part by decrypting the document (previously encrypted) when access rights are granted, and then returning the document to its encrypted format when the access rights are removed. More specifically, methods and systems for data protection (e.g., those described herein) provide for the use of physical and/or virtual locations and location information gathered from network information, identities, and other relevant information to locate users, devices, data assets, and/or some combination thereof to adequately inform a policy server of the enforcement of one or more policies.
At their most basic and fundamental level, methods and systems such as those described herein protect data assets through encryption and decryption, adding a new dimension to data protection. However, platforms implementing the methods and systems disclosed herein provide more functionality than simple encryption and decryption. For example, the methods and systems protect movable data assets by controlling access to those data assets using one or more of the techniques described herein. Thus, in the event that a device is compromised or a data asset is erroneously sent to an unintended recipient, these methods and systems will prevent the data of the data asset from being decrypted (and thus prevent the underlying data asset from being accessed in any meaningful way), at least unless the requisite policy attributes are first satisfied, as will be discussed in more detail below. Further, such control may be implemented in a manner that provides additional security, e.g., via communication with one or more security monitoring and control modules and/or by alerting one or more security management entities (e.g., system administrators, security managers, and/or other such parties) to the occurrence of unauthorized access requests or other potential failure events.
In certain embodiments, such methods and systems provide for the use of a policy engine that allows an individual or entity (e.g., an administrator) to control access to digital data assets and to encrypt those data assets. Among other functions, the policy engine decides whether a user and/or user device has the ability to access data, and so may be referred to as a "single policy decision point." The attribute-driven nature of the policy engine allows the administrator to use various inputs from various systems. Examples of such inputs include, but are not limited to, information generated by an Identity Services Engine (ISE) and/or a Mobility Services Engine (MSE), as well as various additional capabilities and services (e.g., posture evaluation, which may be provided, for example, by an Inline Posture Node (IPN)). Some embodiments also include the ability to analyze various underlying events and event logs to build a complete telemetry and analytics engine that learns from the platform in order to dynamically adjust policies.
As a complement to aspects such as the methods and systems disclosed herein, network user/device authentication provides access to, for example, network resources and, in some cases, stronger assertions of identity. However, such network user/device authentication has no bearing on access to data residing, for example, in a file repository, in a record in a database, or in a portion of some enterprise-customized application, among other examples. In many cases, network authentication systems track user types, user locations, device types, and the like, which have not previously been used for policy decisions above the network layer. However, the functionality provided by methods and systems such as those disclosed herein enables the use of such information to control access to digital data assets.
The disclosure provided herein describes examples of a rights-based data access platform that protects data assets wherever they may be located, moved to, accessed from, and so on, thereby eliminating data leakage and data loss vectors, among other such undesirable events. Methods and systems such as those disclosed herein provide, among other functions, contextual awareness of user, location, time, and threat vectors across the data, application, and network layers. In one aspect, the methods and systems allow for protection of digital data assets based on factors related to: the identity of the party accessing the data asset, which data assets are being accessed, what is being done with or by the data asset (e.g., read-only versus write access to the data asset), when and where the data asset is accessed, and the amount of time that has elapsed since the policy was last updated or access was last verified. Additionally, these methods and systems allow for the application of advanced encryption algorithms based on threat parameters. Furthermore, fine-grained policies may be used to make protection consistent with business and Intellectual Property (IP) environments. In addition, these methods and systems provide techniques that can be used to protect digital data assets throughout their lifecycle.
Methods and systems according to the present disclosure can provide a flexible platform for today's enterprise systems to securely collaborate on and transport data assets within an organization or across organizations, whether on premises or in the cloud. By doing so, the methods and systems can accelerate secure business productivity and prevent data leakage, data loss, and data abuse vectors. These methods and systems can work seamlessly across operating systems and applications, supporting easy integration with traditional infrastructure and policy workflows. These methods and systems provide security for digital data assets (e.g., as discussed in more detail elsewhere herein) under various circumstances (e.g., when those digital data assets are accessed, sent between a storage system and a user or from one user to another, and/or otherwise transferred or transmitted). The present disclosure describes a rich set of contextual policy attributes (e.g., information about users, groups, and locations at the network, application, and data layers, coupled with various encryption technologies) that significantly improve data access security. The present disclosure also describes methods and systems that help ensure data privacy by ensuring that data assets are not subject to unauthorized intrusion. Further, such methods and systems may provide a policy that enforces constraints on which users have access to a given data asset. Thus, the data asset remains private between the parties sharing the data asset, based on the applicable policy or policies. In one embodiment, the digital data assets are managed by a content manager (e.g., a cloud-based hosted-service repository (e.g., GITHUB) or other source control platform). In one embodiment, application-level controls may be used to prevent users from copying and pasting (or performing similar functions, such as cutting and pasting) information from a secure document (after decryption) into an unsecured document. Likewise, application-level controls may be used to prevent a user from saving a document as an unsecured or unencrypted document, or to permit a user to save only encrypted documents.
The disclosure provided herein describes various components that may be configured to work together to collectively protect what are referred to herein as digital data assets by facilitating control of access thereto. Digital data assets (or more simply "assets") can include, for example, documents, spreadsheets, source code, other files (e.g., video files, image files, and audio files), or any other data in use, storage, or transmission. Digital data assets may be protected by defining one or more policies that govern access to the digital data assets and/or to groups of digital data assets to which the digital data assets may belong (membership in more than one such group (referred to herein as a data asset group) being permitted, as will be understood in light of this disclosure).
Policies such as those discussed herein may be defined using various attributes such as, for example, the principal or user attempting to access the protected data asset, the device used to attempt to access the protected data asset, the particular action for which permission is being requested (e.g., opening a document), the particular resource for which access is being requested, network attributes, security attributes, the physical/geographic location of the user and/or device, the physical/geographic location of another user and/or another user device relative to the user device through which the data asset is being accessed (or the physical distance between those devices, the availability of a wireless connection between those devices, whether those two devices are on the same Wireless Access Point (WAP), and other such determinations of physical/geographic location and relationship), various time metrics (e.g., the time of day, or the length of time elapsed since the last event or authentication), device type, the software and/or operating system available on the user's device, biometric attributes of the user (e.g., fingerprint recognition or a retinal scan), the security posture of the user and/or device, the results of previous authentication requests (e.g., previous denials or grants of permission), the existence of a hardware root of trust, the reputation of the user, device, and/or network or sub-parts thereof, and the virtual location of the user or device (e.g., the location of an avatar in a virtual environment).
Although a policy may be defined in terms of a single such attribute, in practice a policy is typically defined based on a combination of these attributes. For example, in one embodiment, a policy may allow a first user (e.g., an assistant attorney) to open and/or continue to access a certain document only when a second person (e.g., a more senior attorney, such as a partner) is in the same room as, or within a certain distance of, the first person. As another example, in one embodiment, a policy may allow a first user device (e.g., a laptop computer) to open and/or continue to access a certain document only when a specified second device (e.g., the user's cell phone) is in the same room as, or within a certain distance of, the first device. Such a relationship constraint may be used to prevent or remove access rights to a document when the two devices are (or become) separated beyond an allowed distance, which may be taken to indicate that the user (who may be carrying the second device on his or her person) has somehow become separated from the first device (e.g., the laptop) and that access, or continued access, is no longer secure. In one embodiment, the distance may be determined, for example, by considering the geographic distance between the devices, by basing the determination on the availability of a wireless connection between the devices, by requiring the two devices to be on the same Wireless Access Point (WAP), or by some other mechanism.
The preceding paragraphs include examples of situations in which the access rights of one person may depend on the location of a second person. Similarly, a policy may allow a first person to access a protected data asset only when a second person, who may not be trusted for various reasons, is not in the same room as, or within some proximity of, the person requesting access to the protected data. Due to the location tracking functionality discussed herein, the system 100 may determine such information by monitoring events related to the second person (e.g., the current geographic location of the second person's cell phone) without requiring the second person to explicitly log into the system. Further, one or more different attributes (e.g., the location of the user device and the software applications installed on the user device) may be combined in other ways to define a policy.
At a high level, the policies mentioned herein are defined and stored in a policy repository, such as that discussed in more detail herein in connection with fig. 1A and 1B. An administrator may specify policies in terms of attributes (e.g., location or security status) and asset resources (e.g., digital data assets or groups of digital data assets). In one embodiment, the attributes may be weighted. The rendering of policies depends on the policy information sources, and can accept information as input from disparate sets of sources to provide context about the actors (e.g., users/groups) and the resources involved (e.g., documents/groups). For example, an Identity Services Engine (ISE) may be used to provide information about the identity of a user and/or device. Such an ISE will have a rich set of primitives with respect to elements such as users, devices, and networks, which may be incorporated into a policy scheme in accordance with the present disclosure.
In addition, location technology (e.g., Connected Mobile Experiences (CMX) technology, such as that provided by Cisco Systems, Inc., of San Jose, California) may also be used to provide location information as a policy attribute. For example, a policy may specify that files can only be accessed within an organization's campus, or even only within a particular room of a building, and such information can be determined and delivered via CMX. Other identity and location services may also be used in conjunction with the present disclosure. In an example embodiment, location policies may also be implemented via cookies. A location may be mapped to an area in CMX, and the location system may track the user's movement. In this case, when the user's location changes, CMX may notify the policy service of the change, and the impact of the location change may be evaluated against the policy instance. Further, when the location cookie "expires," a re-authentication request is sent to the policy service, and the appropriate results can be returned to the client. In addition, other aspects of the policy may be periodically reevaluated, as defined in any particular policy or situation.
A policy server such as that described herein may dynamically bind policies to "real-time" data and render policies to consuming user devices (e.g., client computing devices that include, for example, a security agent in addition to hardware and other software, or so-called web agents running in a cloud environment). As the information is updated, the policy instance is also updated, which ensures that the state of the system reflects the most current available information. Using location as an example, as a user (or a user's device) moves, the movement is tracked and the appropriate policy container is updated. The change in the attribute is then used to ensure proper enforcement based on the new location (or other changed attribute). The policy is enforced via client software (e.g., a security agent) that retrieves the rendered enforcement rules (via a data access control server discussed below in connection with fig. 1A), and then may decrypt or re-encrypt the data asset(s) as necessary. In addition, the system supports rendering the appropriate policies for a range of agents in an appropriate format: a policy may be rendered for any agent as long as the agent can perform the necessary enforcement, and the model of agent interaction may vary to include push, pull, or a combination of both.
In one embodiment, a policy is created by an administrator via a user interface or API. A policy session is associated with a particular client and contains the policy instance for that client. When a client registers with the policy service, a session is created and an instance for that client is added to the session. At this point, the policy is ready for rendering. In one embodiment, the client requests its permissions via an API (e.g., a REST API), which triggers rendering of the policy in the appropriate client-specific form. In one embodiment, a JSON document is returned that contains the session's authorized document groups, associated timer values (e.g., expiration, revalidation, etc.), and cookies representing context attributes.
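As a purely illustrative sketch, the rendered JSON document described above might take a form such as the following (expressed here as a Python dictionary); all field names, identifiers, and timer values are hypothetical examples rather than a prescribed schema.

```python
import json

# Hypothetical example of a rendered policy/permission document for one
# client's session; field names and values are illustrative only.
rendered_permissions = {
    "session_id": "3f6c9e2a",              # policy session for this client
    "authorized_document_groups": [
        {"group": "engineering", "actions": ["open", "edit"]},
        {"group": "finance", "actions": ["open"]},
    ],
    "timers": {
        "expiration_seconds": 3600,        # permissions expire after an hour
        "revalidation_seconds": 300,       # re-validate context every 5 minutes
    },
    # Opaque, encoded context attributes (e.g., location) with a time value.
    "context_cookie": "eyJsb2NhdGlvbiI6ICJidWlsZGluZy03In0=",
}

print(json.dumps(rendered_permissions, indent=2))
```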
In one embodiment, a client (or an agent on the client) requests access to a particular data asset based on a value (e.g., an "asset_ID") that uniquely identifies the data asset, and the policy service determines whether to allow or deny the requested access. If access is allowed, the policy service returns a key with a particular time validity. After the key expires, the client (or agent) optimally re-validates its access to the data asset. In this embodiment, the client (or agent) does not have to retrieve and manage a permission list. Rather, when access to a particular asset (or assets) is requested, the client (or agent) may request information for just that asset (or those assets).
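The per-asset request flow described in this embodiment could be sketched as follows; the policy_service object, its request_access() method, and the returned fields are assumptions made purely for illustration.

```python
import time

# Sketch: request access to a single protected asset by its asset_ID, use the
# returned time-limited key, and re-validate once the key expires. The
# policy_service API and its response fields are hypothetical.

def get_decryption_key(policy_service, asset_id):
    decision = policy_service.request_access(asset_id=asset_id)
    if not decision.get("allowed"):
        raise PermissionError(f"access to {asset_id} denied by policy")
    # The key is valid only for a limited time; record when re-validation is due.
    expires_at = time.time() + decision["key_ttl_seconds"]
    return decision["key"], expires_at


def key_for_asset(policy_service, asset_id, cached=None):
    """Return a currently valid key, re-validating with the policy service
    when a previously issued key has expired (no cached permission list)."""
    if cached is not None:
        key, expires_at = cached
        if time.time() < expires_at:
            return key, expires_at
    return get_decryption_key(policy_service, asset_id)
```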
In certain embodiments, formats other than JSON are used. In one embodiment, the client requests initial permissions, re-validations, and context validations (via the cryptographic server) through an API (e.g., a REST API), and these are returned in JSON or another suitable format.
In one embodiment, once a given policy is created or otherwise defined, the policy is pushed from the policy repository to the user device (e.g., a client computing device that includes, for example, a security agent in addition to hardware and other software). As discussed in more detail in connection with fig. 1A and 1B, a security agent according to the present disclosure includes the necessary software components to provide functionality such as that described herein. In one aspect, functionality such as that described in this disclosure is independent of the underlying software program used to create, access, modify, or otherwise process or manipulate the decrypted data assets. However, in some embodiments, a policy may require or prohibit a particular underlying software program before access to the document is permitted in the first place. For example, a policy may specify that high-importance documents can only be opened by trusted word processing programs, and may exclude other unknown or untrusted word processors (which may be more susceptible to malicious use (e.g., spyware) once the file is decrypted). Similarly, in one aspect, functionality such as that described in this disclosure is independent of the underlying operating system. However, in some embodiments, a particular underlying operating system may be required (or at least preferred) or prohibited, for reasons similar to those provided above with respect to the underlying software program. For example, an operating system such as MICROSOFT WINDOWS or MAC OS may be specified by policy as trusted, while certain open source platforms may not be trusted in some cases. As such, the present disclosure may be used in conjunction with any mobile (or other) operating system, but a policy may specifically designate certain operating systems as trusted and/or otherwise permitted, or as untrusted and/or otherwise prohibited.
In addition to the underlying software program used to open or view decrypted data assets, methods and systems according to the present disclosure provide for the use of security software to perform certain authentication and policy-related operations disclosed herein. Such security software may take the form of a stand-alone software program, a plug-in to another software program, software code, software running on a cloud service, a compiled binary, or another suitable form of software (collectively, "modules"). In any case, methods and systems according to the present disclosure generally do not allow the underlying software program to decrypt data assets without the functionality provided by a security software module operating in conjunction with such methods and systems. Thus, for example, such policies may be enforced at the endpoint device (i.e., the device from which the data access attempt is made).
A policy (and the information included with the policy) is used, in conjunction with installed software (e.g., word processing software) and a security agent in the cloud or on a user device (e.g., an endpoint or other client computing device), to determine whether the user device can provide access to a given digital data asset or group of digital data assets. In one embodiment, access to the protected data asset is provided via a separate software application, but only after the security agent determines that the secure data asset can be decrypted and accessed. After decryption, various timers may be used to ensure that the rights remain current, that the keying material is up to date, and that dynamic attributes such as location remain valid. In one embodiment, this and other information may be stored in a cookie or other identifying encoded information on the user machine or the data access control server.
In one embodiment, the systems and methods disclosed herein differentiate between users and clients. To this end, a user may be considered the actual identity of an authenticated individual (e.g., Alice or Bob), while a client represents a particular user on a particular device. For example, a client may be identified as "Bob" via a UUID that uniquely maps a particular user (e.g., Bob@abc.com) to the device being used (e.g., his tablet). If a given user registers on another device (e.g., Bob also has a laptop), Bob's new registration typically includes a different UUID. In one embodiment, the client may present itself to a cryptographic service (e.g., cryptographic service module 162), which in turn registers known clients with the policy service 110. User information (e.g., the user's identity (e.g., Bob@abc.com), group information, and other session information about a particular user's active session) can be learned from the ISE, which acts as both a directory and a proxy for external directory repositories. User information from an ISE may be used in conjunction with client registration to render a policy session for a given user and client. The ISE may also contain a mapping of authenticated users to ISE groups, as well as session details (e.g., device type, IP address, etc.).
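A minimal sketch of the user-versus-client distinction described above might derive a stable client identifier from the combination of user identity and device identifier; the use of uuid5, the namespace, and the example identifiers are illustrative assumptions only.

```python
import uuid

# Illustrative namespace for client UUIDs; any stable namespace would do.
CLIENT_NAMESPACE = uuid.uuid5(uuid.NAMESPACE_DNS, "example-asset-protection")


def client_uuid(user_identity: str, device_identifier: str) -> uuid.UUID:
    """Map a particular user on a particular device to a client UUID."""
    return uuid.uuid5(CLIENT_NAMESPACE, f"{user_identity}|{device_identifier}")


# The same user on two different devices yields two different clients.
print(client_uuid("Bob@abc.com", "tablet-1234"))
print(client_uuid("Bob@abc.com", "laptop-5678"))
```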
In one embodiment, the determination is based at least in part on the specific geographic location of the user device, such as may be determined with the accuracy of a Global Positioning System (GPS) or other geographic positioning technology. For example, in one embodiment, the policy may use the actual geographic location of the device, in conjunction with the user identification and the device identification, to determine whether an approved user is using an approved device at an approved location. The combination of these parameters adds a meaningful additional level of security, preventing malicious or even accidental activity (e.g., accidentally emailing confidential documents to the wrong person) by ensuring not only that documents are accessed by approved people and machines, but also that they are accessed at approved physical locations.
Data assets (e.g., documents) for which a policy is being evaluated are encrypted prior to policy evaluation. When encrypted or otherwise protected from access, a digital asset may be referred to as a protected data asset. In one embodiment, the user (e.g., through the user device) may access the protected data asset (e.g., the encrypted document) only if it is determined that the user is an authenticated user attempting to access the protected data asset on an authenticated device, and is doing so while using an authenticated application at an authenticated location. Until these conditions are met, the protected data asset remains encrypted and unreadable. In other embodiments, different criteria may be used to determine when to grant access to (e.g., permit decryption of) the protected data asset.
If the policy requirements are satisfied, access to the data asset is permitted. In one embodiment, the access is granted by exchanging a digital token and/or associated decryption key information (collectively, "decryption material") that allows the security agent to decrypt the document. Absent the decryption material, which is provided only if the user is properly authenticated (which typically requires that the applicable policies be satisfied), the user device will not be able to allow the installed application software to open the document (e.g., because the document will remain encrypted and thus cannot be opened). Furthermore, a user device (or cloud-based device) that is not equipped with a security agent will not be able to authenticate the user and/or device attempting to access the secure data asset, and thus will not be able to decrypt or otherwise access the secure data asset. Thus, methods and systems according to the present disclosure provide enhanced security that prevents, for example, unauthorized users from accessing a secure asset, and even prevents authenticated users from accessing the secure asset when the user or device attempts to access it at an unauthorized location or when other such criteria are not met.
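The gating of decryption material on policy satisfaction could be sketched as below; the attribute names, the policy structure, and the key_store object are hypothetical illustrations of the concept rather than the actual implementation.

```python
# Sketch: decryption material is released only when every policy condition
# (user, device, application, and location) is satisfied. All names here are
# illustrative assumptions; they do not reflect a prescribed API.

def evaluate_access(request, policy):
    conditions = {
        "user": request["user"] in policy["approved_users"],
        "device": request["device"] in policy["approved_devices"],
        "application": request["application"] in policy["approved_applications"],
        "location": request["location"] in policy["approved_locations"],
    }
    return all(conditions.values()), conditions


def request_decryption_material(request, policy, key_store):
    allowed, conditions = evaluate_access(request, policy)
    if not allowed:
        failed = [name for name, ok in conditions.items() if not ok]
        # Without the token/key, the document remains encrypted and unreadable.
        raise PermissionError(f"policy not satisfied: {', '.join(failed)}")
    # Token and key material ("decryption material") for the security agent.
    return {"token": key_store.token_for(request["asset_id"]),
            "key": key_store.key_for(request["asset_id"])}
```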
A system implemented according to methods and systems such as those described herein provides a core policy service and/or the application of contextual information from one or more information sources. Such a policy service can ingest and store such information, and then use a language that describes policy in terms of attribute information related to a resource or group of resources (e.g., a document or group of documents), among other examples discussed in more detail herein. A high-level overview of an implementation of such a system is provided as an example in fig. 1A and 1B.
Example Policy Service System
FIG. 1A depicts a system 100 that includes a policy service 110, which in turn includes a session store 112, a policy store 114, and an event store 116. The policy service 110 provides a "source of truth" for the asset protection system by specifying which users and user devices are allowed to access which digital data assets, and under what conditions (e.g., conditions described elsewhere herein). Although not explicitly depicted in FIG. 1A, policy service 110 provides a series of application programming interfaces ("APIs") to the various sources and consumers of the requisite information. For example, an operator/administrator interface uses a northbound API for policy management and other interactions with the system. A policy rights API may be used for policy retrieval and (re)validation. The data access control service and the authentication and location services interface with the policy server through respective APIs.
Further, a multi-protocol messaging layer may be used to facilitate communication between other components, such as those described in this disclosure. Such a messaging layer may be used for event generation and security information for data analysis and reporting, for external information points (e.g., ISE and location services), for the export of registration details of cryptographic clients, and for the ingestion of asset classification information (e.g., policy information inputs 118(1)-(n) (collectively, "policy information inputs 118")). Policy information inputs 118 provide the attributes required to define and enforce policies. The information provided can range from mandatory (e.g., Identity Services Engine (ISE) session information) to optional (e.g., security postures). Regardless of the type of information provided, the information sources provide their data to the policy system using the multi-protocol messaging system.
Dynamic information sources may also be used. By its nature, such information may change frequently, and these changes may be used by the systems and methods disclosed herein to enforce policy. Because the security agent (e.g., on the user device) does not necessarily need to know most of the policy information details, the handling of dynamic attributes on the client device may be performed via encoded data (e.g., cookies). Thus, when the policy server receives the policy information, the policy server creates a policy instance, which in turn references the context information. The context may be passed to the client encoded in a token (e.g., a cookie), or explicitly evaluated and communicated in a policy decision, with a time value defined, for example, by an administrator. When the timer or time value expires, one or more dynamic attributes may be rechecked against the instantiated policy on the server, and the appropriate permissions returned to the querying system based on the context. The token (e.g., cookie) may be provided in JSON or another suitable format.
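One way to picture the encoded context and its time value is the following sketch; the token contents, the administrator-defined lifetime, and the recheck_attributes callback are all hypothetical illustrations.

```python
import base64
import json
import time

# Sketch of a context "cookie": encoded attribute data carrying an
# administrator-defined time value, rechecked against the instantiated policy
# once the timer expires. Field names and lifetimes are illustrative only.

def issue_context_token(attributes: dict, lifetime_seconds: int) -> str:
    payload = {"attributes": attributes,
               "expires_at": time.time() + lifetime_seconds}
    return base64.b64encode(json.dumps(payload).encode()).decode()


def permissions_for(token: str, recheck_attributes) -> dict:
    payload = json.loads(base64.b64decode(token))
    if time.time() >= payload["expires_at"]:
        # Timer expired: dynamic attributes are rechecked against the
        # instantiated policy on the server side.
        return recheck_attributes(payload["attributes"])
    return {"allowed": True, "context": payload["attributes"]}
```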
The session store 112 may be a database or other memory for storing information about current user sessions. Policy store 114 may be a database or other memory maintained via internal APIs. These internal APIs may provide storage objects and other functionality for storing policy-centric information, including, for example, registered clients, active user session information, attribute/context information, and policy objects. The event store 116 may be a database or other memory for storing information about events generated by the systems and methods disclosed herein. As just some examples, an event may be the creation or update of a policy, a request by a user to perform a particular action (e.g., open a protected data asset at a particular location), a client or security agent enforcing a policy (or making a request to enforce a policy in a given situation), or information about the location and movement of a user or user device. Also shown in fig. 1A are the various policy information inputs 118(1)-(n) (collectively, "policy information inputs 118"), which are discussed in more detail throughout this disclosure.
Policy service 110 allows an administrator or other such individual to express a desired policy as a combination of attributes. For example, policies may specify attributes or requirements such as: (1) allowing members of an "engineering group" to access any engineering documents (i.e., documents identified as being created by, the responsibility of, or otherwise associated with that group); (2) allowing members of a "finance group" to access any financial files (i.e., documents identified as being created by, the responsibility of, or otherwise associated with that group), but only at their respective desks and/or in their respective offices; and (3) allowing a particular user to access a particular document at his or her office. Of course, these are just a few of the many examples possible in practice.
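The three example policies above might be captured, purely for illustration, in a structure such as the following; the group names, locations, identifiers, and field names are hypothetical and do not represent a required schema.

```python
# Hypothetical, illustrative encoding of the three example policies above.
example_policies = [
    {   # (1) engineering group members may access any engineering document
        "subject": {"group": "engineering"},
        "resource": {"document_group": "engineering"},
        "conditions": {},
    },
    {   # (2) finance group members may access financial files, but only at
        #     their respective desks and/or in their respective offices
        "subject": {"group": "finance"},
        "resource": {"document_group": "finance"},
        "conditions": {"location_in": ["own_desk", "own_office"]},
    },
    {   # (3) a particular user may access a particular document at the office
        "subject": {"user": "alice@example.com"},
        "resource": {"document_id": "doc-1234"},
        "conditions": {"location_in": ["own_office"]},
    },
]
```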
Fig. 1A also depicts a data access control server 120 and a user device 130. The data access control server 120 and the user device 130 provide the basis for protection functions (e.g., encryption, decryption, and application integrity functions) according to methods and systems such as those described in this disclosure. The data access control server 120 supports secure communications between the policy service 110 and client software (e.g., a security agent) running on endpoint devices such as user device 130. The data access control server 120 interfaces with the policy service 110 to receive user rights, manages protection information (e.g., a key repository), provides initial provisioning of and a secure communication link to the user device 130, and manages the integrity of approved applications and/or modules as requested and appropriate. In one embodiment, the data access control server 120 is used to encrypt (or otherwise protect) a data asset before the data asset is sent (as a protected data asset) to the user device 130. In other embodiments, the data asset is encrypted by software on the computing device that initiated the transmission of the data asset to the user device 130 (e.g., the computing device on which the data asset was created or last modified). A number of services run in these layers to provide the full set of data access controls that may be required by the data access control server, the user device 130, and/or the security agent 136.
In slightly more detail, the data access control server 120 provides a key store 121; a back-end services manager 122, which in one embodiment includes a software signing module 123, a software wrapping module 124, and a bulk encryption module 125; a server software module 126 (e.g., a mobile target server); a software integrity manager 127; and a security gateway 128. The key store 121 may be used to store and provide the decryption keys that the security agent 136 uses as needed. Back-end services manager 122 provides the basic elements of the platform according to embodiments such as those described in this disclosure. For example, software signing is performed by the software signing module 123, software wrapping is performed by the software wrapping module 124, and bulk encryption is performed by the bulk encryption module 125. Software signing module 123 may be used to digitally sign software that is certified to work with a given digital data asset or assets. In this way, the software signing module 123 provides application integrity for custom or off-the-shelf enterprise desktop applications. The software wrapping module 124 uses well-established techniques to wrap mobile applications in order to provide application integrity. The bulk encryption module 125 is used to perform bulk-level encryption of a repository of files, which may be used in conjunction with the present disclosure to provide encryption of files. The server software module 126 performs and controls various aspects of the functions of the data access control server 120, such as those provided in or otherwise used in conjunction with methods and systems such as those described in this disclosure. Software integrity manager 127 is software for tracking certified software signatures (e.g., software signatures generated by software signing module 123). In one embodiment, software integrity manager 127 stores the certified software signatures as a list in a flat file. In other embodiments, software integrity manager 127 stores the certified software signatures in a database, log file, or other data structure suitable for storing such information. The security gateway 128 provides a secure gateway interface between the data access control server 120 and the user device 130.
User device 130 (e.g., a client machine such as a desktop computer, laptop computer, tablet computer, cellular telephone or other mobile device, web application agent, or any other computing device capable of being configured to implement the necessary features provided herein) includes an operating system 132, software application(s) 134, and security agent 136, as well as other software and hardware (e.g., RAM, non-transitory memory, microprocessors, etc.). User device 130 may also store protected digital data assets, such as protected data 138. Further, as shown in fig. 1A, although protected data 138 may of course be stored at least temporarily on user device 130 (albeit still in non-transitory computer memory), protected data 138 may also be stored at other locations depicted in fig. 1A (e.g., at policy service 110 and data access control server 120), as well as at other locations both within and outside of the system depicted in fig. 1A (e.g., in the cloud, or on the computer of the user who authored, modified, protected, or sent the data asset in the first instance). As discussed elsewhere herein, the operating system 132 may in principle be any operating system, but in practice a given policy may require or prohibit the use of certain operating systems in any given instance. Similarly, the software application(s) 134 may in principle be any software application, but in practice a given policy may require or prohibit the use of certain software applications in any given instance.
When a user or device attempts to access a protected data asset on the user device, the security agent 136 enforces the policy locally (e.g., on the user device 130). User device 130 receives a policy determination (e.g., whether access to a protected data asset or assets should be granted) from another device (e.g., policy service 110, data access control server 120, encryption server 160, or some other computing device configured to make such a determination). If a determination is made to allow access to the protected data asset, the relevant token and keying material to be used for decryption are provided to security agent 136. In one embodiment, these materials are provided by policy service 110, data access control server 120, and/or encryption server 160. Further, the user device generates events regarding the policies being enforced and communicates these events to a system such as the system disclosed herein. In one embodiment, the security agent 136 is software (e.g., mobile target client software) that serves as a local root of trust and provides local policy enforcement functionality in accordance with the systems and methods disclosed herein. The security agent has multiple components, but its general functions are to: (1) act as a communication link between the data access control server 120 and the user device 130; (2) detect the launch of an application (process) that requires enforcement of a policy in accordance with the present disclosure; (3) verify that the application(s) opening a file are indeed the application(s) specified by an Information Technology (IT) department, administrator, or other person or group; (4) perform decryption functions; and (5) maintain logs and generate events.
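The five general functions enumerated above could be sketched roughly as follows; the process-watching, integrity-verification, and server-communication interfaces shown are assumptions made for illustration only, not the agent's actual interfaces.

```python
import logging

log = logging.getLogger("security_agent")

# Rough sketch of the security agent's enumerated functions; the watcher,
# verifier, control-server link, and decryptor objects are hypothetical stand-ins.

def run_agent(process_watcher, integrity_verifier, control_server, decryptor):
    for launch in process_watcher.application_launches():      # (2) detect launch
        if not integrity_verifier.is_approved(launch.binary):   # (3) verify app
            log.warning("unapproved application %s blocked", launch.binary)
            control_server.report_event("unapproved_application", launch)
            continue
        decision = control_server.request_decision(launch)      # (1) server link
        if decision.get("allowed"):
            decryptor.decrypt(launch.asset_id, decision["key"])  # (4) decrypt
            log.info("asset %s decrypted for %s", launch.asset_id, launch.binary)
        else:
            log.info("access to %s denied by policy", launch.asset_id)
        control_server.report_event("access_attempt", launch)    # (5) log/events
```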
A location tracking service may also be included in the user device 130, and may provide various functions (e.g., providing relevant information to a location tracking server, including to the various modules that may be implemented in such a location tracking server), all of which are discussed in more detail below in connection with fig. 1B. Specific examples of location tracking services may include GPS functionality, short-range communication functionality (e.g., BLUETOOTH), identification functionality (e.g., Radio Frequency Identification (RFID) functionality), wireless and/or cellular networking functionality, and so on.
Also depicted in FIG. 1A is a user interface 140, which provides a cloud-based interface for different administrative and user domains. In one embodiment, user interface 140 is built on an infrastructure for a cloud-based VPN service. In one embodiment, the user interface 140 provides interaction between the administrative domain and the end user. The user interface 140 allows an administrator to specify both policy and monitoring functions independently of each other. As depicted herein, user interface 140 includes an end user interface 142, a management interface 144, a security control interface 146, and an event monitor 148.
Fig. 1A also depicts a data analysis and reporting system ("DARS") 150 that provides various services related to the systems and methods disclosed herein. As shown in fig. 1A, DARS 150 includes an NLP service module 151, a data asset classification module 152, an audit service module 153, a machine learning module 154, and an analysis module 155. Event data generated by the systems and methods disclosed herein may provide important insight into system state, data access patterns, and policy violations. Additionally, machine learning techniques can be applied to the data to dynamically generate and/or refine policy. In various embodiments, the data service may provide access to data for administrator troubleshooting/verification, provide access to data for compliance verification, perform data analysis for baselining and policy-deviation detection, and provide automated feedback to the policy service to update policies based on observed behavior.
In some embodiments, digital data assets are classified so that they can be appropriately encrypted and subjected to policy enforcement. Such functionality is provided by the data asset classification module 152. Such classifications may range from relatively simple (e.g., identifying a data asset as an "engineering" document) to more granular (e.g., identifying a data asset as a "financial" document, and further identifying the file as "regulated"). Further, the classification method may vary, and may include implicit classification based on storage location, explicit user input, or machine learning techniques. Regardless of the method used to classify the assets, when the assets are ingested into the system described herein, the results of the classification can be associated with the assets and consumed by the encryption system.
Additionally, fig. 1A depicts an encryption server 160. The encryption server 160 is a server configured to perform one or more cryptographic services (e.g., cryptographic services that may be used in connection with the systems and methods provided in this disclosure). In the illustrated embodiment, encryption services are performed by an encryption service module 162, which is a software module configured to encrypt digital data assets. In the illustrated embodiment, decryption services are performed by a decryption service module 164, which is a software module configured to facilitate decryption of protected digital data assets (e.g., by providing a decryption key, token, or other decryption information). In some embodiments, the encryption server is designed to provide a logical representation of the participating clients to the policy service. When a client device is added to a system such as system 100, the encryption server updates the policy service 110 with the necessary and/or relevant information. In these and other embodiments, the encryption server requests policy determinations (or decisions) on behalf of clients in order to enforce client asset access. The encryption server 160 is designed to manage the responsibilities associated with core cryptographic functions, such as asset encryption, token and key generation/regeneration and distribution, and/or other such functions. The encryption server 160 may also be responsible for generating events for consumption by other components of a system architecture such as that depicted in fig. 1A. Although the encryption server 160 is depicted in fig. 1A as being communicatively coupled (e.g., via a network connection) to the policy service 110 (as indicated by the solid line representing that connection), in practice the encryption server 160 may be communicatively coupled (directly or indirectly) to other components of the system, for example via the alternative connection to the data access control server 120 indicated by the dashed line in fig. 1A.
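As a self-contained sketch of the core cryptographic responsibilities described above (asset encryption, key generation, and decryption), the following uses the Fernet primitive from the Python cryptography package; this is an illustrative stand-in under stated assumptions, not the encryption scheme or key management actually employed by encryption server 160.

```python
from cryptography.fernet import Fernet

# Illustrative only: symmetric encryption of a data asset with a per-asset key.
# The platform's actual algorithms and key management are not specified here.

class TinyEncryptionService:
    def __init__(self):
        self._keys = {}                       # asset_id -> key (a toy key store)

    def encrypt_asset(self, asset_id: str, plaintext: bytes) -> bytes:
        key = Fernet.generate_key()           # key generation
        self._keys[asset_id] = key
        return Fernet(key).encrypt(plaintext)    # asset encryption

    def key_for(self, asset_id: str) -> bytes:
        # In the architecture above, release of this key would be gated on a
        # policy decision; that gating is omitted from this sketch.
        return self._keys[asset_id]


service = TinyEncryptionService()
protected = service.encrypt_asset("doc-1", b"confidential contents")
restored = Fernet(service.key_for("doc-1")).decrypt(protected)   # decryption
assert restored == b"confidential contents"
```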
Additionally, in various embodiments, the security agent 136 and the encryption server 160 may, collectively or individually, perform data asset encryption (including the imposition of metadata); perform key generation, management, and distribution; perform token generation, management, and distribution; interact with the policy service 110 to receive rights and/or re-validate existing rights (e.g., a right may be considered an access permission for a given user, machine, and/or data asset); enforce client policies; and generate events.
Although the system 100 is discussed with respect to the particular configuration depicted in FIG. 1A, in practice many other configurations are possible. The systems and methods described herein follow loosely coupled design principles. The only necessary communications between components are those facilitated by application programming interfaces (APIs), and each component can be developed as needed. Furthermore, policy service 110 may generate policy decisions in any format required by the endpoints (e.g., user device 130 and/or security agent 136). In addition, asset classification is an area of ongoing development, with requirements that vary based on customer use cases. To this end, the systems and methods described herein require that data assets be classified, but do not necessarily require the use of any particular classification method or granularity in making the classification, although the methods disclosed herein are certainly effective embodiments that may be used to make such classifications.
FIG. 1B is a simplified block diagram depicting a system, such as the system of FIG. 1A, including positioning functionality, according to one embodiment. It will be understood from this disclosure that the basic mechanisms and configurations depicted in FIG. 1B follow those depicted in FIG. 1A. Additionally, FIG. 1B also depicts a location tracking service 139, shown in FIG. 1B as installed on the user device 130. The location tracking service 139 may include functionality to support the collection and generation of location information. Such functionality may include, for example, mechanisms such as: Global Positioning System (GPS) functionality (e.g., a GPS receiver), short-range communications (e.g., via BLUETOOTH or similar technology), Near Field Communication (NFC) transceivers, wireless data communications (e.g., WIFI), cellular communications (e.g., mobile telecommunications technologies such as those conforming to the 3rd generation (3G; International Mobile Telecommunications-2000 (IMT-2000) specification) or 4th generation (4G; including the mobile WiMAX and Long Term Evolution (LTE) standards) cellular communication standards), optically-based communications (e.g., infrared transceivers), and the like, as well as software support therefor. As described elsewhere herein, a combination of hardware and/or software may be used to implement one or more of these functions. Further, it should be understood that certain processing of locally generated information (e.g., GPS information, cellular location information, and other such location information), such as the processing described in connection with the location-information processing modules described below, may be amenable (in whole or in part) to being performed at the user device, and such processed location information may instead be provided when reporting to the policy service 110 and/or the location tracking server.
To support the location information so generated and/or collected, and to provide location information in addition thereto, the architecture of system 100 also includes a location tracking server 170, as depicted in fig. 1B. The location tracking server 170, in turn, may include a number of modules. As shown in fig. 1B, the location tracking server 170 includes a location tracker module 171, a GPS module 172, a hyperlocation module 173, an RFID module 174, a virtual location module 175, a short-range location module 176, and an ultra-short-range location module 177.
The location tracker module 171 coordinates the operations and processes performed by the other modules of the location tracking server 170. As will be understood in light of this disclosure, the location tracker module 171 collects location information and events from one or more modules of the location tracking server 170, and (1) sends the aggregated information to the policy service 110 (for processing access requests from user devices, such as user device 130, based on one or more policies maintained by the policy service 110), (2) performs policy processing on the location information so aggregated (on behalf of the policy service 110), and/or (3) performs some combination thereof (e.g., pre-processes some of the location information (either on its own or based on policy information sent by the policy service 110 to the location tracker module 171), and sends the results of such pre-processing, along with the remaining location information, to the policy service 110).
For example, in cooperation with GPS information obtained from the location tracking service 139 of the user device 130, the GPS module 172 processes such (physical) location information for ingestion by the location tracker module 171 and/or the policy service 110. It will be understood in light of this disclosure that such location information may be used to prevent or allow access to digital data assets only from certain physical locations (e.g., only when the user+device is on premises), only when the digital data assets are in certain locations (e.g., the data assets are stored on a server of the enterprise), and/or for purposes of exclusion (e.g., the enterprise's digital data assets may never be accessible while the user+device is at a convention center, unless the user+device is within the enterprise's trade show booth (as further evidenced by a connection via the booth's WAP)).
In this regard, the hyperlocation module 173 identifies location through wireless access (e.g., using the device's wireless networking facilities, such as through a connection to one or more WAPs, through the use of hyperlocation techniques, through the use of other such mechanisms, and/or combinations thereof). For example, such functionality may be provided by battery-powered Bluetooth Low Energy (BLE) beacons, the locations of wireless access points, and/or combinations thereof. This alternative is particularly attractive in indoor enterprise spaces.
In a similar manner, RFID module 174 supports identification of users, devices, or other such entities via transceivers installed at various locations. Thus, if an RFID tag is "pinged" upon entering and exiting various facilities, their rooms, or other such spaces, the location of the user/device can be known to some extent. Users and/or devices may also be located using, for example, the short-range location module 176 (also via, for example, BLUETOOTH or similar technology) and/or the ultra-short-range location module 177 (e.g., supporting Near Field Communication (NFC) technology, and the like). In this respect, triangulation of cellular communications may also be employed to good effect. It will also be appreciated that location tracking server 170 may implement support for video and other image processing (e.g., via location tracker module 171) to identify users and/or devices within an enterprise workspace or other location, thereby adding an additional layer of security and control to the policies supported by policy service 110.
In contrast, the virtual location module 175 supports determining and tracking location in a more logical/conceptual domain. This may include the location of a user/device within a network (e.g., determined by observing which subnet the user/device is connected to), whether that network is a cellular network, a Transmission Control Protocol (TCP)/Internet Protocol (IP) network, a wireless network, or another network, and/or some combination thereof. Further, the location of the user may itself be virtual; for example, the position of the user's avatar in a virtual world, or another such conceptual representation. That is, "location" as used herein may be construed in a broad sense.
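A minimal sketch of one such "virtual location" check, determining whether a user/device is on a particular subnet, is shown below using Python's standard ipaddress module; the subnet values are hypothetical examples only.

```python
import ipaddress

# Illustrative "virtual location" check: is the client's address on a subnet
# that a policy treats as a permitted location? The subnets shown are examples.
PERMITTED_SUBNETS = [
    ipaddress.ip_network("10.20.0.0/16"),     # e.g., an engineering VLAN
    ipaddress.ip_network("192.168.50.0/24"),  # e.g., a lab wireless network
]


def on_permitted_subnet(client_ip: str) -> bool:
    address = ipaddress.ip_address(client_ip)
    return any(address in subnet for subnet in PERMITTED_SUBNETS)


print(on_permitted_subnet("10.20.33.7"))    # True
print(on_permitted_subnet("203.0.113.9"))   # False
```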
As will be understood in light of this disclosure, and as previously described, the processing of such location information may be performed in any component of the system 100. Thus, some or all of the necessary processing may be performed at any or all of: user device 130, location tracking server 170, policy service 110, and/or any other server depicted in fig. 1B, which may be advantageous based on available communication bandwidth, available computing resources, available storage resources, and/or other such considerations. Further, based on location information such as that described herein, the locations of more than a single user/device may be considered, alone or in combination with location information regarding the data asset(s) in question. Thus, for example, access to a data asset may be granted only if two particular users are physically located close enough to each other, attempt to access the data asset via the same WAP, are both authenticated by the same biometric sensor (or sensor type), and/or other such factors are satisfied. Further, it should be understood that a data asset, as used herein, is intended to mean a document, a video, an image file, or the like; a digital asset, in turn, includes such data assets, but may also be a computing device, computing resource (e.g., a camera or server), mobile device, or other such asset on which a data asset resides. Thus, for example, a policy may indicate that a local server can be accessed only if the user device is also local, accesses the data asset via a wired connection, is of a certain device type, and so on.
Generating and using events related to policy enforcement
The systems and methods provided herein also have the ability to collect, monitor, and analyze events generated by the system in order to provide feedback to the system to change policies, and to prevent, or alert on, potentially malicious access (or any such access attempt) to protected digital data assets. In one embodiment, such functionality may be performed by a data analysis and reporting system, such as data analysis and reporting system 150. In one embodiment, the systems and methods disclosed herein may use the various types of events available to the system as inputs to a machine learning layer, which may then develop and take actions to affect the policy decisions made by the system.
The systems and methods disclosed herein rely on a series of information sources (e.g., policy information inputs 118) to provide the attributes requested and evaluated by a policy, as that term is understood in light of this disclosure. The complexity of policy creation and maintenance in systems such as those described herein is an obstacle to deploying such policy systems, and even more so when implementing policy systems that provide a rich set of policy parameters. The present disclosure describes the use of an analytics system (e.g., data analysis and reporting system 150) to provide information to the systems and methods disclosed herein in order to mitigate the complexity associated with granular policies.
A user device (e.g., user device 130) operating in conjunction with the systems and methods disclosed herein may generate a large amount of data, and this data may reveal behavioral access patterns with respect to the user device and the protected digital assets that the device is accessing or attempting to access. Some of the generated data may be reported to a system (e.g., system 100) as events and consumed by system 100. In its simplest form, an event is a computerized notification or message that describes the client and user, the document being accessed (or for which access is being requested), a policy decision, and/or any other relevant attributes used in rendering an initial policy decision (e.g., a set of attributes associated with the client, such as user name ("UserName"), device identifier ("deviceId"), device location, installed software applications, and so on).
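The following Python sketch illustrates one possible shape for such an event record; the field names (user_name, device_id, asset_id, and so on) are illustrative assumptions rather than a required format.

from dataclasses import dataclass, field
from typing import Optional
import time

@dataclass
class PolicyEvent:
    user_name: str                         # e.g., the "UserName" attribute of the client
    device_id: str                         # e.g., the "deviceId" attribute of the client
    device_location: str                   # physical or virtual location of the device
    asset_id: Optional[str] = None         # data asset being accessed, if any
    policy_decision: Optional[str] = None  # e.g., "allow" or "deny"
    installed_applications: list = field(default_factory=list)
    timestamp: float = field(default_factory=time.time)

# Example event reporting a denied access attempt:
event = PolicyEvent(user_name="bob", device_id="laptop-42",
                    device_location="cafeteria", asset_id="patient-record-17",
                    policy_decision="deny")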
For example, events generated by a security agent (e.g., security agent 136) may evidence certain facts, such as the fact that a physician employed by a certain hospital often needs to open a patient record in the cafeteria, rather than only in his or her office or a patient ward. As another example, the generated events may demonstrate that a salesperson often views customer offers at a local coffee shop, rather than only at his or her desk. Conversely, a generated event may indicate that a trusted user is attempting to access a protected data asset at a location that is expressly prohibited by policy, e.g., an attorney attempting to access privileged and confidential legal documents on a golf course. The generated events may also take other forms, including information generated by a computing device, and may include machine-readable information that is not meaningfully observable or readable by a human, e.g., the geographic location of user device 130, the processor speed of user device 130, the currently available memory on user device 130, the currently available RAM on user device 130, the currently available network bandwidth, and so on. A generated event may also include information associated with the security posture or reputation of the user and/or user device, and information associated with the reputation of the user device or another component of the system. For example, a word processor that is known to be secure may have a good reputation, while another word processor may have an unacceptable reputation. In addition, users, devices, software programs, and the like may develop reputations over time. Such a reputation may reflect an access history that is consistent (or inconsistent) with the applicable policy or policies, usage of one or more devices and/or software packages, communications that are consistent (or inconsistent) with the applicable policies (e.g., the number of acceptable and/or unacceptable emails sent), and/or other such actions.
In this regard, the system may consume and process the generated events. In one embodiment, the generated events may be consumed and processed by the data analysis and reporting system 150 (or "DARS 150"). The DARS 150 may use these events in various ways, such as generating alerts, expanding access rights, or reducing access rights. For example, using the above example of a lawyer attempting to access protected digital assets from a golf course, the system may simply generate an alert (e.g., an email) notifying a designated person (e.g., the perpetrator's supervisor) of the event. In other cases, particularly (but not necessarily) where there are repeated events that conform to a common pattern, the system may respond to those events by updating the policy to expand or contract the rights granted under the policy. For example, if a salesperson consistently needs access to a sales offer at a certain coffee shop, the system (e.g., the data analysis and reporting system 150) may automatically update the relevant policy or policies to allow such access. Conversely, if a trusted executive repeatedly attempts to access confidential material on his or her personal, unsecured email server, the system (e.g., data analysis and reporting system 150) may completely revoke his or her rights to access any confidential material.
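A minimal sketch of this kind of event-driven policy adjustment is shown below, assuming a simple count-based threshold in place of any particular machine learning model; the helper names (process_events, location_is_low_risk, send_alert) and the threshold value are hypothetical.

from collections import Counter

REPEAT_THRESHOLD = 5  # assumed number of similar events before the policy is adjusted

def process_events(events, policy_store):
    # events: iterable of (user, location, decision) tuples; policy_store: dict of user -> allowed locations.
    denied_patterns = Counter()
    for user, location, decision in events:
        if decision == "deny":
            denied_patterns[(user, location)] += 1

    for (user, location), count in denied_patterns.items():
        if count < REPEAT_THRESHOLD:
            continue
        if location_is_low_risk(user, location):
            # Repeated, apparently legitimate need: expand the policy for this location.
            policy_store.setdefault(user, set()).add(location)
        else:
            # Repeated attempts at a prohibited location: alert (rights could also be contracted).
            send_alert(user, location)

def location_is_low_risk(user, location):
    return location in {"coffee shop", "cafeteria"}   # placeholder risk assessment

def send_alert(user, location):
    print(f"ALERT: {user} repeatedly attempted access at {location}")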
Once the system updates a policy in response to such generated events, the system may use the updated policy to evaluate future requests for access to the protected digital data asset. Further, the system may send information related to such updated policy information to the user device in response to a pull request, or may push information related to such updated policy information to the user device. In any of the scenarios contemplated herein, the information may be sent to the user device via a computer cookie, token, or other means of transmitting digital information between computing devices.
Additionally, once a system component (e.g., data analysis and reporting system 150) updates a policy, the updated policy information may be sent to a centralized repository (e.g., policy service 110) and stored in an appropriate memory location (e.g., policy repository 114) for later use. The base events may also be stored in a suitable memory location (e.g., event store 116). Further, one or more of the steps discussed herein may be performed by subcomponents of the data analysis and reporting system 150, such as NLP (natural language processing) service 151, data asset classification module 152, audit service module 153, machine learning module 154, and/or analysis module 155.
Communicating information relating to policy enforcement
As mentioned above, policy enforcement is an important attribute of the systems and methods described in this disclosure. However, a user device (e.g., user device 130) and a security agent (e.g., security agent 136) do not necessarily have the ability to determine many of the attributes that control their access to protected digital data assets. As one example, while the user device may perform some location tracking services, richer location information of the kind described herein is generally not derivable by the user device itself. Furthermore, even for attributes that the user device is able to derive or determine on its own, efficiency (of the user device, the network, and the overall system) is improved when the user device is not required to support all of these attributes. Thus, to further enhance policy enforcement in view of certain inherent limitations of many user devices, further functionality is described below, consistent with the disclosure provided herein.
For example, in one embodiment, the systems and methods described herein use a novel approach to communicating attributes to user devices (e.g., user device 130) without requiring the user devices to track, monitor, or otherwise determine the underlying attributes relevant to policy enforcement. (While this functionality may be used to eliminate the need for the user device to track, monitor, or otherwise determine certain fundamental attributes related to policy enforcement, the user device is not necessarily prohibited from doing so; indeed, such techniques may be used in combination to complement each other.) In one embodiment, policy service 110 and/or location tracking server 170 (or one or more other components providing similar functionality) track one or more attributes (e.g., locations) required for policy enforcement. In tracking and collecting the relevant information, policy service 110 and/or location tracking server 170 (or one or more other components providing similar functionality) may generate a cookie (or token, coded identifier, or other such construct) representing the one or more attributes. Such cookie(s) may also include other information, such as a time value indicating when the cookie expires or when authentication must be re-requested. In one embodiment, the cookie includes a value identifying the digital asset (e.g., an "asset_ID" value) and a time boundary key. Such one or more cookies are then transmitted to the user device 130, which can then use the cookie to determine whether access to the protected digital data asset can be granted during the time the cookie is valid. Under this approach, the client is not required to know what the cookie represents, only that the cookie is valid for a given period of time and that the cookie must be "refreshed" when (or at some point after) the cookie expires (in any event, once the cookie expires, access to the protected digital data asset will not be granted until the cookie is refreshed or the policy information is otherwise authenticated or re-authenticated). As one example, a cookie consistent with the present disclosure includes fields related to: a user group ID (e.g., "ug_ID"), a user group name (e.g., "ug_name"), a Boolean flag indicating whether the user + device combination is authorized at its current location (e.g., "pdp-authorization"), and a re-authentication time or other variable indicating when the cookie expires (e.g., "revalidation"). It will be understood from this disclosure that such functionality may be used not only to protect digital data assets such as documents or spreadsheets, but also to control access to other resources protected using such mechanisms, e.g., facilities such as local servers at the enterprise, other content (e.g., internal websites), and other such digital data assets.
Unlike user devices, a server (e.g., policy service 110 or data access control server 120) may have the full context associated with a cookie and may verify or re-verify the requested policy using that context. For example, the "revalidation" field in the cookie may be assigned a value of 86400. In this example, the corresponding cookie may be checked or re-validated every 86400 seconds. In one embodiment, the cookie may represent only one attribute (e.g., location) of a plurality of attributes evaluated by a given policy instance. In the case where the re-evaluated cookie represents a location, the server (e.g., policy service 110) may re-evaluate the cookie by comparing the current location of the user device (e.g., user device 130) to the required parameters of the policy and then return an appropriate value (e.g., a Boolean value indicating whether the location is valid) to the user device (e.g., user device 130), which in one embodiment may take the form of a new cookie with a new expiration time (e.g., a new "revalidation" value). A given user device may store many such cookies at once, each cookie representing one of a plurality of attributes required by a given policy, with each cookie/attribute being independently checked against the defined policy.
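The following sketch illustrates, under the assumptions above, how such a cookie might be generated, checked by the client, and re-validated by the server; the helper names and the use of a plain dictionary as the cookie encoding are illustrative only.

import time

def make_policy_cookie(ug_id, ug_name, authorized, revalidation_seconds=86400):
    return {
        "ug_ID": ug_id,
        "ug_name": ug_name,
        "pdp-authorization": authorized,                      # Boolean: user + device allowed at current location
        "revalidation": time.time() + revalidation_seconds,   # when the cookie must be refreshed
    }

def client_access_allowed(cookie):
    # The client only checks validity and expiry; it need not know what the cookie encodes.
    return cookie["pdp-authorization"] and time.time() < cookie["revalidation"]

def server_revalidate(cookie, current_location, allowed_locations):
    # Server-side re-evaluation: re-check the location attribute and issue a fresh cookie.
    authorized = current_location in allowed_locations
    return make_policy_cookie(cookie["ug_ID"], cookie["ug_name"], authorized)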
In addition to authentication, events may be monitored and tracked using an encoded identifier (e.g., a cookie) in accordance with the present disclosure (which may be referred to as "eventing"). An "event" may be generated by any component of the system 100 whenever an action occurs. The actions that generate events may be defined by an administrator or in other ways. Some example actions that generate events include a user attempting to access a document from a given device, a user moving to a new location, and a user's device moving to an unapproved location after gaining access to protected data assets in an approved location, among many other such potential event-generating actions. When a user device (e.g., user device 130) generates an event, system components (e.g., policy service 110 and/or data analysis and reporting system 150, including subcomponents of each (e.g., session repository 112, policy repository 114, and event repository 116)) may use the cookie information to recreate the entire relationship between the event and the policy. For example, an analytics system (e.g., the data analytics and reporting system 150) may obtain a full view of the policy state of the user device by looking at the combination of the original cookie and any event updates, which may indicate that the information in the original cookie (if any) has been updated.
In addition, the foregoing functionality with respect to cookies and events provides additional flexibility and efficiency to the system (e.g., system 100). Because the user device does not have to track, determine, or otherwise know the underlying attributes evaluated by a policy, the user device itself does not necessarily have to be updated in response to policy changes. Thus, rather than updating a potentially large number of user machines and then monitoring those configurations, the system need only update the central component (e.g., policy service 110).
FIG. 1C is a simplified block diagram depicting an alternative configuration and embodiment of the system depicted in FIGS. 1A and 1B, according to one embodiment. FIG. 1C depicts core services 180, which in this embodiment includes policy service 110 (and implicitly the components of policy service 110 from FIGS. 1A and 1B, e.g., session store 112, policy store 114, and event store 116), key service 181, identification service 182, and ingestion service 183. Although not explicitly depicted in fig. 1C, core services 180 may also include API gateway and session security management functions.
The core services 180 may also receive input from (and otherwise communicate with) the policy information source 190, which may include, but is not limited to, GPS information, other location information, security information, and reputation information such as described elsewhere in this disclosure. In addition, the core services 180 may receive input from (and otherwise communicate with) the content classification and management system 191 and the data analysis and reporting system 150 (such as described above and shown in more detail in conjunction with figs. 1A and 1B). The core services 180 may also receive input from (and otherwise communicate with) an identity source 192, which may include, but is not limited to, Lightweight Directory Access Protocol (LDAP), Identity Service Engine (ISE), and/or Active Directory (AD) functionality. Finally, core services 180 may communicate with users via user interface 140, which may include components such as end user interface 142, management interface 144, security control interface 146, and event monitor 148.
Example network architecture
Fig. 2 is a block diagram illustrating an example of a network architecture 200 including a server system according to one embodiment. Network architecture 200 includes an internetwork (depicted in fig. 2 as internet/Wide Area Network (WAN) 210) configured to couple multiple intranets to one another (depicted in fig. 2 as intranets 220(1)-(N)). Intranets 220(1)-(N), in turn, may include a plurality of components, such as one or more clients (depicted in fig. 2 as clients 225(1)-(N)) and/or servers (depicted in fig. 2 as servers 230(1)-(N)). Clients 225(1)-(N) and/or servers 230(1)-(N) may be implemented using, for example, a computer system such as that described in conjunction with fig. 3. Thus, internet/WAN 210 communicatively couples intranets 220(1)-(N) to one another, allowing clients 225(1)-(N) and servers 230(1)-(N) to communicate with one another (and in some embodiments, the servers of intranets 220(3) and 220(N) may be provided to operate, for example, as cloud-based server systems). As shown in fig. 2, clients 225(1)-(N) may be communicatively coupled to one another and to servers 230(1)-(N) as part of one of intranets 220(1)-(N) or directly via internet/WAN 210. Similarly, servers 230(1)-(N) may be coupled via internet/WAN 210 through a direct connection to internet/WAN 210 or as part of one of intranets 220(1)-(N).
The network architecture 200 also provides for communication via internet/WAN 210 using one or more other devices. Such devices may include, for example, a General Packet Radio Service (GPRS) client 240 (e.g., a "smartphone," a "tablet" computer, or other such mobile device), a secure web client (depicted in fig. 2 as secure hypertext transfer protocol (HTTPS) client 250), and a basic cellular telephone (e.g., using standard texting or other communication protocols, and depicted in fig. 2 as Simple Message Service (SMS) client 260). HTTPS client 250 may be, for example, a laptop computer using the HTTP secure (HTTPS) protocol. Support for GPRS clients, SMS clients, HTTPS clients, and the like thus provides communication functionality to users in a mobile environment, according to embodiments. As also depicted in fig. 2, SMS client 260 may communicate over internet/WAN 210 through several channels. SMS client 260 may, for example, communicate directly with gateway 265, which in turn communicates with internet/WAN 210 via messaging gateway 267 and, optionally, with elements within, for example, intranet 220(3). Alternatively, SMS client 260 may communicate with intranet 220(3) (and thus internet/WAN 210) through public messaging service 270, to which gateway 265 and intranet 220(3) are connected. As also depicted in fig. 2, client 225(4) is also capable of communicating via internet/WAN 210 through public messaging service 270 and intranet 220(3). To support such communications, as well as other communications according to various embodiments, intranet 220(3) includes server system 280 and (optionally) provides a plurality of clients (not shown) in the manner of intranet 220(2).
The server system 280 includes a number of components that allow server system 280 to provide various functionality (e.g., support for various communications, cloud-based services, enterprise services, etc.). In some embodiments, these components are servers, which may be implemented in hardware and/or software. Examples of such servers include data access control server 120 and encryption server 160, among other potential servers.
Servers such as those included in server system 280 include hardware and/or software configured to support the functionality of operations in accordance with the concepts disclosed herein, as well as to communicate with one another (e.g., directly, via various Application Programming Interfaces (APIs) and/or other such interfaces, and/or via other such mechanisms and/or constructs). As will be discussed in more detail in connection with fig. 3, the servers of server system 280 provide such functionality, for example, by presenting end users with websites (functions effected by, for example, web servers 290(1)-(N)). End users may access such websites using client computing devices such as one or more of the following: clients 225(1)-(N), GPRS client 240, HTTPS client 250, and/or SMS client 260. It will be appreciated from the present disclosure that the ability to support such functionality on mobile devices, such as those described herein, is important because mobile e-commerce is rapidly becoming an important aspect of today's online environment. In providing functionality such as that described herein, the network architecture 200 can support the identification and presentation of relevant product/service information in an efficient and effective manner.
It should be understood that the variable identifier "N" is used in several examples in the various figures herein to more simply designate the final elements of a series of related or similar elements (e.g., intranet 220(1) - (N), client 225(1) - (N), and server 230(1) - (N)) in accordance with the present disclosure. The repeated use of such variable identifiers is not meant to imply a correlation between the sizes of these series of elements. The use of such variable identifiers is by no means intended (and does not) require that each series of elements have the same number of elements as another series separated by the same variable identifier. Rather, in each instance of use, the variable so identified may represent the same or different value as other instances of the same variable identifier.
As will be understood in light of this disclosure, a process according to the concepts embodied by systems such as those described herein includes one or more operations, which may be performed in any suitable order. It should be understood that the operations discussed herein may include commands directly entered by a computer system user or steps performed by dedicated hardware modules, but the preferred embodiments include steps performed by software modules. The functionality of the steps referred to herein may correspond to the functionality of a module or a part of a module.
The operations referred to herein may be modules or portions of modules (e.g., software, firmware, or hardware modules). For example, although the described embodiments include software modules and/or include manually entered user commands, the various example modules may be dedicated hardware modules. The software modules discussed herein may include script, batch, or other executable files, or a combination and/or portion of such files. The software modules may include a computer program or a subroutine thereof encoded on a computer-readable storage medium. In one embodiment, such modules or portions of modules may be configured to host one or more components of fig. 1 and perform one or more functions associated with fig. 7-9, e.g., when one or more such components are hosted in a cloud or as a cloud-based service(s).
In addition, those skilled in the art will recognize that the boundaries between modules are merely illustrative and that alternative embodiments may merge modules or impose an alternate decomposition of functionality of modules. For example, the modules discussed herein may be broken down into sub-modules to be executed as multiple computer processes, and optionally on multiple computers. Furthermore, alternative embodiments may combine multiple instances of a particular module or sub-module. Further, those skilled in the art will recognize that the operations described in the example embodiments are for illustration only. The functionality of the operations may be combined or distributed in additional operations in accordance with the present disclosure.
Alternatively, the acts may be embodied in the structure of circuitry to perform such functions, e.g., microcode of a Complex Instruction Set Computer (CISC), firmware programmed into a programmable or erasable/programmable device, configuration of a Field Programmable Gate Array (FPGA), design of a gate array or a fully custom Application Specific Integrated Circuit (ASIC), etc.
Each block of the flow diagrams may be performed by a module (e.g., a software module) or a portion of a module, or may be performed by a computer system user using a computer system such as computer system 610. Thus, the above-described methods, their operations, and modules thereof may be executed on a computer system configured to perform the operations of the methods and/or may be executed from computer-readable storage media. The methods may be embodied in a machine-readable and/or computer-readable storage medium for configuring a computer system to execute the methods. Thus, software modules may be stored within and/or transmitted to a computer system memory to configure the computer system to perform the functions of the module.
Such computer systems typically process information according to programs (a list of internally stored instructions, such as a particular application program and/or operating system) and produce resultant output information via I/O devices. A computer process typically includes an executing (running) program or portion of a program, current program values and state information, and the resources used by the operating system to manage the execution of the process. The parent process may spawn other child processes to help perform the overall functionality of the parent process. Because the parent process specifically spawns the child processes to perform a portion of the overall functionality of the parent process, the functionality performed by the child processes (and grandchild processes, etc.) may sometimes be described as being performed by the parent process.
Such computer systems typically include multiple computer processes that execute "concurrently". Typically, a computer system includes a single processing unit that is capable of supporting many active processes in an alternating fashion. While multiple processes may appear to be executing at the same time, at any given point in time, only one process is actually executed by a single processing unit. By quickly changing the process of execution, the computer system gives the appearance of concurrent process execution. The ability of a computer system to multiplex computer system resources among multiple processes in various stages of execution is referred to as multitasking. A system with multiple processing units (which by definition can support true concurrent processing) is referred to as a multiprocessing system. When such processes are executed in a multitasking and/or multiprocessing environment, the active processes are often referred to as executing concurrently.
The software modules described herein may be received by such a computer system, for example, from a computer-readable storage medium. The computer readable storage medium may be permanently, removably or remotely coupled to the computer system. The computer-readable storage medium may non-exclusively include, for example, any number of the following: magnetic storage media (including disk and tape storage media), optical storage media (e.g., compact disk media (e.g., CD-ROM, CD-R, etc.)), and digital video disk storage media. Non-volatile memory includes semiconductor-based memory cells, such as FLASH memory, EEPROM, EPROM, ROM, or application specific integrated circuit; volatile storage media include registers, buffers or caches, main memory, RAM, etc.; and other such computer-readable storage media. In a UNIX-based embodiment, the software modules may be embodied in a file that may be a device, a terminal, a local or remote file, or other such device. Other new and various types of computer-readable storage media may be used to store the software modules discussed herein.
Example architectures for characterizing products and/or services
FIG. 3 is a block diagram depicting a network architecture 300 in which client systems 310, 320, and 330, as well as storage servers 340A and 340B (any of which may be implemented using computer system 610), are coupled to a network 350. Storage server 340A is further depicted as having storage devices 360A(1)-(N) directly attached, and storage server 340B is depicted as having storage devices 360B(1)-(N) directly attached. Storage servers 340A and 340B are also connected to SAN fabric 370, although connection to a storage area network is not required for operation. SAN fabric 370 supports access to storage devices 380(1)-(N) by storage servers 340A and 340B, and thus by client systems 310, 320, and 330, via network 350. Intelligent storage array 390 is also shown as an example of a specific storage device accessible via SAN fabric 370.
Referring to computer system 610 described below, modem 647, network interface 648, or some other method can be used to provide connectivity from each of client systems 310, 320, and 330 to network 350. Client systems 310, 320, and 330 are able to access information on storage server 340A or 340B using, for example, a web browser or other client software (not shown). Such a client allows client systems 310, 320, and 330 to access data hosted by storage server 340A or 340B, or by storage devices 360A(1)-(N), 360B(1)-(N), 380(1)-(N), or intelligent storage array 390. Figs. 7 and 8 depict the use of a network, such as the internet, for exchanging data, but the systems described herein are not limited to the internet or any particular network-based environment.
The foregoing describes embodiments wherein different components are contained within different other components (e.g., the various elements shown as components of computer system 610, discussed subsequently). It is to be understood that such depicted architectures are merely examples, and that in fact many other architectures can be implemented which achieve the same functionality. In an abstract but still definite sense, any arrangement of components to achieve the same functionality is effectively "associated" such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as "associated with" each other such that the desired functionality is achieved, irrespective of architectures or intermediate components. Likewise, any two components so associated can also be viewed as being "operably connected," or "operably coupled," to each other to achieve the desired functionality.
Example networked devices
Fig. 4 is a block diagram illustrating components of an example networking device 400, which depicts, at least in part, one configuration of a network device or network routing element (e.g., a hub, router, switch, or similar device). In this depiction, networking device 400 includes a plurality of line cards (line cards 402(1)-402(N)) communicatively coupled to a control module 410 (which may include a forwarding engine, not shown) and a flow control (or traffic control) processor 420 via a data bus 430 and a results bus 440. Line cards 402(1)-(N) include a plurality of port processors 450(1,1)-450(N,N) controlled by port processor controllers 460(1)-460(N). It should also be noted that control module 410 and flow control processor 420 are not only coupled to one another via data bus 430 and results bus 440, but are also communicatively coupled to one another by a communications link 470. It should be noted that in alternative embodiments, each line card may include its own forwarding engine.
When a network device or network routing element (e.g., networking device 400) receives a message (e.g., an authorization request, a debug certificate, or a debug complete confirmation message), the message is identified and analyzed in the following manner. Upon receipt, the message (or some or all of its control information) is sent from the one of port processors 450(1,1)-450(N,N) that received the message to one or more of the devices coupled to data bus 430 (e.g., others of port processors 450(1,1)-450(N,N), the forwarding engine, and/or flow control processor 420). The message may be processed, for example, by the forwarding engine in accordance with the systems and methods disclosed herein. For example, the forwarding engine may determine that the message should be forwarded to one or more of port processors 450(1,1)-450(N,N). This may be accomplished by indicating to the corresponding one(s) of port processor controllers 460(1)-460(N) that a copy of the message held in the given port processor(s) of port processors 450(1,1)-450(N,N) should be forwarded to the appropriate one of port processors 450(1,1)-450(N,N).
The networking device 400 may be used to implement, for example, a network device (e.g., other examples of policy service 110, data access control server 120, user device 130, user interface 140, data analysis and reporting system 150, encryption server 160, and location tracking server 170) or a network routing element, with the present disclosure implemented in control module 410, in one or more of port processor controllers 460(1)-460(N), and/or in flow control processor 420. Although not shown, networking device 400 may also implement a routing protocol module and/or a network reachability protocol module (not shown) in control module 410, in one of port processor controllers 460(1)-460(N), and/or in flow control processor 420. Further, the control module 410 may implement one or more steps of method 700 or method 800 and may be used in conjunction with (or as part of) policy service 110, data access control server 120, user device 130, user interface 140, data analysis and reporting system 150, encryption server 160, location tracking server 170, and/or subcomponents or inputs (e.g., policy information input 118) of any of those components of system 100.
The incoming message (e.g., an authorization request, debug certificate, or debug complete confirmation message) may be provided to the network device or network routing element via a forwarding engine or port processor of a line card coupled to the port that received the incoming message. Networking device 400 may be configured to process the incoming message and generate one or more outgoing messages (e.g., an authorization request, debug certificate, or debug complete confirmation message), as described throughout this disclosure.
The outgoing message may be provided by the network device or network routing element to the forwarding engine, which may determine that the outgoing message should be forwarded to one or more of port processors 450(1,1)-450(N,N) configured to send the outgoing message toward its destination.
Fig. 5 is a block diagram illustrating components of an example networking device 500 configured as a network device (e.g., other examples of policy service 110, data access control server 120, and user device 130) or network routing element. As shown, networking device 500 includes one or more processors 502 (e.g., microprocessors, PLDs (programmable logic devices), or ASICs (application specific integrated circuits)) configured to execute program instructions stored in memories 506 and/or 508, where memories 506 and/or 508 are computer-readable storage media. Memories 506 and 508 may include various types of RAM (random access memory), ROM (read only memory), flash memory, MEMS (micro electro mechanical systems) memory, and the like. Networking device 500 also includes one or more ports 504 (e.g., one or more hardware ports or other network interfaces that may be linked to other networking devices, hosts, servers, storage devices, etc.). Processor 502, port 504, and memories 506 and 508 are coupled to send and receive data and control signals over one or more buses or other interconnects.
In this example, program instructions executable to implement the systems and methods disclosed herein are stored in memory 506. The topology information and network reachability information may be stored in one or more tables 530.
A message 510 (e.g., an authorization request, a debug certificate, or a debug complete confirmation message) is stored in memory 508. In one embodiment, the message 510 may be received via port 504 (e.g., received from another networking device coupled to port 504) and may be stored in memory 508 before being forwarded to another networking device in accordance with the systems and methods of the present disclosure. In one embodiment, an outgoing message 510 may be generated and stored in memory 508 before being sent via port 504.
Example computing and network Environment
As indicated above, the present disclosure may be implemented using a variety of computer systems and networks. An example of one such computing environment is described below with reference to FIG. 6.
FIG. 6 depicts a block diagram of a computer system 610 suitable for implementing aspects of the present disclosure. Computer system 610 includes a bus 612 that interconnects the major subsystems of computer system 610, such as central processor 614, system memory 617 (typically RAM, but which may also include ROM, flash RAM, etc.), input/output controller 618, an external audio device (e.g., speaker system 620 via audio output interface 622), an external device (e.g., display 624 via display adapter 626), serial ports 628 and 630, keyboard 632 (interfaced with keyboard controller 633), storage interface 634, floppy disk unit 637 operable to receive floppy disk 638, Host Bus Adapter (HBA) interface card 635A operable to connect with Fibre Channel network 690, Host Bus Adapter (HBA) interface card 635B operable to connect to SCSI bus 639, and optical disk drive 640 operable to receive optical disk 642. Also included are a mouse 646 (or other point-and-click device) coupled to bus 612 via serial port 628, a modem 647 (coupled to bus 612 via serial port 630), and a network interface 648 (coupled directly to bus 612).
Bus 612 allows data communication between central processor 614 and system memory 617, which may include read-only memory (ROM) or flash memory (neither shown) and random access memory (RAM) (not shown), as previously noted. The RAM is typically the main memory into which the operating system and application programs are loaded. The ROM or flash memory may contain, among other code, a Basic Input Output System (BIOS) that controls basic hardware operations such as interaction with peripheral components. Applications resident in (or otherwise added to or stored in) computer system 610 are typically stored on and accessed via a computer-readable medium, such as a hard disk drive (e.g., fixed disk 644), an optical disk drive (e.g., optical disk drive 640), a floppy disk unit 637, or other storage medium. In addition, applications may be in the form of electronic signals modulated in accordance with the application and data communication technology when accessed via network modem 647 or network interface 648.
Storage interface 634, like the other storage interfaces of computer system 610, may connect to a standard computer-readable medium for storing and/or retrieving information, such as fixed disk 644. Fixed disk 644 may be a part of computer system 610 or may be separate and accessed through other interface systems. Modem 647 may provide a direct connection to a remote server via a telephone link or to the internet via an Internet Service Provider (ISP). Network interface 648 may provide a direct connection to a remote server via a direct network link to the internet via a POP (point of presence). Network interface 648 may provide such a connection using wireless techniques, including a digital cellular telephone connection, a Cellular Digital Packet Data (CDPD) connection, a digital satellite data connection, and the like.
Many other devices or subsystems (not shown) may be connected in a similar manner (e.g., document scanners, digital cameras, etc.). Conversely, not all of the devices shown in FIG. 6 are required to practice the present disclosure. The devices and subsystems may be interconnected in different ways from that shown in FIG. 6. The operation of a computer system such as that shown in FIG. 6 is well known in the art and is not discussed in detail in this application. Code for implementing the present disclosure may be stored in a computer-readable storage medium, such as one or more of the following: system memory 617, fixed disk 644, optical disk 642, or floppy disk 638. The operating system provided on computer system 610 may be any of a number of known operating systems.
Further, with respect to the signals described herein, those skilled in the art will recognize that a signal may be sent directly from a first block to a second block, or that a signal may be modified (e.g., amplified, attenuated, delayed, latched, buffered, inverted, filtered, or otherwise modified) between blocks. Although the signals of the above-described embodiments are characterized as being transmitted from one block to the next, other embodiments of the present disclosure may include modified signals in place of such directly transmitted signals, so long as the information and/or functional aspects of the signals are transferred between blocks. To some extent, due to the physical limitations of the circuitry involved (e.g., there will inevitably be some attenuation and delay), the signal input at the second block may be conceptualized as a second signal derived from the first signal output by the first block. Thus, as used herein, a second signal derived from a first signal includes the first signal or any modification to the first signal, whether due to circuit limitations or by other circuit elements that do not alter the informational and/or final functional aspect of the first signal.
Example method for determining whether to grant access to a protected data asset
Fig. 7 is a flow diagram of a method 700 that illustrates various acts performed in connection with one embodiment of the systems and techniques disclosed herein. It will also be understood from this disclosure that the method may be modified to yield alternative embodiments. Further, while the steps in this embodiment are shown in a sequential order, certain steps may occur in a different order than shown, certain steps may be performed simultaneously, certain steps may be combined with other steps, and certain steps may be omitted in another embodiment.
The method 700, described with reference to the example elements shown in figs. 1-6, illustrates a process that may be performed in accordance with the present disclosure. More specifically, method 700 describes a process for granting access to protected data assets where appropriate. In one embodiment, method 700 may be performed by a security agent (e.g., security agent 136).
Method 700 begins at step 710, where a security agent (e.g., security agent 136) receives a request to access a protected data asset. In response to receiving the request, the security agent identifies, in step 720, one or more criteria to be evaluated by the policy(s) associated with the protected data asset. Although the specific criteria may vary from implementation to implementation, examples of some criteria (e.g., username, password, device type, device ID, UUID, physical location, network, current time, etc.) are discussed throughout this disclosure. At step 730, method 700 then uses the relevant criteria of the policy to determine whether to grant access to the protected data asset for which access has been requested. Specific examples of steps 720 and 730, including in the paragraphs above and below, are discussed in more detail throughout this disclosure. In one embodiment, this determination may be made by security agent 136, acting alone or in conjunction with other components of the system (e.g., policy service 110, data access control server 120, and encryption server 160). In other embodiments, the determination may be made by one or more of the other components of system 100, and then relayed to security agent 136 for evaluation and further processing.
The security agent 136 then evaluates the results of this determination to determine whether access to the protected data asset should be allowed (or granted) at step 740. If method 700 determines in step 740 that access should be allowed (or granted), the security agent 136 may perform further steps in step 750 to provide access to the protected digital data asset. Examples of these steps are provided elsewhere in this disclosure; for ease of discussion, some of them are repeated here (e.g., obtaining the relevant decryption material and using it to decrypt the protected data asset, thereby allowing the underlying software application to access, open, and/or display the data asset on the user device). If method 700 determines in step 740 that access should not be allowed (or granted), method 700 moves to step 760, in which access to the protected data asset is not allowed (and no steps are taken to permit access, such as obtaining decryption material); in that case, the protected data asset remains encrypted and the user's device will not be able to open, display, or otherwise access the protected data asset. After either step 750 or step 760, method 700 ends.
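A hedged sketch of this flow is given below; the four callables passed to the function stand in for functionality described elsewhere in this disclosure (identifying criteria, evaluating policies, obtaining decryption material, and opening the asset) and are hypothetical names rather than parts of any particular embodiment.

def handle_access_request(asset, request, identify_criteria, evaluate_policies,
                          fetch_decryption_material, open_asset):
    # Sketch of method 700 of FIG. 7; the injected callables are placeholders.
    criteria = identify_criteria(asset, request)             # step 720
    access_allowed = evaluate_policies(asset, criteria)      # steps 730/740
    if access_allowed:
        material = fetch_decryption_material(asset)          # step 750: obtain decryption material
        return open_asset(asset, material)                   # decrypt and open on the user device
    return None                                              # step 760: asset remains encrypted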
The foregoing description is intended to be illustrative and should not be taken to be limiting. It will be understood from this disclosure that other embodiments are possible. Those skilled in the art will readily implement the steps necessary to provide the structures and methods disclosed herein, and will understand that the process parameters and sequence of steps are given by way of example only and can be varied to achieve the desired structure and modification within the scope of the claims. Variations and modifications of the embodiments disclosed herein may be made based on the description set forth herein, without departing from the scope of the claims, giving full cognizance to equivalents in all respects.
Example operation of location function
As described above, with respect to the systems and methods described in this disclosure, dynamic location tracking (as one example of dynamic attribute tracking) is an important attribute for policy creation, enforcement, and maintenance. To further assist in such dynamic location tracking, location tracking functionality may also be provided by methods and systems such as those described herein. To this end, such dynamic location tracking functionality may be used in conjunction with at least the steps of method 700 described herein. When combined with other document classification attributes, such as those described herein, location tracking enables the systems and methods disclosed herein to control access to protected data assets based on one or more locations where a document may or may not be accessed.
As one example, a user (e.g., with a username "Bob") may log into the system via a device (e.g., his laptop, device name "laptop"). According to this example, the system may identify the user device by a name such as "Bob + laptop" to indicate the user and the device being used. The system may then use location tracking functionality (e.g., functionality provided by CMX) to track the location of Bob + laptop. When the user device "Bob + laptop" performs a login, a location event is generated. Similarly, another location event is generated when Bob + laptop changes location (e.g., physically, logically (e.g., by disconnecting from one network and then connecting to another network), etc.). Location events such as these are consumed by the system and a location tracker is started for "Bob + laptop" (Bob accesses the system using his laptop). In some embodiments, location events may be consumed by a push model or a pull model, or some combination thereof.
A location tracker according to the methods and systems described herein may be implemented in a software module, a hardware module, or some combination thereof. In some embodiments, when implemented as a software module, the location tracker is launched, for example, for a user-device combination (e.g., for "Bob + laptop"). Such a location tracker may store client information (e.g., information about users, devices, user devices, and/or other relevant information), the current location of the user device, and information about the validity of such information. When a user (e.g., "Bob") attempts to open a protected data asset using a device (e.g., "laptop"), the corresponding policy is evaluated against the information maintained by the tracker. If "Bob + laptop" is located at a location approved by the policy (e.g., Bob's office), the system may provide the information (e.g., decryption material) needed to access (e.g., decrypt) the protected data asset. However, when "Bob + laptop" moves, a new location event may be generated. Assuming that such a change in location is sufficient to generate such a location event (e.g., based on the type of movement, the technology involved and its granularity, the constraints of the policy, and other such factors), if "Bob + laptop" then attempts to access the protected data asset (e.g., open a file) from the new location (or attempts to maintain access to the related data asset), the new location may be verified and access granted or denied (or revoked) based on that location. If the new location is not an acceptable location under one or more applicable policies (referred to herein as an "invalid" location), then "Bob + laptop" will not be granted access to the protected data asset it is attempting to open (e.g., the system will not provide the decryption material necessary for decryption), or access to the protected data asset will be revoked (e.g., by forcing the file to close and flushing the relevant decryption material from the security agent's computer-readable storage medium (e.g., memory)).
In one aspect of methods and systems such as those described herein, information related to location events may be stored in a non-transitory computer-readable storage medium, for example, a database or log file stored on a hard disk drive. In one embodiment, each location event may be indexed by a "deviceId" or similar value, e.g., a network address (e.g., a Media Access Control (MAC) layer address). In another embodiment, each location event may be indexed by a different value, such as a combination of a username and a deviceId. Regardless of the specific key used to uniquely identify a record, the computer-readable storage medium (e.g., in which the database is stored) may also store other information, for example, a unique policy identification value (e.g., a Universally Unique Identifier (UUID) for a given policy instance), a client UUID (e.g., a unique value representing a client or user), a current location (e.g., provided by a corresponding location event), information regarding one or more policy location constraints (e.g., a physical location where a policy permits or does not permit access to protected data assets), and a "location valid" flag (e.g., a Boolean value) indicating whether the current location is valid (e.g., indicating that the location is acceptable with respect to one or more applicable policies, as opposed to the "invalid location" discussed above), among other information that may be stored in a database or other memory structure.
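One possible layout for such a stored record is sketched below; the field names and the use of an in-memory dictionary in place of a database are assumptions made for illustration only.

from dataclasses import dataclass

@dataclass
class LocationRecord:
    device_id: str          # index key, e.g., a MAC-layer address
    policy_uuid: str        # UUID of the applicable policy instance
    client_uuid: str        # unique value representing the client or user
    current_location: str   # supplied by the most recent location event
    allowed_locations: set  # location constraint(s) drawn from the policy
    location_valid: bool    # True if current_location satisfies the constraint(s)

# Indexed here by (device_id, policy_uuid); another embodiment might key on username + deviceId.
records = {}

def store_record(rec: LocationRecord):
    records[(rec.device_id, rec.policy_uuid)] = rec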
In one embodiment, client location tracking begins with a first policy having location constraints applicable to a given client, and ends with a last policy having location constraints applicable thereto. Client location tracking may be configured to avoid unnecessary tracking operations and processing of location update events. For example, the system may be designed to avoid creating location update events unless the device moves at least a certain amount (e.g., distance) and/or changes access points (e.g., cells, Wireless Access Points (WAPs), etc., and/or combinations thereof), thereby avoiding repeated location updates each time the "location" of the user device changes or is confronted with similar events. In other embodiments, location update events may be generated automatically (e.g., at set time intervals), thereby also avoiding repeated location update events when there are repeated minor changes in the location of the user device. As will be described in more detail below, location update events may be based on a push model (which may be based on other potential criteria such as, for example, one or more applicable time periods and/or physical movement) or a pull model (e.g., based on one or more applicable time periods, although other criteria are also possible).
In one embodiment, a client location update event is received by the policy service 110 in conjunction with the event repository 116 (and/or by one or more other components of the system 100, in some embodiments in conjunction with one or more components of the user interface 140) and is processed as follows. The system or related system component(s) retrieves or "selects" the relevant policy instances based on a given key value (e.g., a combination of deviceId and client UUID) in the relevant policy index. Upon retrieving or selecting the relevant policy instances, the system may iterate over them, updating the current location of the user device in each policy instance, comparing the current location to the location constraints provided by the policy to determine whether access to the protected data asset should still be allowed under each respective policy, and, if the result of the location verification has changed, updating the location valid flag value in the respective policy instance; this avoids unnecessary updates to the user device's record for each affected policy instance when the location valid flag value (e.g., TRUE/FALSE) has not changed. The system may use a process such as this to efficiently locate and, if necessary, update the affected client policy instances (e.g., all those that may be affected by a location change).
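The update path just described might be sketched as follows, assuming the LocationRecord layout from the earlier sketch; the "select" step is represented here by a simple filter, and the function name is hypothetical.

def process_location_update(records, device_id, client_uuid, new_location):
    # records: dict of LocationRecord objects (see earlier sketch), keyed by (device_id, policy_uuid).
    matching = [r for r in records.values()
                if r.device_id == device_id and r.client_uuid == client_uuid]
    for rec in matching:
        rec.current_location = new_location
        new_valid = new_location in rec.allowed_locations
        if new_valid != rec.location_valid:
            # Write the flag back only when the TRUE/FALSE validity actually changes,
            # avoiding unnecessary updates for each affected policy instance.
            rec.location_valid = new_valid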
As noted above, in some embodiments, policy instances are updated using one or a combination of two main approaches: pull methods (e.g., through techniques such as polling) and push methods (e.g., through techniques such as callbacks from push notifications). Similarly, the system may be arranged to operate in a pull mode and/or a push mode. When operating in pull mode, each user device or other client/endpoint is polled, for example, by the system (e.g., policy service 110) at set intervals to obtain the current location of the device and to verify, for each relevant policy instance, whether that location is still valid under the corresponding policy.
In the push mode, the user device 130 is primarily responsible for updating the system 100 (e.g., the policy service 110 and/or the data access control server 120) when a location update event occurs. As described above, location update events may be pushed from the user device 130 to other components of the system 100 based on time, movement of the user device 130, or a combination of both, among other such possible events. When operating on a time basis, the user device 130 (or a component of the user device 130, such as the security agent 136) may be configured to push location update events at certain time intervals (e.g., once every 5 minutes or once per day). The length and frequency of these time intervals may be configured by an administrator (e.g., by using user interface 140) based at least in part on the sensitivity of the digital data assets covered by a particular policy instance. When operating based on location, the user device 130 (or a component of the user device 130, such as the security agent 136) may be configured to push location update events based on various location-based criteria (e.g., when the user device 130 moves a physical distance (e.g., at least 10 feet), when the user device 130 crosses a physical or network boundary (e.g., leaves a room or switches to a different network or router within a network), when the user device 130 moves from one network connection to another, etc.).
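A small sketch of how a security agent might decide, in push mode, whether to emit a location update event is shown below; the 5-minute interval and 10-foot threshold reuse the example values above and would, in practice, be administrator-configured.

PUSH_INTERVAL_SECONDS = 5 * 60   # assumed time-based push interval
MOVE_THRESHOLD_FEET = 10         # assumed movement threshold

def should_push_update(seconds_since_last_push, distance_moved_feet, network_changed):
    if seconds_since_last_push >= PUSH_INTERVAL_SECONDS:
        return True      # time-based push (e.g., once every 5 minutes)
    if distance_moved_feet >= MOVE_THRESHOLD_FEET:
        return True      # movement-based push (e.g., moved at least 10 feet)
    if network_changed:
        return True      # crossed a network boundary (new WAP, router, or network)
    return False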
Regardless of how a location update event arises, the system may process the event as it is received (e.g., as received at the policy service 110). For example, the system (e.g., policy service 110) may record each received event in event store 116 and update the policy instance in policy store 114, e.g., through the method steps described above. Further, if the system (e.g., policy service 110) determines that the location is no longer valid, the system (e.g., policy service 110) may send a notification to data access control server 120 and/or user device 130 indicating that access to the corresponding data asset(s) should be revoked (if previously granted) or that such access should not be provided (if access to any currently protected digital data assets is requested). In the event that previously granted access is revoked, the system may issue a command requiring the security agent 136 to close and/or re-encrypt the corresponding data asset(s).
The method 800, described with reference to the example elements shown in figs. 1-6, illustrates a process that may be performed in accordance with the present disclosure. More specifically, method 800 depicts a process for evaluating one or more policies, according to one embodiment. In one embodiment, one or more steps of method 800 may be performed by one or more components of system 100, such as policy service 110 and/or data access control server 120, working in conjunction with each other and with other elements, such as those shown in figs. 1-6.
In the example method of FIG. 8, the determination of policy enforcement begins in step 810 by determining which policy or policies apply to a given client policy session. Assuming that at least one policy applies to the current client policy session, method 800 then selects a policy P(i) to evaluate in step 820. (As used in the context of FIG. 8 and the accompanying discussion, and as will be understood in the art, "i" is an integer used to track the current policy being evaluated.)
After selecting policy P(i) in step 820, method 800 then evaluates each policy constraint (including any constraints that may be implied in the policy) in step 830 by evaluating the policy against various information and inputs such as those described elsewhere in this disclosure. In step 840, the method 800 makes a determination based on the results of the evaluation in step 830. If step 840 determines that all constraints of the policy are satisfied, then method 800 proceeds to step 850, where method 800 allows access to the protected data asset in question (or grants long-term access to the user device), and then method 800 ends.
If step 840 determines that not all constraints of the policy are satisfied, then method 800 proceeds to step 860. At step 860, the method 800 determines whether there are any other policies P(i). If step 860 determines that one or more additional policies are available, method 800 then proceeds to step 870, where the count variable ("i") is incremented, and then returns to step 820, where the next policy P(i) is selected. (Although step 870 is included herein to facilitate following the steps of the flow chart in FIG. 8, in practice such a counting variable is not necessary, but may be used.) If step 860 determines that no other policy exists, the method 800 then ends.
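One possible reading of the FIG. 8 flow, in which access is granted as soon as any single policy is fully satisfied, is sketched below. The representation of a policy as a list of constraint callables evaluated against a context is an assumption made for this example.

```python
def evaluate_policies(policies, context):
    """Sketch of the FIG. 8 flow: grant access if any one applicable policy
    has all of its constraints satisfied."""
    for policy in policies:                          # steps 820/860/870: iterate over P(i)
        all_satisfied = all(constraint(context)      # step 830: evaluate each constraint
                            for constraint in policy["constraints"])
        if all_satisfied:                            # step 840: all constraints met?
            return "ALLOW"                           # step 850: grant access
    return "DENY"                                    # no policy fully satisfied
```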
In the embodiment shown in FIG. 8 and described above, access to protected data assets may be granted if the constraints of any one of policies P(i) are fully satisfied. For example, in the embodiment shown in FIG. 8, each policy may include various constraints, such as user groups, location information, and time. In this embodiment, each policy may contain its own set of constraints, but the particular constraints and the acceptable values for those constraints may vary depending on the policy. As an example, a first policy may specify that a certain protected data asset may be accessed by a member of an "executive" user group if the security agent requesting access is located at a particular office, with the request permitted at any time of day. A second policy may specify that the same protected data asset is accessible to a member of the "administrative" user group if a member of the "executive" user group is within a certain distance and the request is issued from a conference room during normal business hours. In this example, access to the protected data asset may be granted if the constraints of either policy are fully satisfied.
Although not explicitly depicted in FIG. 8, an alternative embodiment also exists. In this alternative embodiment, access is granted only if the constraints of each policy P(i) are fully satisfied. This embodiment allows for the case in which there is a separate policy for each type of constraint. For example, there may be a first "user group" policy that covers all user groups that have permission to access the protected data asset, a second "location" policy that covers all acceptable locations from which the protected data asset may be accessed, a third "time" policy that covers acceptable times at which the protected data asset may be accessed, and so on. In this alternative embodiment, the constraints within each policy may have a logical "OR" relationship with each other. For example, a "location" policy may include constraints indicating that the protected data asset may be accessed in an executive office, a staff office, or a conference room. Since a security agent typically cannot be located in three different locations at the same time, the policy may include these constraints in a logical OR relationship, that is, as a series of alternative ways of satisfying the location policy. Thus, in this embodiment, access should only be granted if at least one constraint from each policy type (e.g., user group policy, location policy, time policy, etc.) is satisfied.
Thus, in this alternative embodiment, steps 810-840 remain the same. However, in this alternative embodiment, if step 840 determines that all of the constraints of a given policy P(i) have been satisfied, then method 800 will proceed to step 860. If any other policies are available in step 860, the method 800 then proceeds to step 870 and then loops back to step 820. In this alternative embodiment, the method 800 continues the loop as long as other policies are available and no policy fails in step 840. If the method 800 iterates through all available policies P(i) without any policy failures in step 840, the method 800 of this alternative embodiment then proceeds to step 850, where access is granted, at which point the alternative embodiment of the method 800 ends. During any iteration of method 800, if step 840 determines that not all constraints of any given policy P(i) are satisfied, then the alternative embodiment will end without granting access to the protected digital data asset (or without granting long-term access to the user device).
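The alternative embodiment, in which every policy must be satisfied but the constraints within a policy form alternatives, could be sketched as follows (again treating constraints as illustrative callables rather than an interface defined by the disclosure).

```python
def evaluate_policies_all_required(policies, context):
    """Sketch of the alternative embodiment: every policy P(i) must be
    satisfied, where a policy (e.g., a "location" policy) is satisfied when at
    least one of its alternative constraints holds (logical OR within the
    policy)."""
    for policy in policies:
        # OR within a policy: any one acceptable alternative satisfies it.
        if not any(constraint(context) for constraint in policy["constraints"]):
            return "DENY"      # a single unsatisfied policy type ends the evaluation
    return "ALLOW"             # every policy type had at least one alternative satisfied
```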
The foregoing description is intended to be illustrative and should not be taken to be limiting. It will be understood from this disclosure that other embodiments are possible. Those skilled in the art will readily implement the steps necessary to provide the structures and methods disclosed herein, and will understand that the process parameters and sequence of steps are given by way of example only and can be varied to achieve the desired structure and modifications within the scope of the claims. Variations and modifications of the embodiments disclosed herein may be made based on the description set forth herein, without departing from the scope of the claims, giving full cognizance to equivalents in all respects.
FIG. 9 is a flow diagram illustrating a method 900 of various actions performed in connection with one embodiment of the systems and techniques disclosed herein. It will also be understood from this disclosure that the method may be modified to yield alternative embodiments. Further, while the steps in this embodiment are shown in a sequential order, certain steps may occur in a different order than shown, certain steps may be performed simultaneously, certain steps may be combined with other steps, and certain steps may be omitted in another embodiment.
The method 900, described with reference to the example elements shown in FIGS. 1-6, illustrates a process that may be performed in accordance with the present disclosure. More specifically, method 900 depicts a process for determining locations with sufficient granularity, according to one embodiment. In one embodiment, one or more steps of method 900 may be performed by devices such as user device 130 and/or location tracking server 170, working in conjunction with each other and with other elements, such as those shown in FIGS. 1-6.
In the example process of FIG. 9, the determination of policy enforcement begins with determining the available type(s) of location information in step 910. The type(s) of location information relevant to the applicable policy(ies) is then determined in step 920. The matching location information type(s) is then selected in step 930. This results in a set of location information types to be considered in applying the policy. The matching location information type(s) may then be sorted or weighted by granularity in step 940. For example, less restrictive location information (e.g., GPS information) may be considered (ranked) before more restrictive location information (e.g., hyperlocation information), or weighted slightly higher than more restrictive location information. Continuing to step 950, the next least restrictive matching location information type is selected for consideration and/or analysis. One or more applicable location information requirements are then applied, and a determination is made in step 960 as to whether one or more policy criteria are satisfied. If the relevant criteria are not met, an indication is made in step 970 to block the requested access, and the process then ends. Otherwise, the process iterates until the necessary policy requirements are met. This iteration is accomplished by determining whether more policy requirements are to be considered and/or analyzed in step 980. If additional policy requirements are to be considered and/or analyzed, the method 900 then loops back to step 950. Otherwise, if there are no more policy requirements to consider or analyze and the applicable requirements have been met, access to the protected data asset is granted in step 990. The method 900 then ends.
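By way of illustration only, the FIG. 9 flow might be approximated as follows. The granularity ranking, the mapping from location information types to requirement callables, and the type names themselves are assumptions introduced for this sketch rather than elements of the disclosure.

```python
def determine_location_access(available_types, policy_requirements, context):
    """Sketch of the FIG. 9 flow: match available location information types
    against those used by the policy, order them from least to most
    restrictive, and apply each requirement in turn."""
    # Lower rank = less restrictive / coarser granularity (considered first).
    granularity_rank = {"gps": 0, "wifi_ap": 1, "hyperlocation": 2}

    # Steps 910-930: intersect available types with the types the policy uses.
    matching = [t for t in available_types if t in policy_requirements]

    # Step 940: sort (or weight) the matching types by granularity.
    matching.sort(key=lambda t: granularity_rank.get(t, 0))

    # Steps 950-980: apply each applicable requirement, least restrictive first.
    for loc_type in matching:
        if not policy_requirements[loc_type](context):
            return "BLOCK"     # step 970: a policy criterion was not met
    return "GRANT"             # step 990: all applicable requirements satisfied
```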
It will be understood in light of this disclosure that policies such as those described herein may inform decisions regarding the location of a given user/device at varying levels of granularity. For example, less precise requirements mean "looser" location control and faster authorization, and typically consume less processing power and require less communication, thereby consuming less bandwidth and fewer computing resources than more precise requirements. Conversely, more accurate location information means tighter or more accurate location control, which reduces the risk of unauthorized access, but may take longer to authorize and may consume more bandwidth and more computing resources in the process. Further, the finer the granularity, the more sensitive the system is to changes in location, but the more location information is generated. The systems disclosed herein provide for policies in which the granularity requirement for such location information may be specified based on predetermined parameters and/or other variables, and may also be specified on a sliding scale or as a weighted value derived from analysis in conjunction with other values and/or information specified in the policy.
In summary, various systems and methods are disclosed herein for determining whether to allow or continue to allow access to protected data assets. For example, a method involves receiving a request to access a protected data asset, where the request is received from a first user device; determining whether access to the protected data asset is permitted, wherein the determining comprises evaluating one or more criteria associated with the first user device, and the one or more criteria comprise first information associated with a first policy constraint; and granting access to the protected data asset in response to determining that access to the protected data asset is permitted.
The foregoing description is intended to be illustrative and should not be taken to be limiting. It will be understood from this disclosure that other embodiments are possible. Those skilled in the art will readily implement the steps necessary to provide the structures and methods disclosed herein, and will understand that the process parameters and sequence of steps are given by way of example only and can be varied to achieve the desired structure and modifications within the scope of the claims. Variations and modifications of the embodiments disclosed herein may be made based on the description set forth herein, without departing from the scope of the claims, giving full cognizance to equivalents in all respects.
Other embodiments
The system described herein is well suited to attain the advantages mentioned as well as others inherent therein. While such a system has been depicted, described, and is defined by reference to particular descriptions, such references do not imply a limitation on the claims, and no such limitation is to be inferred. The systems described herein are capable of considerable modification, alteration, and equivalents in form and function, as will occur to those ordinarily skilled in the pertinent art upon consideration of this disclosure. The depicted and described embodiments are examples only, and are not intended to be exhaustive of the scope of the claims.
The foregoing detailed description has set forth various embodiments of the systems described herein through the use of block diagrams, flowcharts, and examples. It will be understood by those within the art that each block diagram component, flowchart step, operation, and/or component illustrated by the use of examples can be implemented (individually and/or collectively) by a variety of hardware, software, firmware, or any combination thereof.
The system described herein has been described in the context of a fully functional computer system; however, those skilled in the art will appreciate that the system described herein is capable of being distributed as a program product in a variety of forms, and that the system described herein applies equally regardless of the particular type of computer-readable media used to actually carry out the distribution. Examples of computer readable media include computer readable storage media, and media storage and distribution systems developed in the future.
The embodiments discussed above may be implemented by software modules that perform one or more tasks associated with the embodiments. The software modules discussed herein may include script, batch, or other executable files. The software modules may be stored on a machine-readable or computer-readable storage medium, such as a magnetic floppy disk, a hard disk, a semiconductor memory (e.g., RAM, ROM, and flash-type media), an optical disk (e.g., CD-ROM, CD-R, and DVD), or other type of memory module. A storage device for storing firmware or hardware modules according to an embodiment may also include a semiconductor-based memory, which may be permanently, removably or remotely coupled to the microprocessor/memory system. Accordingly, the modules may be stored within a computer system memory to configure the computer system to perform the functions of the module. Other new and various types of computer-readable storage media may be used to store the modules discussed herein.
The foregoing description is intended to be illustrative and should not be taken to be limiting. It will be understood from this disclosure that other embodiments are possible. Those skilled in the art will readily implement the steps necessary to provide the structures and methods disclosed herein, and will understand that the process parameters and sequence of steps are given by way of example only and can be varied to achieve the desired structure and modifications within the scope of the claims. Variations and modifications of the embodiments disclosed herein may be made based on the description set forth herein, without departing from the scope of the claims, giving full cognizance to equivalents in all respects.
Although the present disclosure has been described in connection with several embodiments, it is not intended to be limited to the specific form set forth herein. On the contrary, it is intended to cover such alternatives, modifications, and equivalents as may be reasonably included within the scope of the disclosure as defined by the appended claims.