RELATED APPLICATION

This patent arises from a continuation of U.S. Provisional Patent Application Ser. No. 62/907,597, which was filed on Sep. 28, 2019, and U.S. Provisional Patent Application Ser. No. 62/939,303, which was filed on Nov. 22, 2019. U.S. Provisional Patent Application Ser. Nos. 62/907,597 and 62/939,303 are hereby incorporated herein by reference in their entireties. Priority to U.S. Provisional Patent Application Ser. Nos. 62/907,597 and 62/939,303 is hereby claimed.
FIELD OF THE DISCLOSURE

This disclosure relates generally to edge environments and, more particularly, to methods, systems, articles of manufacture, and apparatus to manage telemetry data in an edge environment.
BACKGROUND

Edge environments (e.g., an Edge, Fog, multi-access edge computing (MEC), or Internet of Things (IoT) network) enable a workload execution (e.g., an execution of one or more computing tasks, an execution of a machine learning model using input data, etc.) near endpoint devices that request an execution of the workload. Edge environments may include infrastructure, such as an edge platform, that is connected to an edge cloud and/or data center cloud infrastructures, endpoint devices, or additional edge infrastructure via networks such as the Internet. Edge platforms may be closer in proximity to endpoint devices than public and/or private cloud infrastructure including servers in traditional data-center clouds.
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 depicts an example environment including an example cloud environment, an example edge environment, an example endpoint environment, and example telemetry controllers to aggregate telemetry data.
FIG. 2 depicts an example block diagram illustrating an example implementation of the telemetry controller of FIG. 1.
FIG. 3 is a flowchart representative of a process that may be implemented using logic or machine readable instructions that may be executed to implement the telemetry controller of FIGS. 1 and/or 2 when a wish list is obtained from a consumer.
FIG. 4 is a flowchart representative of a process that may be implemented using logic or machine readable instructions that may be executed to implement the telemetry controller of FIGS. 1 and/or 2 when a wish list is published.
FIG. 5 is a flowchart representative of a process that may be implemented using logic or machine readable instructions that may be executed to implement the extractor of FIG. 2 to process a commitment by extracting and/or aggregating telemetry data.
FIG. 6 is a flowchart representative of a process that may be implemented using logic or machine readable instructions that may be executed to implement the collector of FIG. 2 to process a commitment by collecting telemetry data.
FIG. 7 is a flowchart representative of a process that may be implemented using logic or machine readable instructions that may be executed to implement the indexer of FIG. 2 to process a commitment by indexing and/or searching telemetry data.
FIG. 8 is a flowchart representative of a process that may be implemented using logic or machine readable instructions that may be executed to implement the communication interface of FIG. 2 to establish the priority channel of FIG. 1.
FIG. 9 is a block diagram of an example processor platform structured to execute the instructions of FIGS. 3, 4, 5, 6, 7, and/or 8 to implement the telemetry controller of FIGS. 1 and/or 2.
The figures are not to scale. In general, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts. Connection references (e.g., attached, coupled, connected, and joined) are to be construed broadly and may include intermediate members between a collection of elements and relative movement between elements unless otherwise indicated. As such, connection references do not necessarily infer that two elements are directly connected and in fixed relation to each other.
DETAILED DESCRIPTION

Descriptors “first,” “second,” “third,” etc. are used herein when identifying multiple elements or components which may be referred to separately. Unless otherwise specified or understood based on their context of use, such descriptors are not intended to impute any meaning of priority, physical order or arrangement in a list, or ordering in time but are merely used as labels for referring to multiple elements or components separately for ease of understanding the disclosed examples. In some examples, the descriptor “first” may be used to refer to an element in the detailed description, while the same element may be referred to in a claim with a different descriptor such as “second” or “third.” In such instances, it should be understood that such descriptors are used merely for ease of referencing multiple elements or components.
Edge computing, at a general level, refers to the transition of compute and storage resources closer to endpoint devices (e.g., consumer computing devices, user equipment, etc.) to optimize total cost of ownership, reduce application latency, improve service capabilities, and improve compliance with data privacy or security requirements. Edge computing may, in some scenarios, provide a cloud-like distributed service that offers orchestration and management for applications among many types of storage and compute resources. As a result, some implementations of edge computing have been referred to as the “edge cloud” or the “fog,” as powerful computing resources previously available only in large remote data centers are moved closer to endpoints and made available for use by consumers at the “edge” of the network. In some examples, edge computing can include multiple “edges,” such as, for example, an edge directed toward IoT devices, an edge directed towards a cloud network, an edge directed toward mobile and/or multi-access edges (e.g., cell phones, drones, autonomous vehicles), and/or an edge directed toward private clouds, information technology (IT), etc.
Edge computing use cases in mobile network settings have been developed for integration with multi-access edge computing (MEC) approaches, also known as “mobile edge computing.” MEC approaches are designed to allow application developers and content providers to access computing capabilities and an IT service environment in dynamic mobile network settings at the edge of the network. Limited standards have been developed by the European Telecommunications Standards Institute (ETSI) industry specification group (ISG) in an attempt to define common interfaces for operation of MEC systems, platforms, hosts, services, and applications.
Edge computing, MEC, and related technologies attempt to provide reduced latency, increased responsiveness, and more available computing power than offered in traditional cloud network services and wide area network connections. However, the integration of mobility and dynamically launched services to some mobile use and device processing use cases has led to limitations and concerns with orchestration, functional coordination, and resource management, especially in complex mobility settings where many participants (e.g., devices, hosts, tenants, platforms, service providers, operators, etc.) are involved.
In a similar manner, Internet of Things (IoT) networks and devices are designed to offer a distributed compute arrangement from a variety of endpoints. IoT devices can be physical or virtualized objects that may communicate on a network, and can include sensors, actuators, and other input/output components, which may be used to collect data or perform actions in a real-world environment. For example, IoT devices can include low-powered endpoint devices that are embedded or attached to everyday things, such as buildings, vehicles, packages, etc., to provide an additional level of artificial sensory perception of those things. In recent years, IoT devices have become more popular and thus applications using these devices have proliferated.
The deployment of various Edge, Fog, MEC, and IoT networks, devices, and services have introduced a number of advanced use cases and scenarios occurring at and towards the edge of the network. However, these advanced use cases have also introduced a number of corresponding technical challenges relating to security, processing and network resources, service availability and efficiency, among many other issues. One such challenge is in relation to Edge, Fog, MEC, and IoT networks, devices, and services executing workloads on behalf of endpoint devices.
In some examples, an edge environment can include an enterprise edge in which communication with and/or communication within the enterprise edge can be facilitated via wireless and/or wired connectivity.
The present techniques and configurations may be utilized in connection with many aspects of current networking systems, but are provided with reference to Edge Cloud, IoT, Multi-access Edge Computing (MEC), and other distributed computing deployments. The following systems and techniques may be implemented in, or augment, a variety of distributed, virtualized, or managed edge computing systems. These include environments in which network services are implemented or managed using multi-access edge computing (MEC), fourth generation (4G) or fifth generation (5G) wireless network configurations; or in wired network configurations involving fiber, copper, and other connections. Further, aspects of processing by the respective computing components may involve computational elements which are in geographical proximity of a user equipment or other endpoint locations, such as a smartphone, vehicular communication component, IoT device, etc. Further, the presently disclosed techniques may relate to other Edge/MEC/IoT network communication standards and configurations, and other intermediate processing entities and architectures.
Edge computing is a developing paradigm where computing is performed at or closer to the “edge” of a network, typically through the use of a computing platform implemented at base stations, gateways, network routers, or other devices which are much closer to end point devices producing and consuming the data. For example, edge gateway servers may be equipped with pools of memory and storage resources to perform computation in real-time for low latency use-cases (e.g., autonomous driving or video surveillance) for connected client devices. Or as an example, base stations may be augmented with compute and acceleration resources to directly process service workloads for connected user equipment, without further communicating data via backhaul networks. Or as another example, central office network management hardware may be replaced with computing hardware that performs virtualized network functions and offers compute resources for the execution of services and consumer functions for connected devices.
Edge environments and/or edge clouds include networks and/or portions of networks that are located between a cloud environment and an endpoint environment. Edge environments enable computations of workloads at edges of a network. For example, an endpoint device may request a nearby base station to compute a workload rather than a central server in a data center cloud environment. Edge environments include edge platforms, which include pools of memory, storage resources, and processing resources. Edge platforms perform computations, such as an execution of a workload, on behalf of other edge platforms, edge services, and/or edge nodes. Edge environments facilitate connections between producers (e.g., workload executors, edge platforms) and consumers (e.g., other edge platforms, endpoint devices).
Because edge platforms may be closer in proximity to endpoint devices than centralized servers in cloud environments, edge platforms enable computations of workloads with a lower latency (e.g., response time) than cloud environments. Edge platforms may also enable a localized execution of a workload based on geographic locations or network topographies. For example, an endpoint device may require a workload to be executed in a first geographic area, but a centralized server may be located in a second geographic area. The endpoint device can request a workload execution by an edge platform located in the first geographic area to comply with corporate or regulatory restrictions.
Examples of workloads to be executed in an edge environment include autonomous driving computations, video surveillance monitoring, machine learning model executions, and real time data analytics. Additional examples of workloads include delivering and/or encoding media streams, measuring advertisement impression rates, object detection in media streams, speech analytics, asset and/or inventory management, and augmented reality processing.
Edge platforms enable both the execution of workloads and a return of a result of an executed workload to endpoint devices with a response time lower than the response time of a server in a cloud environment. For example, if an edge platform is located closer to an endpoint device on a network than a cloud server, the edge platform may respond to workload execution requests from the endpoint device faster than the cloud server. An endpoint device may request an execution of a time-constrained workload from an edge platform rather than a cloud server. As used herein, a cloud environment may include a combination of edge clouds and/or any suitable backend components in a data center, cloud infrastructure, etc.
In addition, edge platforms enable the distribution and decentralization of workload executions. For example, an endpoint device may request a first workload execution and a second workload execution. In some examples, a cloud server may respond to both workload execution requests. With an edge environment, however, a first edge platform may execute the first workload execution request, and a second edge platform may execute the second workload execution request.
To meet the low-latency and high-bandwidth demands of endpoint devices, orchestration in edge clouds has to be performed on the basis of timely information about the utilization of many resources (e.g., hardware resources, software resources, virtual hardware and/or software resources, etc.), and the efficiency with which those resources are able to meet the demands placed on them. Such timely information is generally referred to as telemetry (e.g., telemetry data, telemetry information, etc.).
Telemetry can be generated from a plurality of sources including each hardware component or portion thereof, virtual machines (VMs), operating systems (OSes), applications, and orchestrators. Telemetry can be used by edge platforms, orchestrators, schedulers, etc., to determine a quantity and/or type of computation tasks to be scheduled for execution at which resource or portion(s) thereof, and an expected time to completion of such computation tasks based on historical and/or current (e.g., instant or near-instant) telemetry. For example, a core of a multi-core central processing unit (CPU) can generate over a thousand different varieties of information every fraction of a second using a performance monitoring unit (PMU) sampling the core and/or, more generally, the multi-core CPU. Periodically aggregating and processing all such telemetry in a given edge platform, edge service, etc., to obtain a different distilled metric of interest at different times can be an arduous and cumbersome process. Prioritizing salient features of interest and extracting such salient features from telemetry to identify current or future problems, stressors, etc., associated with a resource is difficult. Furthermore, identifying a different resource to offload workloads from the burdened resource is a complex undertaking.
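The distillation described above can be illustrated with a simplified sketch that reduces a high-rate stream of raw samples to one metric of interest and flags a stressed resource. The class name, window size, and stress threshold below are hypothetical assumptions made for illustration only; they are not drawn from the disclosure.

```python
from collections import deque
from statistics import mean

class TelemetryDistiller:
    """Distills a high-rate counter stream (e.g., PMU samples) into a single
    metric of interest. A hypothetical sketch: the window size, threshold,
    and all names are illustrative assumptions."""

    def __init__(self, window=8, stress_threshold=0.9):
        self.samples = deque(maxlen=window)  # bounded window of raw samples
        self.stress_threshold = stress_threshold

    def ingest(self, utilization):
        """Record one raw sample (e.g., a normalized core-utilization reading)."""
        self.samples.append(utilization)

    def distilled_metric(self):
        """Return the aggregated metric (here, a simple moving average)."""
        return mean(self.samples) if self.samples else 0.0

    def is_stressed(self):
        """Flag a current or impending stressor when the metric crosses a threshold."""
        return self.distilled_metric() >= self.stress_threshold

d = TelemetryDistiller(window=4)
for u in (0.95, 0.92, 0.97, 0.94):
    d.ingest(u)
print(round(d.distilled_metric(), 3), d.is_stressed())
```

In practice the aggregation function could be any distillation (percentiles, rates of change, etc.); a moving average is used here only to keep the sketch short.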
Further, edge clouds may include a plurality of edge platforms implemented in a decentralized manner. Accordingly, such distributed edge platforms in an edge cloud may aggregate and process telemetry data to perform peer-to-peer tasks such as, for example, migrating a service and/or resource from one edge platform to another edge platform, migrating a tenant, etc. Some edge environments seek to obtain telemetry data associated with resources executing a variety of functions or services, such as data processing or video analytics functions (e.g., machine vision, image processing for autonomous vehicles, facial recognition detection, visual object detection, etc.), and to extract, aggregate, index, filter, and/or search such telemetry data. In an edge cloud, each edge platform may be communicatively coupled to a server via intermittent wide area network (WAN) links having a variable amount of bandwidth. Additionally, there may be other, low-bandwidth communication paths (e.g., publish-subscribe paths) accessible by edge platforms via carrier-grade internet (e.g., MQ Telemetry Transport (MQTT) and Internet Protocol version 4 (IPv4)) and/or public internet (e.g., MQTT and Internet Protocol version 6 (IPv6)). In the event an edge platform attempts to migrate a resource, tenant, service, etc., such an edge platform attempts to ascertain telemetry data from other edge platforms in the edge cloud to properly migrate the resource, tenant, service, etc. In this example, the telemetry data may include network congestion data, available resources, etc.
However, because the edge cloud is made up of a decentralized, distributed network of edge platforms, orchestration and load balancing between edge platforms in an edge cloud can be inefficient and/or time consuming. For example, in the event a first edge platform initiates a migration command to offload a task and/or a service to another edge platform, such a first edge platform may not have access to telemetry data from the other edge platforms in the edge cloud. Furthermore, any telemetry data obtained by the first edge platform may be stale (e.g., telemetry data that is no longer accurate) and/or obtained with a high transfer latency.
Accordingly, examples disclosed herein implement a decentralized network of edge platforms configured to broker telemetry data in a low-latency manner. In examples disclosed herein, edge platforms in an edge cloud include a telemetry controller configured to broker communication between edge platforms. In this manner, edge platforms can request and subsequently receive telemetry data in a ready-to-use format. In examples disclosed herein, a consumer (e.g., a user of telemetry data, another edge platform, and/or any suitable computing device) may transmit a wish list to the edge platforms in the edge environment. As used herein, a wish list, additionally or alternatively referred to as a telemetry wish list, corresponds to a request by consumers (e.g., a user of telemetry data, another edge platform, and/or any suitable computing device) to initiate a task such as, for example, obtaining resource utilizations to migrate a service, migrate a tenant, offload a service, etc. Further, a wish list (e.g., telemetry wish list) may identify when the information is to be obtained and/or otherwise retrieved (e.g., obtain the information now, obtain the information for a ten minute period, obtain the information to be generally available for access, etc.). To determine whether other edge platforms in the edge environment are capable of executing such a task, data relating to the tasks in the wish list (e.g., telemetry wish list) is brokered and obtained. In such examples disclosed herein, the edge platforms are configured to broker commitments to tasks in the wish list (e.g., telemetry wish list). For example, a wish list (e.g., telemetry wish list) may (1) request telemetry data from device A, (2) request a specified set of telemetry data to be extracted, and (3) request the specified set of telemetry data to be aggregated. 
Further in such an example, device A may transmit a commitment to the edge environment indicating responsibility for (1) collecting telemetry data, device B may transmit a commitment to the edge environment indicating responsibility to (2) extract the specified set of telemetry data, and device C (or any other suitable device such as, for example, device A or device B) may transmit a commitment to the edge environment indicating responsibility to (3) aggregate the previously extracted specified set of data.
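The wish-list/commitment exchange in the preceding example can be sketched as follows. This is a hypothetical, simplified model provided for illustration only; the names WishList, broker, and fully_committed are assumptions of the sketch and do not limit the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class WishList:
    """A consumer's telemetry wish list: an ordered set of requested tasks,
    each to be covered by exactly one committing device."""
    consumer: str
    tasks: tuple                                   # e.g., ("collect", "extract", "aggregate")
    commitments: dict = field(default_factory=dict)  # task -> committed device

    def broker(self, device, task):
        """Record a device's voluntary commitment to one task; the first
        committing device wins, and unknown tasks are rejected."""
        if task in self.tasks and task not in self.commitments:
            self.commitments[task] = device
            return True
        return False

    def fully_committed(self):
        """True once every requested task has a committed device."""
        return all(t in self.commitments for t in self.tasks)

wl = WishList(consumer="edge-platform-1",
              tasks=("collect", "extract", "aggregate"))
wl.broker("device-A", "collect")    # device A collects telemetry data
wl.broker("device-B", "extract")    # device B extracts the specified set
wl.broker("device-C", "aggregate")  # device C aggregates the extracted set
```

A real broker would also handle contention, revocation, and the timing constraints a wish list may carry; those are omitted here for brevity.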
Further, in examples disclosed herein, edge platforms in an edge cloud are capable of performing local filtering to deliver high-quality, succinct telemetry data. Examples disclosed herein utilize a computing overlay distributed across edge platforms in an edge cloud to overlay a naming and accessing layer for telemetry data. Examples disclosed herein implement a distributed metadata lake including contributors, filterers, indexers, and accessors. As used herein, a distributed metadata lake refers to a repository and/or storage mechanism of data that can be stored physically in one or more computing devices, computing machines, servers, etc. Data stored in a metadata lake may be physically distributed amongst edge servers, yet in examples disclosed herein, the metadata lake serves as an aggregation point for high-level semantic information, filtering, etc. In examples disclosed herein, the telemetry data stored in the metadata lake may be lossy.
Examples disclosed herein utilize a distributed metadata lake operable to deliver timely information (e.g., timely telemetry data transfer) to various edge platforms in an edge cloud. In examples disclosed herein, the data stored in the metadata lake can be updated continuously to reduce the amount of lossy data stored and/or transferred in and/or from the metadata lake. To achieve this, examples disclosed herein may broker telemetry data among edge platforms (e.g., implement a crowd-sharing protocol, implement commitments, etc.) to ensure independency of operation and transactional consistency. Thus, data stored in the metadata lake may be continually updated and, though a portion of the data may be lost, the overall patterns in the data are robustly identifiable, interpolate-able, and/or extrapolate-able.
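One shard of such a continuously updated, freshness-aware metadata lake might be sketched as follows. The freshness window and all names are illustrative assumptions only, not details from the disclosure.

```python
class MetadataLake:
    """Sketch of one shard of the distributed metadata lake: latest-value
    upserts keyed by (platform, metric). Continuous updates bound how stale
    (lossy) an entry can become."""

    def __init__(self, freshness_window=10.0):
        self.entries = {}  # (platform, metric) -> (timestamp, value)
        self.freshness_window = freshness_window

    def upsert(self, platform, metric, value, timestamp):
        """Continuously replace the stored value; older samples are lossy by design."""
        self.entries[(platform, metric)] = (timestamp, value)

    def lookup(self, platform, metric, now):
        """Return (value, stale_flag); stale entries can still reveal overall patterns."""
        timestamp, value = self.entries[(platform, metric)]
        return value, (now - timestamp) > self.freshness_window

lake = MetadataLake(freshness_window=10.0)
lake.upsert("edge-1", "cpu_util", 0.58, timestamp=90.0)
lake.upsert("edge-1", "cpu_util", 0.62, timestamp=100.0)  # continuous update
value, stale = lake.lookup("edge-1", "cpu_util", now=105.0)
```

Because only the latest value per key is kept, intermediate samples are lost, yet the stored values remain interpolate-able for pattern analysis, consistent with the lossy-but-robust behavior described above.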
Examples disclosed herein utilize a naming and dictionary directory structured to store data identifying the edge platform associated with telemetry data. In this manner, examples disclosed herein may perform a lookup of telemetry data if the telemetry data is not up-to-date. For example, if telemetry data is not brokered to the metadata lake continuously, examples disclosed herein may utilize a lookup process to identify the non-updated version (e.g., old version) of the telemetry data. Telemetry data may not be brokered to the metadata lake in the event network conditions exhibit high latency. Further, in this manner, an edge platform and/or user requesting telemetry data may access the telemetry data from the metadata lake independent of the manner in which the telemetry data is obtained, stored, and/or filtered.
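The lookup path through such a naming and dictionary directory might look like the following sketch, in which the directory resolves a telemetry name to its owning edge platform when the brokered copy is not up-to-date. All names and the brokered-copy representation are hypothetical assumptions of this sketch.

```python
class NamingDirectory:
    """Maps a telemetry name to the edge platform that produces it, so a
    requester can fall back to the source when the brokered copy is stale."""

    def __init__(self):
        self.owners = {}

    def register(self, telemetry_name, platform):
        self.owners[telemetry_name] = platform

    def resolve(self, telemetry_name):
        return self.owners.get(telemetry_name)


def read_telemetry(name, brokered, directory):
    """Return the brokered value when fresh; otherwise also identify the
    owning platform to refetch from. `brokered` maps name -> (value, is_stale)."""
    value, is_stale = brokered[name]
    if not is_stale:
        return value, None
    return value, directory.resolve(name)

directory = NamingDirectory()
directory.register("edge-2/net_congestion", "edge-platform-2")
brokered = {"edge-2/net_congestion": (0.31, True)}  # stale copy in the lake
value, refetch_from = read_telemetry("edge-2/net_congestion", brokered, directory)
```

Note the requester never needs to know how the telemetry was obtained, stored, or filtered; the directory and lake interface hide those details.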
In examples disclosed herein, telemetry data transmission to and/or from an edge platform may be initiated based on a voluntary commitment. As used herein, a voluntary commitment refers to a commitment approved and/or otherwise assigned by and/or from a component to execute a task included in a wish list (e.g., telemetry wish list). Further, such a voluntary commitment may be associated with a commitment duration. For example, in the event telemetry data is requested, examples disclosed herein may transmit a voluntary commitment request to suitable edge platforms (e.g., edge platforms that are capable of fulfilling the telemetry data request) to provide telemetry data and/or services for the next ten minutes. As used herein, data sent continuously by a component fulfilling a commitment refers to data sent whenever the component obtains telemetry data and/or otherwise executes the task(s) in the wish list (e.g., telemetry wish list). In some examples, continuous transmission of telemetry data may refer to data sent every microsecond, millisecond, hour, day, week, etc. In this manner, telemetry data may be continuously sent to the metadata lake from a number of edge platforms (e.g., participants that have accepted the voluntary commitment) that implement tasks as a result of voluntarily committing. In some examples disclosed herein, acceptance of voluntary commitments may be obtained prior to implementation. In examples disclosed herein, acceptance of voluntary commitments may be incentivized to assist other edge platforms in obtaining the telemetry data needed. Such an example incentive method may include associating a preference level (e.g., an increased credit) with an edge platform which, in turn, may enable such an edge platform to obtain a higher service level agreement (SLA) in obtaining the telemetry data requested at various points in time.
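A voluntary commitment with a bounded duration, together with the credit-based incentive described above, may be sketched as follows. The field names, the 600-second window, and the credit bookkeeping are illustrative assumptions of the sketch only.

```python
from dataclasses import dataclass

@dataclass
class VoluntaryCommitment:
    """A device's voluntary commitment to a wish-list task for a bounded
    duration (e.g., 'provide telemetry for the next ten minutes')."""
    device: str
    task: str
    accepted_at: float  # seconds, from any shared clock
    duration: float     # commitment window, in seconds

    def active(self, now):
        """The commitment only binds within its window."""
        return self.accepted_at <= now < self.accepted_at + self.duration


def reward(credits, device, amount=1):
    """Incentive sketch: raise a committing device's preference level
    (credit), which could later map to a higher SLA for its own requests."""
    credits[device] = credits.get(device, 0) + amount
    return credits[device]

c = VoluntaryCommitment(device="edge-platform-2", task="collect",
                        accepted_at=0.0, duration=600.0)  # ten minutes
credits = {}
reward(credits, "edge-platform-2")  # credit granted upon acceptance
```

When the window lapses, the committing platform simply stops transmitting; no revocation message is required in this simplified model.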
Examples disclosed herein create information domains and credentials to ensure the distributed metadata lake can provide credentialed accesses, both for producers and consumers of information. Because telemetry data may be available only for certain time periods, credentialed access enables credentials to be rescinded upon expiration of the time period. Furthermore, examples disclosed herein include removing and/or otherwise archiving expired telemetry data (e.g., telemetry data no longer accessible) by removing old telemetry data and/or archiving such old telemetry data into a backend storage (e.g., a cloud storage).
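Credentialed, time-bounded access with archival of expired telemetry might be sketched as below. The class layout, the sweep policy, and all names are hypothetical assumptions for illustration; they are not details from the disclosure.

```python
class CredentialedLakeAccess:
    """Sketch of credentialed access to the metadata lake: a credential is
    valid only within its time window and is effectively rescinded at expiry,
    while expired telemetry is swept into a backend (e.g., cloud) archive."""

    def __init__(self):
        self.credentials = {}  # principal -> credential expiry time
        self.live = {}         # telemetry name -> (value, availability expiry)
        self.archive = {}      # backend storage for expired telemetry

    def grant(self, principal, expiry):
        """Issue a credential valid until `expiry`."""
        self.credentials[principal] = expiry

    def read(self, principal, name, now):
        """Credentialed read: reject absent or expired credentials."""
        if now >= self.credentials.get(principal, float("-inf")):
            raise PermissionError("credential absent or rescinded")
        return self.live[name][0]

    def sweep(self, now):
        """Remove telemetry whose availability window has lapsed, archiving
        it to backend storage rather than deleting it outright."""
        for name in list(self.live):
            value, expiry = self.live[name]
            if now >= expiry:
                self.archive[name] = value
                del self.live[name]

access = CredentialedLakeAccess()
access.grant("consumer-A", expiry=100.0)
access.live["edge-1/cpu_util"] = (0.62, 100.0)
reading = access.read("consumer-A", "edge-1/cpu_util", now=50.0)
access.sweep(now=100.0)  # window lapsed: entry moves to the archive
```

Rescission here is implicit (the expiry check in `read`); a fuller design could also delete the credential record during the sweep.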
FIG. 1 depicts an example environment (e.g., a computing environment) 100 including an example cloud environment 105, an example edge environment 110, and an example endpoint environment 115 to schedule, distribute, and/or execute a workload (e.g., one or more computing or processing tasks). In FIG. 1, the cloud environment 105 includes a first example server 112, a second example server 114, a third example server 116, a first instance of an example telemetry controller 130A, and an example database (e.g., a cloud database, a cloud environment database, etc.) 135. In FIG. 1, the cloud environment 105 is an edge cloud environment. For example, the cloud environment 105 may include any suitable number of edge clouds. Alternatively, the cloud environment 105 may include any suitable backend components in a data center, cloud infrastructure, etc. Alternatively, the cloud environment 105 may include fewer or more servers than the servers 112, 114, 116 depicted in FIG. 1. The servers 112, 114, 116 can execute centralized applications (e.g., website hosting, data management, machine learning model applications, responding to requests from client devices, etc.).
In the illustrated example of FIG. 1, the telemetry controller 130A facilitates the generation and/or retrieval of example telemetry data 136A-C associated with at least one of the cloud environment 105, the edge environment 110, or the endpoint environment 115. In FIG. 1, the database 135 includes an example first executable 137 and an example second executable 139. Alternatively, the database 135 may include fewer or more executables than the first executable 137 and the second executable 139. For example, the executables 137, 139 can be telemetry executables that, when executed, generate the telemetry data 136A-C. In examples disclosed herein, the first executable 137 may implement first means for executing. In examples disclosed herein, the second executable 139 may implement second means for executing.
In the illustrated example of FIG. 1, the telemetry data 136A-C includes example first telemetry data 136A, example second telemetry data 136B, and example third telemetry data 136C. In FIG. 1, the first telemetry data 136A and the second telemetry data 136B can be generated by the edge environment 110. In FIG. 1, the third telemetry data 136C can be generated by one or more of the servers 112, 114, 116, the database 135, etc., and/or, more generally, the cloud environment 105.
In the illustrated example of FIG. 1, the servers 112, 114, 116 communicate with devices in the edge environment 110 and/or the endpoint environment 115 via a network such as the Internet. In examples disclosed herein, the servers 112, 114, 116 may communicate with the devices in the edge environment 110 and/or the endpoint environment 115 via WAN links having a variable amount of bandwidth. Additionally, in examples disclosed herein, the servers 112, 114, 116 may communicate example telemetry data (e.g., the telemetry data 136A-C stored locally and/or telemetry data obtained by the telemetry controller 130A) to the devices in the edge environment 110 and/or the endpoint environment 115 via an example priority channel 155. The priority channel 155 is described below.
In the illustrated example of FIG. 1, the cloud environment 105 includes the database 135 to record data (e.g., the telemetry data 136A-C, the executables 137, 139, etc.). In some examples, the database 135 stores information including database records, website requests, machine learning models, and results of executing machine learning models. The example database 135 is a distributed database across all edge platforms, and stores raw and transformed information. The example database 135 can be implemented by a volatile memory (e.g., a Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM), etc.) and/or a non-volatile memory (e.g., flash memory, Intel® Optane® DC Persistent Memory Module (DCPMM™), etc.). The example database 135 can additionally or alternatively be implemented by one or more double data rate (DDR) memories, such as DDR, DDR2, DDR3, DDR4, mobile DDR (mDDR), etc. In some examples disclosed herein, the database 135 may be decentralized in the form of a blockchain, a distributed cluster, a hierarchical service such as Domain Name Service (DNS) and/or DNS-based Authentication of Named Entities (DANE), an information-centric network (ICN), a named data network (NDN), a named function network (NFN), a content delivery network (CDN), an IoT network, and/or an industrial IoT (IIoT) network that implements a discovery mechanism for IoT resources. Further, the database 135 may be structured to associate data from participants to establish a view of telemetry data through the lens of the aggregation function. In examples disclosed herein, the aggregation function may be an artificial intelligence (AI) analytics engine or a complex stochastic analysis (e.g., a Monte Carlo simulation).
The example database 135 can additionally or alternatively be implemented by one or more mass storage devices such as hard disk drive(s), compact disk drive(s), digital versatile disk drive(s), solid-state disk drive(s), etc. While in the illustrated example of FIG. 1 the database 135 is illustrated as a single database, the example database 135 can be implemented by any number and/or type(s) of databases. Furthermore, the data stored in the database 135 can be in any data format such as, for example, binary data, comma delimited data, tab delimited data, structured query language (SQL) structures, etc. Likewise, the example database 135 can provide and/or store data records in response to requests from devices in the cloud environment 105, the edge environment 110, and/or the endpoint environment 115.
In the illustrated example of FIG. 1, the edge environment 110 includes an example first edge platform 140 and an example second edge platform 150. In the illustrated example of FIG. 1, the first and second edge platforms 140, 150 are edge-computing platforms or platform services. For example, the edge platforms 140, 150 can include hardware and/or software resources, virtualizations of the hardware and/or software resources, container-based computational resources, etc., and/or a combination thereof. In such examples, the edge platforms 140, 150 can execute a workload obtained from an edge or endpoint device as illustrated in the example of FIG. 1. Further, the example edge platforms 140, 150 are structured to obtain and broker consumer wish lists (e.g., telemetry wish lists).
In examples disclosed herein, the edge environment 110 is formed from network components and functional features operated by and within the edge platforms 140, 150. The edge environment 110 may be implemented as any type of network that provides edge computing and/or storage resources which are proximately located to radio access network (RAN) capable endpoint devices (e.g., mobile computing devices, IoT devices, smart devices, etc.), which are shown in FIG. 1 as the endpoint devices 160, 165, 170, 175, 180, 185. In other words, the edge environment 110 may be envisioned as an "edge" which connects the endpoint devices and traditional network access points that serve as an ingress point into service provider core networks, including mobile carrier networks (e.g., Global System for Mobile Communications (GSM) networks, Long-Term Evolution (LTE) networks, 5G/6G networks, etc.), while also providing storage and/or compute capabilities. Other types and forms of network access (e.g., Wi-Fi, long-range wireless, wired networks including optical networks) may also be utilized in place of or in combination with such 3GPP carrier networks.
In the illustrated example of FIG. 1, the first edge platform 140 includes a second instance of the telemetry controller 130B, the first executable 137, an example first orchestrator 142, an example first scheduler 144, an example first edge platform (EP) database 148, example first resource(s) 149, and an example first security controller 161. In FIG. 1, the first EP database 148 includes the first telemetry data 136A. In examples disclosed herein, the first EP database 148 is a local database, shard, slice, replica, and/or partition of a distributed database local to the edge platform 140. Further, the example telemetry data 136A is telemetry data local to the edge platform 140. Further description of a distributed database structured to assist in brokering telemetry data is provided below in connection with the telemetry controllers 130A, 130B, 130C and/or FIG. 2.
In the illustrated example of FIG. 1, the second edge platform 150 includes a third instance of the telemetry controller 130C, the second executable 139, an example second orchestrator 152, an example second scheduler 154, an example second EP database 158, example second resource(s) 159, and an example second security controller 162. In examples disclosed herein, the second EP database 158 is a local database to the edge platform 150. Further, the example telemetry data 136B is telemetry data local to the edge platform 150. Further description of a distributed database configured to assist in brokering telemetry data is provided below in connection with the telemetry controllers 130A, 130B, 130C and/or FIG. 2.
In the illustrated example of FIG. 1, the edge platforms 140, 150 include the EP databases 148, 158 to record local data (e.g., the first telemetry data 136A, the second telemetry data 136B, etc.) and/or cache data stored in the EP databases 148, 158. The EP databases 148, 158 can be implemented by a volatile memory (e.g., an SDRAM, DRAM, RDRAM, etc.) and/or a non-volatile memory (e.g., flash memory, Intel® Optane® DC Persistent Memory Module (DCPMM™), etc.). The EP databases 148, 158 can additionally or alternatively be implemented by one or more DDR memories, such as DDR, DDR2, DDR3, DDR4, mDDR, etc. The EP databases 148, 158 can additionally or alternatively be implemented by one or more mass storage devices such as hard disk drive(s), compact disk drive(s), digital versatile disk drive(s), solid-state disk drive(s), etc. While in the illustrated example the EP databases 148, 158 are illustrated as single databases, the EP databases 148, 158 can be implemented by any number and/or type(s) of databases. Furthermore, the data stored in the EP databases 148, 158 can be in any data format such as, for example, binary data, comma delimited data, tab delimited data, SQL structures, etc. In examples disclosed herein, the EP databases 148, 158 are a means for storing or a storing means, which are hardware.
In some examples disclosed herein, the telemetry controllers 130A-C, the first edge platform 140, and/or the second edge platform 150 may be arranged in a hierarchical form (e.g., a directed graph of nodes) such that a second-tier telemetry controller may be connected to a first-tier telemetry controller and a third-tier telemetry controller may be connected to the second-tier telemetry controller. In this manner, the telemetry data contributed by a hierarchy or directed graph of nodes can be aggregated into a next tier. In such an example, a distributed database may exist at each of the tiers (e.g., the first tier, the second tier, and/or the third tier). In examples disclosed herein, the telemetry controller 130A-C is a means for telemetry controlling or a telemetry controlling means, which is hardware.
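By way of illustration only, the tiered aggregation described above may be sketched as follows, where the class name `TelemetryNode`, the field names, the sample values, and the use of summation as the aggregation function are hypothetical assumptions and not part of the disclosed structures:

```python
# Illustrative sketch only: tiered telemetry aggregation, where each node
# combines its local samples with the aggregates reported by the tier below.
from dataclasses import dataclass, field

@dataclass
class TelemetryNode:
    name: str
    local_samples: list = field(default_factory=list)
    children: list = field(default_factory=list)

    def aggregate(self) -> float:
        # Aggregate local telemetry plus each lower-tier aggregate.
        return sum(self.local_samples) + sum(c.aggregate() for c in self.children)

# Three tiers: edge platforms (lowest tier) feed a mid-tier telemetry
# controller, which feeds a top-tier telemetry controller.
edge_a = TelemetryNode("edge_platform_140", [0.25, 0.50])  # e.g., core utilizations
edge_b = TelemetryNode("edge_platform_150", [0.75])
tier2 = TelemetryNode("mid_tier", [], [edge_a, edge_b])
tier1 = TelemetryNode("top_tier", [], [tier2])
print(tier1.aggregate())  # 1.5
```

In a deployed system the aggregation function could instead be the AI analytics engine or stochastic analysis mentioned above; summation merely keeps the tier-by-tier flow visible.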
In the example illustrated in FIG. 1, the telemetry controller 130B, the first orchestrator 142, the first scheduler 144, the first resource(s) 149, and the first security controller 161 are included in, correspond to, and/or otherwise is/are representative of the first edge platform 140. However, in some examples, one or more of the telemetry controller 130B, the first orchestrator 142, the first scheduler 144, the first resource(s) 149, and/or the first security controller 161 can be included in the edge environment 110 separate from the first edge platform 140. For example, the first orchestrator 142 can be connected to the cloud environment 105 and/or the endpoint environment 115 while being outside of the first edge platform 140. In other examples, one or more of the telemetry controller 130B, the first orchestrator 142, the first scheduler 144, the first resource(s) 149, and/or the first security controller 161 is/are separate devices included in the edge environment 110. Further, one or more of the telemetry controller 130B, the first orchestrator 142, the first scheduler 144, the first resource(s) 149, and/or the first security controller 161 can be included in the cloud environment 105 or the endpoint environment 115. For example, the first orchestrator 142 can be included in the endpoint environment 115. In some examples, the first scheduler 144 can be included in and/or otherwise integrated or combined with the first orchestrator 142.
In the example illustrated in FIG. 1, the telemetry controller 130C, the second orchestrator 152, the second scheduler 154, the second resource(s) 159, and the second security controller 162 are included in the second edge platform 150. However, in some examples, one or more of the telemetry controller 130C, the second orchestrator 152, the second scheduler 154, the second resource(s) 159, and/or the second security controller 162 can be included in the edge environment 110 separate from the second edge platform 150. For example, the second orchestrator 152 can be connected to the cloud environment 105 and/or the endpoint environment 115 while being outside of the second edge platform 150. In other examples, one or more of the telemetry controller 130C, the second orchestrator 152, the second scheduler 154, the second resource(s) 159, and/or the second security controller 162 is/are separate devices included in the edge environment 110. Further, one or more of the telemetry controller 130C, the second orchestrator 152, the second scheduler 154, the second resource(s) 159, and/or the second security controller 162 can be included in the cloud environment 105. For example, the second orchestrator 152 can be included in the cloud environment 105. Alternatively, in some examples disclosed herein, one or more of the telemetry controller 130C, the second orchestrator 152, the second scheduler 154, the second resource(s) 159, and/or the second security controller 162 can be included in the endpoint environment 115. For example, the second orchestrator 152 can be included in the endpoint environment 115. In some examples, the second scheduler 154 can be included in and/or otherwise integrated or combined with the second orchestrator 152.
In examples disclosed herein, the telemetry controller 130A-C is structured to establish the priority channel 155. In examples disclosed herein, the priority channel 155 is an example virtual computing channel structured to transmit telemetry data obtained by the telemetry controller 130A-C (e.g., telemetry data obtained responsive to a wish list) in a safe and efficient manner. In some examples disclosed herein, the telemetry data 136A-C (e.g., the local telemetry data) may be transmitted via the priority channel 155. The priority channel 155 is described below in connection with FIG. 2.
In the illustrated example of FIG. 1, the resources 149, 159 are invoked to execute a workload (e.g., an edge computing workload) obtained from the endpoint environment 115. For example, the resources 149, 159 can correspond to and/or otherwise be representative of an edge node or portion(s) thereof. For example, the telemetry controllers 130B-C, the executables 137, 139, the orchestrators 142, 152, the schedulers 144, 154, and/or, more generally, the edge platforms 140, 150 can invoke a respective one of the resources 149, 159 to execute one or more edge-computing workloads. In examples disclosed herein, the resources 149, 159 are a first means for resource invoking or a first resource invoking means, which are hardware.
In some examples, the resources 149, 159 are representative of hardware resources, virtualizations of the hardware resources, software resources, virtualizations of the software resources, etc., and/or a combination thereof. For example, the resources 149, 159 can include, correspond to, and/or otherwise be representative of one or more CPUs (e.g., multi-core CPUs), one or more FPGAs, one or more GPUs, one or more network interface cards (NICs), one or more vision processing units (VPUs), etc., and/or any other type of hardware or hardware accelerator. In such examples, the resources 149, 159 can include, correspond to, and/or otherwise be representative of virtualization(s) of the one or more CPUs, the one or more FPGAs, the one or more GPUs, the one or more NICs, etc. In other examples, the orchestrators 142, 152, the schedulers 144, 154, the resources 149, 159, and/or, more generally, the edge platforms 140, 150 can include, correspond to, and/or otherwise be representative of one or more software resources, virtualizations of the software resources, etc., such as hypervisors, load balancers, OSes, VMs, etc., and/or a combination thereof.
In the illustrated example of FIG. 1, the edge platforms 140, 150 are connected to and/or otherwise in communication with each other and with the servers 112, 114, 116 in the cloud environment 105. The edge platforms 140, 150 can execute workloads on behalf of devices associated with the cloud environment 105, the edge environment 110, or the endpoint environment 115. The edge platforms 140, 150 can be connected to and/or otherwise in communication with devices in the environments 105, 110, 115 (e.g., the first server 112, the database 135, etc.) via a network such as the Internet (e.g., a carrier-grade internet using, for example, MQTT and IPv4 and/or a public internet using, for example, MQTT and IPv6). Additionally or alternatively, the edge platforms 140, 150 can communicate with devices in the environments 105, 110, 115 using any suitable wireless network including, for example, one or more wireless local area networks (WLANs), one or more cellular networks, one or more peer-to-peer networks (e.g., a Bluetooth network, a Wi-Fi Direct network, a vehicle-to-everything (V2X) network, etc.), one or more private networks, one or more public networks, etc. For example, the edge platforms 140, 150 can be connected to a cell tower included in the cloud environment 105 and connected to the first server 112 via the cell tower.
In the illustrated example of FIG. 1, the security controllers 161, 162 determine whether the resource(s) 149, 159 can be made discoverable to a workload and whether an edge platform (e.g., the edge platforms 140, 150) is sufficiently trusted to have a workload assigned to it. In some examples, the security controllers 161, 162 negotiate key exchange protocols (e.g., TLS, etc.) with a workload source (e.g., an endpoint device, a server, an edge platform, etc.) to establish a secure connection between the security controller and the workload source. In some examples, the security controllers 161, 162 perform cryptographic operations and/or algorithms (e.g., signing, verifying, generating a digest, encryption, decryption, random number generation, secure time computations, or any other cryptographic operations).
The example security controllers 161, 162 may include a hardware root of trust (RoT). The hardware RoT is a system on which secure operations of a computing system, such as an edge platform, depend. The hardware RoT provides an attestable device (e.g., edge platform) identity feature, where such a device identity feature is utilized in a security controller (e.g., the security controllers 161, 162). The device identity feature attests the firmware, software, and hardware implementing the security controller (e.g., the security controllers 161, 162). For example, the device identity feature generates and provides a digest (e.g., a result of a hash function) of the software layers between the security controllers 161, 162 and the hardware RoT to a verifier (e.g., a different edge platform than the edge platform including the security controller). The verifier verifies that the hardware RoT, firmware, software, etc. are trustworthy (e.g., not having vulnerabilities, on a whitelist, not on a blacklist, etc.).
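A minimal sketch of such a layered digest follows, for illustration only; the layer contents, the chaining order, and the choice of SHA-256 are assumptions and not part of the disclosure:

```python
# Illustrative sketch only: fold a digest over the software layers between a
# hardware RoT and the security controller, then compare it at a verifier.
import hashlib

def layered_digest(layers):
    # Fold each layer's measurement into a running digest, starting from
    # the layer closest to the hardware RoT.
    digest = hashlib.sha256(b"hardware-rot")
    for layer in layers:
        digest = hashlib.sha256(digest.digest() + hashlib.sha256(layer).digest())
    return digest.hexdigest()

# The attesting platform computes the digest over its boot chain...
layers = [b"firmware-v1", b"bootloader-v2", b"os-v3", b"security-controller-v1"]
reported = layered_digest(layers)

# ...and the verifier compares it against a known-good (whitelisted) value.
known_good = layered_digest(layers)  # e.g., from a reference manifest
assert reported == known_good                                      # trustworthy
assert layered_digest([b"firmware-TAMPERED"] + layers[1:]) != known_good
```

Because each layer's hash feeds the next, any modified layer changes the final digest, which is what lets the verifier detect an untrustworthy stack.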
In some examples, the security controllers 161, 162 store cryptographic keys (e.g., a piece of information that determines the functional output of a cryptographic algorithm, such as by specifying the transformation of plaintext into ciphertext) that may be used to securely interact with other edge platforms during verification. In some examples, the security controllers 161, 162 store policies corresponding to the intended use of the security controllers 161, 162. In some examples, the security controllers 161, 162 receive and verify edge platform security and/or authentication credentials (e.g., access control, single-sign-on tokens, tickets, and/or certificates) from other edge platforms to authenticate those other edge platforms, or respond to an authentication challenge from other edge platforms. In examples disclosed herein, the security controllers 161, 162 are a means for security controlling or a security controlling means, which are hardware.
In the illustrated example of FIG. 1, the endpoint environment 115 includes an example first endpoint device 160, an example second endpoint device 165, an example third endpoint device 170, an example fourth endpoint device 175, an example fifth endpoint device 180, and an example sixth endpoint device 185. Alternatively, there may be fewer or more endpoint devices than the endpoint devices 160, 165, 170, 175, 180, 185 depicted in the endpoint environment 115 of FIG. 1.
In the illustrated example of FIG. 1, the endpoint devices 160, 165, 170, 175, 180, 185 are computing, sensing, actuating, displaying, and communicating devices. For example, one or more of the endpoint devices 160, 165, 170, 175, 180, 185 can be an Internet-enabled tablet, a mobile handset (e.g., a smartphone), a watch (e.g., a smartwatch), a fitness tracker, a headset, a vehicle control unit (e.g., an engine control unit, an electronic control unit, etc.), an IoT device, etc. In other examples, one or more of the endpoint devices 160, 165, 170, 175, 180, 185 can be a physical server (e.g., a rack-mounted server, a blade server, etc.).
In the illustrated example of FIG. 1, the first through third endpoint devices 160, 165, 170 are connected to the first edge platform 140. In FIG. 1, the fourth through sixth endpoint devices 175, 180, 185 are connected to the second edge platform 150. Additionally or alternatively, one or more of the endpoint devices 160, 165, 170, 175, 180, 185 may be connected to any number of edge platforms (e.g., the edge platforms 140, 150), servers (e.g., the servers 112, 114, 116), or any other suitable devices included in and/or otherwise associated with the environments 105, 110, 115 of FIG. 1. For example, the first endpoint device 160 can be connected to the edge platforms 140, 150 and to the second server 114.
In the illustrated example of FIG. 1, one or more of the endpoint devices 160, 165, 170, 175, 180, 185 can connect to one or more devices in the environments 105, 110, 115 via a network such as the Internet (e.g., a carrier-grade internet using, for example, MQTT and IPv4 and/or a public internet using, for example, MQTT and IPv6). Additionally or alternatively, one or more of the endpoint devices 160, 165, 170, 175, 180, 185 can communicate with devices in the environments 105, 110, 115 using any suitable wireless network including, for example, one or more WLANs, one or more cellular networks, one or more peer-to-peer networks, one or more private networks, one or more public networks, etc. In some examples, the endpoint devices 160, 165, 170, 175, 180, 185 can be connected to a cell tower included in one of the environments 105, 110, 115. For example, the first endpoint device 160 can be connected to a cell tower included in the edge environment 110, and the cell tower can be connected to the first edge platform 140.
Consistent with the examples provided herein, an endpoint device (e.g., one of the endpoint devices 160, 165, 170, 175, 180, 185) may be implemented as any type of endpoint component, device, appliance, or other thing capable of communicating as a producer or consumer of data. For example, a client compute platform can include a mobile phone, a laptop computer, a desktop computer, a processor platform in an autonomous vehicle, etc. In additional or alternative examples, a client compute platform can include a camera, a sensor, etc. Further, the labels "platform," "node," and/or "device" as used in the environment 100 do not necessarily mean that such a platform, node, and/or device operates in a client or slave role; rather, any of the platforms, nodes, and/or devices in the environment 100 refer to individual entities, platforms, nodes, devices, and/or subsystems which include discrete and/or connected hardware and/or software configurations to facilitate and/or use the edge environment 110.
In some examples, in response to a request to execute a workload from an endpoint device (e.g., the first endpoint device 160), an orchestrator (e.g., the first orchestrator 142) can communicate with at least one resource (e.g., the first resource(s) 149) and an endpoint device (e.g., the second endpoint device 165) to create a contract (e.g., a workload contract) associated with a description of the workload to be executed. The first endpoint device 160 can provide a task associated with the contract and the description of the workload to the first orchestrator 142, and the first orchestrator 142 can provide the task to a scheduler (e.g., the first scheduler 144). In examples disclosed herein, the first orchestrator 142 may utilize telemetry data to identify terms and/or risks in the event telemetry data is not identified and/or to form a graph of telemetry data that can be orchestrated and operated to meet maximum efficiency and effectiveness. The task can include the contract and the description of the workload to be executed. In some examples, the task can include requests to acquire and/or otherwise allocate resources used to execute the workload.
In some examples, the orchestrators 142, 152 maintain records and/or logs of actions occurring in the environments 105, 110, 115. For example, the first resource(s) 149 can notify the first orchestrator 142 of receipt of a workload description. One or more of the orchestrators 142, 152, the schedulers 144, 154, and/or the resource(s) 149, 159 can provide records of actions and/or allocations of resources to the orchestrators 142, 152. For example, the first orchestrator 142 can maintain or store a record of receiving a request to execute a workload (e.g., a contract request provided by the first endpoint device 160). In examples disclosed herein, the orchestrators 142, 152 are a means for orchestrating or an orchestrating means, which are hardware.
In some examples, the schedulers 144, 154 access a task received and/or otherwise obtained by the orchestrators 142, 152 and provide the task to one or more of the resource(s) 149, 159 to execute or complete. The example resource(s) 149, 159 can execute a workload based on a description of the workload included in the task. The example schedulers 144, 154 access a result of the execution of the workload from one or more of the resource(s) 149, 159 that executed the workload. The example schedulers 144, 154 provide the result to the device that requested the workload to be executed, such as the first endpoint device 160. In examples disclosed herein, the schedulers 144, 154 are a means for scheduling or a scheduling means, which are hardware.
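The scheduling flow described above can be sketched, purely for illustration, as follows; the function names and the task fields are hypothetical stand-ins for the disclosed components:

```python
# Illustrative sketch only: a scheduler hands a task (contract plus workload
# description) to a resource, then returns the result to the requester.
def resource_execute(task):
    # The resource executes the workload based on its description.
    return "result-of-" + task["description"]

def schedule(task, resources, reply_to):
    result = resources[0](task)  # provide the task to a resource to execute
    reply_to(result)             # provide the result to the requesting device

replies = []  # stands in for the requesting endpoint device
schedule(
    {"contract": "contract-1", "description": "resize-image"},
    [resource_execute],
    replies.append,
)
print(replies)  # ['result-of-resize-image']
```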
Advantageously, an execution of a workload in the edge environment 110 can reduce costs (e.g., compute or computation costs, network costs, storage costs, etc., and/or a combination thereof) and/or processing time used to execute the workload. For example, the first endpoint device 160 can request the first edge platform 140 to execute a workload at a first cost lower than a second cost associated with executing the workload in the cloud environment 105. In other examples, an endpoint device, such as the first through third endpoint devices 160, 165, 170, can be nearer to (e.g., spatially or geographically closer, fewer network hops, etc.) and/or otherwise proximate to an edge platform, such as the first edge platform 140, than a centralized server (e.g., the servers 112, 114, 116) in the cloud environment 105. For example, the first edge platform 140 is spatially closer to the first endpoint device 160 than the first server 112 is. As a result, the first endpoint device 160 can request the first edge platform 140 to execute a workload, and the response time of the first edge platform 140 to deliver the executed workload result is lower than that which can be provided by the first server 112 in the cloud environment 105.
In the illustrated example of FIG. 1, the telemetry controller 130A-C improves the distribution and execution of edge computing workloads (e.g., among the edge platforms 140, 150) based on the telemetry data 136A-C associated with at least one of the cloud environment 105, the edge environment 110, or the endpoint environment 115. While the telemetry controller 130A-C is illustrated as separate telemetry controllers 130A-C in the cloud environment 105 and the edge environment 110, in examples disclosed herein, the telemetry controller 130A-C may be implemented as a distributed telemetry controller accessible in the cloud environment 105 and/or the edge environment 110. In some examples, the telemetry controller 130A-C is implemented utilizing separate hardware on each of the edge platforms 140, 150 and/or the server 112. In such examples, though the hardware utilized to implement the telemetry controller 130A-C is different, the telemetry controller 130A-C remains logically consistent across the edge platforms 140, 150 and/or the server 112. In other examples disclosed herein, the telemetry controller 130A-C is implemented as a service across the edge platforms 140, 150 and/or the server 112. In such an example, the telemetry controller 130A-C is implemented using hardware and deployed as a logically consistent service across the edge platforms 140, 150 and/or the server 112. In examples disclosed herein, the telemetry controller 130A-C may respond to an example wish list (e.g., a telemetry wish list) obtained from a consumer (e.g., a user of telemetry data, another edge platform, and/or any suitable computing device) by coordinating with the other telemetry controller(s) 130A-C to execute the wish list (e.g., the telemetry wish list). In this manner, the telemetry controller 130A-C, being distributed, is structured to broker commitments and task execution among the cloud environment 105 and/or the edge environment 110 to execute the wish list (e.g., the telemetry wish list).
The telemetry controller 130A-C is described in further detail below in connection with FIG. 2.
In some examples, the first telemetry executable 137, when executed, generates the first telemetry data 136A. In some examples, the second telemetry executable 139, when executed, generates the second telemetry data 136B. In example operation, the first edge platform 140 can invoke a first composition(s), and/or, more generally, the first telemetry executable 137, to determine, generate, and/or obtain the first telemetry data 136A. For example, the first resource(s) 149 can include hardware resources that can be used for edge computing tasks by the endpoint devices 160, 165, 170, 175, 180, 185, where the hardware resources can include at least a multi-core CPU and a solid-state disk (SSD) drive. In such examples, the first compositions can include at least a first resource model corresponding to a core of the multi-core CPU and a second resource model corresponding to a partition of the SSD drive. The first compositions can determine the first telemetry data 136A, such as a quantity of gigahertz (GHz) at which the core can execute, a utilization of the core (e.g., the core is 25% utilized, 50% utilized, 75% utilized, etc.), a quantity of gigabytes (GB) of the SSD partition, a utilization of the SSD partition, etc., and/or a combination thereof.
In example operation, the second edge platform 150 can invoke a second composition(s), and/or, more generally, invoke the second telemetry executable 139, to determine, generate, and/or obtain the second telemetry data 136B. For example, the second resource(s) 159 can include hardware resources that can be used for edge computing tasks by the endpoint devices 160, 165, 170, 175, 180, 185, where the hardware resources can include at least a multi-core CPU and an SSD drive. In such examples, the second compositions can include at least a first resource model corresponding to a core of the multi-core CPU and a second resource model corresponding to a partition of the SSD drive. The second compositions can determine the second telemetry data 136B, such as a quantity of GHz at which the core can execute, a utilization of the core (e.g., the core is 25% utilized, 50% utilized, 75% utilized, etc.), a quantity of GB of the SSD partition, a utilization of the SSD partition, etc., and/or a combination thereof.
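For illustration only, the resource models and compositions described above may be sketched as simple telemetry producers; the class names `CoreModel` and `SsdPartitionModel`, the dictionary keys, and the sample values are hypothetical:

```python
# Illustrative sketch only: a composition of resource models (a CPU core and
# an SSD partition) producing telemetry data such as frequency, size, and
# utilization, as in the telemetry data 136A-B described above.
from dataclasses import dataclass

@dataclass
class CoreModel:
    freq_ghz: float    # quantity of GHz at which the core can execute
    utilization: float  # 0.0 - 1.0 (e.g., 0.25 for 25% utilized)

    def telemetry(self):
        return {"core_freq_ghz": self.freq_ghz, "core_util": self.utilization}

@dataclass
class SsdPartitionModel:
    size_gb: int        # quantity of GB of the SSD partition
    utilization: float

    def telemetry(self):
        return {"ssd_size_gb": self.size_gb, "ssd_util": self.utilization}

def telemetry_executable(models):
    # A composition merges the telemetry determined by its resource models.
    out = {}
    for model in models:
        out.update(model.telemetry())
    return out

data = telemetry_executable([CoreModel(2.4, 0.50), SsdPartitionModel(256, 0.25)])
print(data)
# {'core_freq_ghz': 2.4, 'core_util': 0.5, 'ssd_size_gb': 256, 'ssd_util': 0.25}
```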
FIG. 2 depicts an example block diagram illustrating an example implementation of the telemetry controller 130A-C of FIG. 1. In FIG. 2, the telemetry controller 130A-C includes an example naming layer 202 including an example source manager 204 and an example directory 206, an example distributed metadata lake 208, an example credential manager 210, an example communication interface 212, and an example service manager 214. In the example of FIG. 2, the service manager 214 includes an example publisher 216, an example commitment manager 218, an example extractor 220, an example collector 222, an example indexer 224, and an example commitment determiner 226. In examples disclosed herein, the naming layer 202 is a means for identifying or an identifying means, which is hardware. In examples disclosed herein, the source manager 204 is a means for source managing, or a source managing means, which is hardware. In examples disclosed herein, the directory 206 is a means for identification information storing, or an identification information storing means, which is hardware. In examples disclosed herein, the distributed metadata lake 208 is a means for telemetry data storing, or a telemetry data storing means, which is hardware. In examples disclosed herein, the credential manager 210 is a means for credential managing, or a credential managing means. In examples disclosed herein, the communication interface 212 is a means for establishing, or an establishing means, which is hardware. In examples disclosed herein, the service manager 214 is a means for managing, or a managing means, which is hardware. In examples disclosed herein, the publisher 216 is a means for publishing, or a publishing means, which is hardware. In examples disclosed herein, the commitment manager 218 is a means for commitment managing, or a commitment managing means, which is hardware. In examples disclosed herein, the extractor 220 is a means for extracting telemetry data, or a telemetry data extracting means, which is hardware.
In examples disclosed herein, the collector 222 is a means for collecting telemetry data, or a telemetry data collecting means, which is hardware. In examples disclosed herein, the indexer 224 is a means for indexing telemetry data, or a telemetry data indexing means, which is hardware. In examples disclosed herein, the commitment determiner 226 is a means for determining, or a determining means, which is hardware.
In the example illustrated in FIG. 2, the naming layer 202 includes the source manager 204 and the directory 206. In operation, the naming layer 202 acts as a redirection point between edge platforms (e.g., the edge platforms 140, 150 of FIG. 1) in the edge environment 110 (FIG. 1), the cloud environment 105 (FIG. 1), and/or the endpoint environment 115 (FIG. 1). Further, the naming layer 202 forms a telemetry cataloging service. The source manager 204 and the directory 206 are described in further detail below.
In the example illustrated in FIG. 2, the source manager 204 is structured to identify participants (e.g., the servers 112, 114, 116) in the cloud environment 105 and/or edge platforms in the edge environment 110 (e.g., the edge platforms 140, 150). Additionally or alternatively, the source manager 204 may identify participants among the endpoint devices in the endpoint environment 115 (e.g., the endpoint devices 160, 165, 170, 175, 180, 185) of FIG. 1. Further, the source manager 204 is structured to store identification information of the participants (e.g., the servers 112, 114, 116) in the cloud environment 105 and/or the edge platforms in the edge environment 110 (e.g., the edge platforms 140, 150) of FIG. 1 in the directory 206. In examples disclosed herein, participants (e.g., the servers 112, 114, 116) in the cloud environment 105 and/or edge platforms in the edge environment 110 (e.g., the edge platforms 140, 150) may refer to any computing environment and/or component that generates telemetry data. For example, a CPU core may use performance counters and/or monitor parameters as telemetry data and, thus, be identified by the source manager 204. In another example, an operating system may be instrumented with audit log criteria that record various events that may have security, safety, performance, or chain-of-custody reasons for creating a log entry and, thus, be identified by the source manager 204. In another example, the source manager 204 may identify any device that enters and/or exits a special state (e.g., a secure mode, a boot state, etc.) as a device capable of producing telemetry data. In yet another example, the source manager 204 may identify sensors (e.g., temperature sensors, time sensors, location sensors, direction sensors, etc.) as devices capable of producing telemetry data.
Additionally or alternatively, the source manager 204 may store identification information of the endpoint devices in the endpoint environment 115 (e.g., the endpoint devices 160, 165, 170, 175, 180, 185) of FIG. 1 in the directory 206. In the event identification information regarding the participants (e.g., the servers 112, 114, 116) in the cloud environment 105 and/or the edge platforms in the edge environment 110 (e.g., the edge platforms 140, 150) of FIG. 1 is not known, the source manager 204 may name new participants (e.g., generate a new name for an unknown participant). Additionally or alternatively, the source manager 204 may name new endpoint devices in the endpoint environment 115 (e.g., the endpoint devices 160, 165, 170, 175, 180, 185) of FIG. 1 in the event identification information is not known. In some examples, the source manager 204 may obtain the names of the participants (e.g., the servers 112, 114, 116) in the cloud environment 105 and/or the edge platforms in the edge environment 110 (e.g., the edge platforms 140, 150) via the directory 206. Additionally or alternatively, the source manager 204 may obtain the names of the endpoint devices in the endpoint environment 115 (e.g., the endpoint devices 160, 165, 170, 175, 180, 185) of FIG. 1 via the directory 206. In some examples disclosed herein, the source manager 204 may discover sources of telemetry data. In this manner, the source manager 204 may include a discovery sub-system capable of analyzing a discovery protocol (e.g., a broadcast, a multicast discovery protocol, consulting a directory node to identify the existence of telemetry sources, etc.) related to the unknown sources of telemetry data.
In examples disclosed herein, the source manager 204 is structured to link the names of the participants (e.g., the servers 112, 114, 116) in the cloud environment 105 and/or the edge platforms in the edge environment 110 (e.g., the edge platforms 140, 150) to the directory 206. Additionally or alternatively, the source manager 204 may link the names of the endpoint devices in the endpoint environment 115 (e.g., the endpoint devices 160, 165, 170, 175, 180, 185) of FIG. 1 to the directory 206. Additionally, responsive to a received commitment, the source manager 204 is structured to link the name of the committed device to the directory 206.
Further, in examples disclosed herein, the source manager 204 determines whether a wish list (e.g., telemetry wish list) is obtained. For example, the source manager 204 may determine whether a wish list (e.g., telemetry wish list) is obtained from a consumer of any of the participants (e.g., the servers 112, 114, 116) in the cloud environment 105 and/or edge platforms in the edge environment 110 (e.g., the edge platforms 140, 150) to identify the potential participants (e.g., the participants in the edge cloud). Additionally or alternatively, the source manager 204 may determine whether a wish list (e.g., telemetry wish list) is obtained from a consumer of the endpoint devices in the endpoint environment 115 (e.g., the endpoint devices 160, 165, 170, 175, 180, 185) of FIG. 1 to identify the potential participants (e.g., the participants in the edge cloud).
In some examples disclosed herein, the source manager 204 may implement networking concepts to enable sending an interest packet to components in the edge environment 110. In this manner, the source manager 204 may obtain routes (e.g., responses) from the components to provide telemetry data. The source manager 204 can then read and/or otherwise analyze the telemetry data using the supplied route. Furthermore, a routing component may cache the telemetry data such that a second source manager in the same edge environment 110 can utilize the cached telemetry data.
The directory 206 is structured to store data corresponding to both the names of telemetry data items (TDIs) and components (e.g., any of the participants (e.g., the servers 112, 114, 116) in the cloud environment 105, edge platforms in the edge environment 110 (e.g., the edge platforms 140, 150), and/or endpoint devices in the endpoint environment 115 (e.g., the endpoint devices 160, 165, 170, 175, 180, 185) of FIG. 1). In examples disclosed herein, TDIs refer to the atomic data that can be collected regarding a component. Further, in examples disclosed herein, components may refer to participants (e.g., the servers 112, 114, 116) in the cloud environment 105 and/or edge platforms in the edge environment 110 (e.g., the edge platforms 140, 150) of FIG. 1. Additionally or alternatively, components may refer to endpoint devices in the endpoint environment 115 (e.g., the endpoint devices 160, 165, 170, 175, 180, 185) of FIG. 1. Components may register their particular TDI values so that a telemetry consumer of a TDI value can identify the components that may be capable of producing that TDI value. Likewise, the telemetry consumer can efficiently identify TDI values that are not expected for a given component (e.g., if that component has not registered particular TDI values). In examples disclosed herein, a TDI value includes metadata about its context.
In the event a new component is admitted into the edge environment 110 (e.g., via an Admission Control or suitable onboarding process), the component name is added to the directory 206 by the source manager 204. In examples disclosed herein, the directory 206 may be decentralized in the form of a blockchain, a distributed cluster, and/or a hierarchical service such as Domain Name Service (DNS) and/or DNS-based Authentication of Named Entities (DANE). In examples disclosed herein, names of the components may be stored in the directory 206 in the form of sub-components responsive to static and/or dynamically applied configuration management techniques. For example, a rackscale server may be reconfigured with blades containing storage, processing, acceleration, networking, etc. In such an example, the rackscale server is identified in the directory 206 as a component that includes storage, processing, acceleration, and networking sub-components. In examples disclosed herein, sub-components of a component may be further decomposed into additional sub-components. In examples disclosed herein, the directory 206 stores data relating to elemental aspects of the edge environment 110 (e.g., CPU utilization aspects, CPU cycles, number of packets sent, etc.).
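For illustration purposes only, the directory structure described above, in which components register sub-components and the TDI values they can produce, might look like the following. The component and TDI names are hypothetical examples chosen to match the rackscale illustration:

```python
class Directory:
    """Illustrative sketch of the directory 206: components, their
    sub-components, and the TDI values each component registers."""

    def __init__(self):
        self.components = {}  # name -> {"tdis": set, "subcomponents": list}

    def add_component(self, name, subcomponents=()):
        # Admit a new component, optionally decomposed into sub-components.
        self.components[name] = {"tdis": set(),
                                 "subcomponents": list(subcomponents)}

    def register_tdi(self, component, tdi):
        # A component registers a TDI value it is capable of producing.
        self.components[component]["tdis"].add(tdi)

    def producers_of(self, tdi):
        # A telemetry consumer identifies components that may produce a TDI;
        # components that never registered the TDI are efficiently excluded.
        return [name for name, info in self.components.items()
                if tdi in info["tdis"]]

d = Directory()
d.add_component("rackscale-server",
                subcomponents=["storage", "processing",
                               "acceleration", "networking"])
d.register_tdi("rackscale-server", "cpu_utilization")
```

Querying `producers_of("cpu_utilization")` returns the rackscale server, while a TDI no component registered returns an empty list, mirroring how unexpected TDI values are identified for a given component.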
In the example illustrated in FIG. 2, the distributed metadata lake 208 is a distributed database that stores both raw information (e.g., directly measured TDI values from components) and/or filtered information from the service manager 214. In some examples, the distributed metadata lake 208 is referred to herein as a distributed database 208, a distributed data lake 208, etc. Further, the distributed database 208 is structured to associate data from participants to establish a view of telemetry data through the lens of an aggregation function. In examples disclosed herein, the aggregation function may be an artificial intelligence (AI) analytics engine or a complex stochastic analysis (e.g., a Monte Carlo simulation). In an example operation, the distributed database 208 is structured to store data obtained from the service manager 214 based on a series of commitments made regarding the wish list (e.g., telemetry wish list). For example, a device in the service manager 214 (e.g., the extractor 220, the collector 222, and/or the indexer 224) may commit to obtaining, producing, filtering, extracting, and/or otherwise manipulating telemetry data to be stored in the distributed database 208. In this manner, the contents (e.g., telemetry data) stored in the distributed database 208 may be subject to removal upon the expiration of the commitment. In examples disclosed herein, the distributed database 208 is updated continuously and, thus, telemetry data may be transmitted in a timely manner. Further, the telemetry data stored in the distributed database 208 may be lossy and/or inconsistent. In this manner, the telemetry data stored in the distributed database 208 may be timestamped to ensure that the most recent telemetry data is utilized. Such examples enable accurate interpolation and/or extrapolation to obtain any possibly missing intermediate and/or current versions of telemetry data during all stages of processing.
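For illustration purposes only, the timestamp-and-expiry behavior described above can be sketched as follows. The entry fields (`tdi`, `timestamp`, `expires_at`) are assumptions for illustration, not part of the disclosure:

```python
import time

class MetadataLake:
    """Illustrative sketch of the distributed database 208: entries are
    timestamped, and contents are subject to removal when the commitment
    under which they were stored expires."""

    def __init__(self):
        self.entries = []

    def store(self, tdi, value, expires_at, now=None):
        self.entries.append({"tdi": tdi, "value": value,
                             "timestamp": now if now is not None else time.time(),
                             "expires_at": expires_at})

    def evict_expired(self, now):
        # Remove contents whose commitment time period has expired.
        self.entries = [e for e in self.entries if e["expires_at"] > now]

    def latest(self, tdi):
        # Timestamps ensure the most recent telemetry data is utilized,
        # even when the stored data is lossy and/or inconsistent.
        matching = [e for e in self.entries if e["tdi"] == tdi]
        if not matching:
            return None
        return max(matching, key=lambda e: e["timestamp"])["value"]

lake = MetadataLake()
lake.store("cpu_utilization", 0.42, expires_at=100, now=10)
lake.store("cpu_utilization", 0.55, expires_at=100, now=20)
lake.store("temperature", 71.0, expires_at=15, now=5)
lake.evict_expired(now=30)  # the temperature commitment has expired
```

After eviction, only the telemetry covered by an unexpired commitment remains, and a consumer asking for CPU utilization receives the most recently timestamped value.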
In examples disclosed herein, the credential manager 210 is structured to determine whether credentials are needed for components when accessing data in the distributed database 208. In the event the credential manager 210 determines credentials are needed, the credential manager 210 generates credentials for each component when accessing data in the distributed database 208 to provide credentialed access. For example, because the edge environment 110 includes heterogeneous edge platforms (e.g., the edge platforms 140, 150), the credential manager 210 ensures that the edge platforms (e.g., the edge platforms 140, 150) that are to access data stored in the distributed database 208 can access such data. For example, telemetry data from a first edge platform (e.g., the edge platform 140) may be accessible by a second edge platform (e.g., the edge platform 150) but not by a third edge platform (e.g., a third edge platform in the edge environment 110). The credential manager 210 identifies credentials of each of the edge platforms (e.g., the edge platforms 140, 150). In examples disclosed herein, credentials of an edge platform (e.g., the edge platforms 140, 150) may be generated responsive to a wish list (e.g., telemetry wish list). For example, if a first edge platform (e.g., the edge platform 140) requests telemetry data regarding a first computing device and a second computing device, the credential manager 210 may generate credentials enabling the first edge platform (e.g., the edge platform 140) to access the telemetry data associated with the first computing device and the second computing device.
Furthermore, the credential manager 210 may issue credentials to edge platforms (e.g., the edge platforms 140, 150) corresponding to different levels of privilege. For example, in the event information is available to one edge platform (e.g., the edge platform 140) at a detailed level (e.g., power and thermal information about a node), such an edge platform (e.g., the edge platform 140) may receive credentials corresponding to a high privilege. Further in such an example, such information may be available to a second edge platform (e.g., the edge platform 150) in a relativized and/or aggregated form. Thus, such a second edge platform (e.g., the edge platform 150) may receive credentials corresponding to a lower privilege. Once a credential is provided to an edge platform (e.g., the edge platforms 140, 150) by the credential manager 210, such an edge platform (e.g., the edge platforms 140, 150) may utilize the existing credential to speed up access to data in the event the edge platform (e.g., the edge platforms 140, 150) is to access the same data.
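For illustration purposes only, the privilege-level behavior described above might be sketched as follows. The privilege labels ("detailed", "aggregated") and platform names are hypothetical:

```python
class CredentialManager:
    """Illustrative sketch of the credential manager 210: issue credentials
    per platform at a given privilege level, and let a platform reuse an
    existing credential on repeat access to the same data."""

    def __init__(self):
        self.issued = {}  # platform -> set of (data_id, privilege)

    def issue(self, platform, data_id, privilege):
        self.issued.setdefault(platform, set()).add((data_id, privilege))

    def can_access(self, platform, data_id, privilege):
        # Reusing an already-issued credential avoids re-issuance and
        # speeds up access to the same data.
        return (data_id, privilege) in self.issued.get(platform, set())

cm = CredentialManager()
cm.issue("edge-platform-140", "node-thermal", "detailed")    # high privilege
cm.issue("edge-platform-150", "node-thermal", "aggregated")  # lower privilege
```

Here the first platform can read the detailed node thermal data while the second holds only the lower-privilege, aggregated view, mirroring the two privilege levels in the example above.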
In examples disclosed herein, the credential manager 210 establishes components as either a source of telemetry data or a destination of telemetry data. In such a manner, the sources and/or destinations of telemetry data can, responsive to credentials being issued, communicate securely (e.g., with integrity and/or confidentiality).
In the example of FIG. 2, the communication interface 212 determines whether to establish the priority channel 155 of FIG. 1. In the event the communication interface 212 determines to establish the priority channel, the communication interface 212 establishes the priority channel 155 of FIG. 1 for the edge platforms (e.g., the edge platforms 140, 150) in the edge environment 110. In examples disclosed herein, the communication interface 212 establishes the priority channel 155 as a virtual communication channel. Additionally or alternatively, in other examples disclosed herein, the communication interface 212 may establish the priority channel 155 as a physical communication channel (e.g., a wired Ethernet connection, etc.).
In an example operation, the communication interface 212 communicates with surrounding edge platforms (e.g., the edge platforms 140, 150) to ensure that telemetry data can be transmitted to corresponding components in a safe and efficient manner. For example, the communication interface 212 establishes the priority channel 155 to facilitate transmission of telemetry data in an isolated manner across the priority channel 155. Further in such an example, rather than transmitting telemetry data across a public internet connection, telemetry data may be communicated across the priority channel 155. In examples disclosed herein, the communication interface 212 establishes the priority channel 155 as an overlay network that operates according to a quality of service (QoS) guarantee. The communication interface 212 initiates establishment of the priority channel 155 by configuring a portion of bandwidth from a communication connection to the server 112 (FIG. 1) (e.g., a communication connection with low performance and high resiliency) as a first bandwidth portion of the priority channel 155. As used herein, a communication connection refers to any suitable wired and/or wireless communication method and/or apparatus structured to facilitate transmission of data. Further, the communication interface 212 configures a portion of bandwidth from the carrier-grade internet connection between edge platforms (e.g., the edge platforms 140, 150) as a second bandwidth portion of the priority channel 155. In this manner, the communication interface 212 may combine the first portion (e.g., the server connection) with the second portion (e.g., the carrier-grade internet) when telemetry data is to be transmitted. For example, the communication interface 212 may establish the first and second bandwidth portions as segments of bandwidth that are to be used to carry telemetry data (e.g., segments of bandwidth that make up the priority channel 155).
In this manner, the telemetry data can be transmitted at a reliable, consistent rate to all sources and/or consumers. Further, such a priority channel 155 enables edge platforms (e.g., the edge platforms 140, 150) that implement the distributed database 208 to make decisions based on synchronized telemetry data accessible by authorized users.
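For illustration purposes only, combining the two bandwidth portions into one priority channel might be sketched as follows. The segment labels and the numeric capacities are invented for illustration; the disclosure does not specify bandwidth values:

```python
class PriorityChannel:
    """Illustrative sketch of the priority channel 155 as an ordered set of
    bandwidth segments combined into one end-to-end channel."""

    def __init__(self):
        self.segments = []  # list of (label, bandwidth in Mbps)

    def add_segment(self, label, mbps):
        self.segments.append((label, mbps))

    def capacity(self):
        # Telemetry traverses every combined segment, so the sustainable
        # end-to-end rate is bounded by the narrowest segment.
        if not self.segments:
            return 0
        return min(mbps for _, mbps in self.segments)

ch = PriorityChannel()
ch.add_segment("server-connection", 50)        # first bandwidth portion
ch.add_segment("carrier-grade-internet", 200)  # second bandwidth portion
```

The bottleneck computation shows why a consistent, reliable rate can be guaranteed only up to the capacity of the most constrained segment of the channel.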
In the example of FIG. 2, the service manager 214 is structured to determine whether a wish list (e.g., telemetry wish list) is published. For example, a different telemetry controller (e.g., the telemetry controller 130B) may receive and publish a wish list (e.g., telemetry wish list). In such a manner, the service managers 214 located in the remaining telemetry controllers (e.g., the telemetry controllers 130A, 130C) determine whether the wish list (e.g., telemetry wish list) is published. In the event the service manager 214 determines a wish list (e.g., telemetry wish list) is not published, the service manager 214 continues to wait. Alternatively, in the event the service manager 214 determines a wish list (e.g., telemetry wish list) is published, the extractor 220 parses the wish list (e.g., telemetry wish list) to identify the requested telemetry data and associated tasks. Further, the service manager 214 is operable to process a commitment. Commitment processing is explained in further detail below.
In the example of FIG. 2, the publisher 216 is structured to publish the wish list (e.g., telemetry wish list) to various components. When publishing the wish list (e.g., telemetry wish list), the publisher 216 may also publish advertisements relating to one or more commitments to be assigned to fulfill the wish list (e.g., telemetry wish list). In examples disclosed herein, the commitments include a time period and a data scope. As used herein, a time period in a commitment refers to a period of time (e.g., ten minutes, one week, etc.) in which telemetry data is requested and/or accessible. Accordingly, acceptances of commitments to provide telemetry data are likewise published by the publisher 216. In examples disclosed herein, telemetry data collected by the telemetry controllers 130A-C may be published by the publisher 216 (e.g., the components may be notified of the available telemetry data). In some examples disclosed herein, telemetry data may be included in the distributed database 208 via the naming layer 202 without explicit publication or notification by the publisher 216. In such an example, the telemetry data may be timestamped to ensure consumers of the telemetry data can identify the relevancy of the telemetry data (e.g., the age of the telemetry data).
In the example of FIG. 2, the commitment manager 218 is structured to determine whether a commitment from another telemetry controller 130A-C and/or the same telemetry controller 130A-C is received. In the event the commitment manager 218 determines a commitment is not received, the commitment manager 218 continues to wait. In the event a commitment is not received (e.g., not accepted) from any of the components, the commitment manager 218 may communicate with the source manager 204 to obtain a back-up commitment and/or a historical listing of the telemetry data. The back-up may be obtained from a component operable to provide telemetry data in the event commitments are not obtained, received, and/or otherwise accepted. Such a back-up ensures the amount of telemetry data obtained is non-zero.
Alternatively, in the event the commitment manager 218 determines a commitment is received, the commitment manager 218 communicates an indication to the publisher 216 to publish the commitment. The commitment manager 218 further determines whether all commitments associated with a wish list (e.g., telemetry wish list) are received. In the event the commitment manager 218 determines all commitments are not received, the commitment manager 218 may continue to wait. In addition, responsive to the commitment manager 218 determining all commitments are not received, the example commitment determiner 226 may determine whether there is an available commitment that can be processed by the service manager 214 of the respective telemetry controller 130A-C.
In the example of FIG. 2, the extractor 220, responsive to a wish list (e.g., telemetry wish list) being received and/or published, parses the wish list (e.g., telemetry wish list) to identify the requested telemetry data and/or tasks. Additionally or alternatively, in the event the commitment determiner 226 determines the extractor 220 can commit to implement a requested task in the wish list (e.g., telemetry wish list), the extractor 220 may process the commitment. For example, a wish list (e.g., telemetry wish list) may include a task to extract specified telemetry data. In this manner, the extractor 220 may commit to processing such a commitment (e.g., executing the task). In examples disclosed herein, the extractor 220 may, when processing a commitment, extract the relevant telemetry data. For example, the extractor 220 may extract the telemetry data from the distributed database 208 and/or any suitable database (e.g., the directory 206, the database 135 (FIG. 1), and/or the EP databases 148, 158 local to the edge platforms 140, 150, respectively). Further, the extractor 220 may determine whether aggregation of the telemetry data is needed. In the event the extractor 220 determines aggregation of the telemetry data is needed, the extractor 220 may aggregate the previously extracted telemetry data. In examples disclosed herein, aggregation may occur in the event the telemetry data is received in a non-desired format (e.g., telemetry data obtained from heterogeneous components), in the event the telemetry data can be combined, and/or in the event a task in the wish list (e.g., telemetry wish list) indicates to aggregate data. In such an example, the extractor 220 may aggregate the telemetry data into an appropriate format. In examples disclosed herein, the extractor 220 may aggregate data responsive to the processing of a commitment.
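For illustration purposes only, the parse-then-aggregate flow described above might be sketched as follows. The wish-list schema (`telemetry` and `tasks` keys) and the summary fields are assumptions for illustration, not part of the disclosure:

```python
def parse_wish_list(wish_list):
    """Split a wish list into the requested telemetry data and the
    associated tasks (illustrative schema)."""
    return wish_list.get("telemetry", []), wish_list.get("tasks", [])

def aggregate(values):
    # Combine telemetry obtained from heterogeneous components into a
    # single appropriate format (here, a simple count/mean summary).
    return {"count": len(values), "mean": sum(values) / len(values)}

wish = {"telemetry": ["cpu_utilization"], "tasks": ["extract", "aggregate"]}
requested, tasks = parse_wish_list(wish)

# Aggregation occurs only when a task in the wish list indicates it.
summary = aggregate([0.25, 0.75]) if "aggregate" in tasks else None
```

The conditional at the end mirrors the determination step above: extraction always follows parsing, while aggregation is performed only when the wish list calls for it or the data arrives in a non-desired format.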
In the example of FIG. 2, the collector 222, responsive to the commitment determiner 226 determining the collector 222 can commit to a task, processes such a commitment. In examples disclosed herein, the collector 222 may perform telemetry collection by identifying the telemetry data to be collected and subsequently collecting such telemetry data. In examples disclosed herein, the collector 222 may store such collected telemetry data in the distributed database 208. In an example operation, in the event the collector 222 commits to collecting telemetry data about utilization, the collector 222 identifies all sources that can provide such telemetry data and collects such telemetry data. In examples disclosed herein, the collector 222 may obtain telemetry data from surrounding components at any suitable time (e.g., within the next five minutes, in five minutes, etc.).
In the example of FIG. 2, the indexer 224, responsive to the commitment determiner 226 determining the indexer 224 can commit to a task, processes such a commitment. In examples disclosed herein, the indexer 224 may index and/or search data stored in the distributed database 208. In examples disclosed herein, the indexer 224 may perform filtering on the data stored in the distributed database 208.
In examples disclosed herein, the commitment determiner 226 is operable in connection with any of the publisher 216, the extractor 220, the collector 222, and/or the indexer 224. In examples disclosed herein, the commitment determiner 226 is structured to determine whether any of the publisher 216, the extractor 220, the collector 222, and/or the indexer 224 can execute a task in the wish list (e.g., telemetry wish list) (e.g., whether any of the publisher 216, the extractor 220, the collector 222, and/or the indexer 224 can accept a commitment). In this manner, the commitment determiner 226 analyzes the tasks in the wish list (e.g., telemetry wish list) to identify the commitments to be accepted. Further, the commitment determiner 226 determines whether it is viable for any of the publisher 216, the extractor 220, the collector 222, and/or the indexer 224 to accept a commitment. In examples disclosed herein, the commitment determiner 226 may determine a task in the wish list (e.g., telemetry wish list) is viable to be accepted by a component (e.g., any of the publisher 216, the extractor 220, the collector 222, and/or the indexer 224) in the event the commitment determiner 226 determines such a component is capable (e.g., has the necessary processing resources, is available, etc.) of executing the task associated with the commitment. In the event the commitment determiner 226 determines it is not viable for any of the publisher 216, the extractor 220, the collector 222, and/or the indexer 224 to accept a commitment, the commitment manager 218 determines whether there is another commitment available to analyze.
In examples disclosed herein, in the event the commitment determiner 226 determines that it is viable for any of the publisher 216, the extractor 220, the collector 222, and/or the indexer 224 to accept a commitment, the commitment determiner 226 further determines whether such a viable commitment has already been accepted by another component. For example, the commitment determiner 226 may determine that the extractor 220 is capable of accepting a commitment relating to extracting data. In such an example, the commitment determiner 226 identifies whether any other component (e.g., another extractor located in a separate telemetry controller) has already accepted such a commitment. In the event the commitment determiner 226 determines that another component has already accepted such a commitment, the commitment manager 218 determines whether there is another commitment to analyze.
Alternatively, in the event the commitment determiner 226 determines that another component has not accepted such a commitment (e.g., the commitment is available to be accepted), the commitment determiner 226 transmits an indication of the accepted commitment to the source manager 204 and to the publisher 216. In response, the component determined to execute the commitment (e.g., any of the publisher 216, the extractor 220, the collector 222, and/or the indexer 224) processes the commitment.
In examples disclosed herein, the commitment determiner 226 is structured to determine whether the time period associated with the commitment has expired. In the event the commitment determiner 226 determines the time period associated with the commitment has expired, the commitment manager 218 determines whether there is another commitment to analyze. Alternatively, in the event the commitment determiner 226 determines the time period has not expired, the component determined to execute the commitment (e.g., any of the publisher 216, the extractor 220, the collector 222, and/or the indexer 224) continues to process the commitment. In examples disclosed herein, the commitment determiner 226 may, upon determining that the time period has not expired, instruct the commitment manager 218 to determine whether there is another commitment to analyze. In such an example, the commitment determiner 226 may direct acceptance of multiple commitments without waiting for the time period of a previously accepted commitment to expire.
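For illustration purposes only, the commitment-selection checks described above (viability, availability, and expiration) might be combined as follows. The commitment fields and task names are hypothetical:

```python
def select_commitment(commitments, component_capabilities, now):
    """Illustrative sketch of the commitment determiner 226: return the
    first commitment that is viable for this controller's components,
    not already accepted by another component, and not expired."""
    for c in commitments:
        viable = c["task"] in component_capabilities        # can execute it
        available = c["accepted_by"] is None                # not taken
        unexpired = c["expires_at"] > now                   # time period open
        if viable and available and unexpired:
            return c
    # No match: the commitment manager moves on to analyze another commitment.
    return None

commitments = [
    {"task": "index",   "accepted_by": None,                        "expires_at": 100},
    {"task": "extract", "accepted_by": "telemetry-controller-130B", "expires_at": 100},
    {"task": "collect", "accepted_by": None,                        "expires_at": 100},
]
chosen = select_commitment(commitments, {"extract", "collect"}, now=10)
```

Here the extract commitment is skipped because another controller already accepted it, so the collector's commitment is chosen instead, matching the already-accepted check described above.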
While an example manner of implementing the telemetry controllers 130A-C of FIG. 1 is illustrated in FIG. 2, one or more of the elements, processes and/or devices illustrated in FIG. 2 may be combined, divided, re-arranged, omitted, eliminated and/or implemented in any other way. Further, the example naming layer 202, the example source manager 204, the example directory 206, the example distributed database 208, the example credential manager 210, the example communication interface 212, the example service manager 214, the example publisher 216, the example commitment manager 218, the example extractor 220, the example collector 222, the example indexer 224, and/or, more generally, the example telemetry controllers 130A-C of FIGS. 1 and/or 2 may be implemented by hardware, software, firmware and/or any combination of hardware, software and/or firmware. Thus, for example, any of the example naming layer 202, the example source manager 204, the example directory 206, the example distributed database 208, the example credential manager 210, the example communication interface 212, the example service manager 214, the example publisher 216, the example commitment manager 218, the example extractor 220, the example collector 222, the example indexer 224, and/or, more generally, the example telemetry controllers 130A-C of FIGS. 1 and/or 2 could be implemented by one or more analog or digital circuit(s), logic circuits, programmable processor(s), programmable controller(s), graphics processing unit(s) (GPU(s)), digital signal processor(s) (DSP(s)), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)) and/or field programmable logic device(s) (FPLD(s)).
When reading any of the apparatus or system claims of this patent to cover a purely software and/or firmware implementation, at least one of the example naming layer 202, the example source manager 204, the example directory 206, the example distributed database 208, the example credential manager 210, the example communication interface 212, the example service manager 214, the example publisher 216, the example commitment manager 218, the example extractor 220, the example collector 222, and/or the example indexer 224 is/are hereby expressly defined to include a non-transitory computer readable storage device or storage disk such as a memory, a digital versatile disk (DVD), a compact disk (CD), a Blu-ray disk, etc. including the software and/or firmware. Further still, the example telemetry controllers 130A-C of FIGS. 1 and/or 2 may include one or more elements, processes and/or devices in addition to, or instead of, those illustrated in FIG. 2, and/or may include more than one of any or all of the illustrated elements, processes and devices. As used herein, the phrase "in communication," including variations thereof, encompasses direct communication and/or indirect communication through one or more intermediary components, and does not require direct physical (e.g., wired) communication and/or constant communication, but rather additionally includes selective communication at periodic intervals, scheduled intervals, aperiodic intervals, and/or one-time events.
Flowcharts representative of example hardware logic, machine readable instructions, hardware implemented state machines, and/or any combination thereof for implementing the telemetry controllers 130A-C of FIGS. 1 and/or 2 are shown in FIGS. 3, 4, 5, 6, 7, and/or 8. The machine readable instructions may be one or more executable programs or portion(s) of an executable program for execution by a computer processor such as the processor 912 shown in the example processor platform 900 discussed below in connection with FIG. 9. The program may be embodied in software stored on a non-transitory computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a DVD, a Blu-ray disk, or a memory associated with the processor 912, but the entire program and/or parts thereof could alternatively be executed by a device other than the processor 912 and/or embodied in firmware or dedicated hardware. Further, although the example program is described with reference to the flowcharts illustrated in FIGS. 3, 4, 5, 6, 7, and/or 8, many other methods of implementing the example telemetry controllers 130A-C may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined. Additionally or alternatively, any or all of the blocks may be implemented by one or more hardware circuits (e.g., discrete and/or integrated analog and/or digital circuitry, an FPGA, an ASIC, a comparator, an operational-amplifier (op-amp), a logic circuit, etc.) structured to perform the corresponding operation without executing software or firmware.
The machine readable instructions described herein may be stored in one or more of a compressed format, an encrypted format, a fragmented format, a compiled format, an executable format, a packaged format, etc. Machine readable instructions as described herein may be stored as data (e.g., portions of instructions, code, representations of code, etc.) that may be utilized to create, manufacture, and/or produce machine executable instructions. For example, the machine readable instructions may be fragmented and stored on one or more storage devices and/or computing devices (e.g., servers). The machine readable instructions may require one or more of installation, modification, adaptation, updating, combining, supplementing, configuring, decryption, decompression, unpacking, distribution, reassignment, compilation, etc. in order to make them directly readable, interpretable, and/or executable by a computing device and/or other machine. For example, the machine readable instructions may be stored in multiple parts, which are individually compressed, encrypted, and stored on separate computing devices, wherein the parts when decrypted, decompressed, and combined form a set of executable instructions that implement a program such as that described herein.
In another example, the machine readable instructions may be stored in a state in which they may be read by a computer, but require addition of a library (e.g., a dynamic link library (DLL)), a software development kit (SDK), an application programming interface (API), etc. in order to execute the instructions on a particular computing device or other device. In another example, the machine readable instructions may need to be configured (e.g., settings stored, data input, network addresses recorded, etc.) before the machine readable instructions and/or the corresponding program(s) can be executed in whole or in part. Thus, the disclosed machine readable instructions and/or corresponding program(s) are intended to encompass such machine readable instructions and/or program(s) regardless of the particular format or state of the machine readable instructions and/or program(s) when stored or otherwise at rest or in transit.
The machine readable instructions described herein can be represented by any past, present, or future instruction language, scripting language, programming language, etc. For example, the machine readable instructions may be represented using any of the following languages: C, C++, Java, C#, Perl, Python, JavaScript, HyperText Markup Language (HTML), Structured Query Language (SQL), Swift, etc.
As mentioned above, the example processes of FIGS. 3, 4, 5, 6, 7, and/or 8 may be implemented using executable instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium such as a hard disk drive, a flash memory, a read-only memory, a compact disk, a digital versatile disk, a cache, a random-access memory and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term non-transitory computer readable medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media.
“Including” and “comprising” (and all forms and tenses thereof) are used herein to be open ended terms. Thus, whenever a claim employs any form of “include” or “comprise” (e.g., comprises, includes, comprising, including, having, etc.) as a preamble or within a claim recitation of any kind, it is to be understood that additional elements, terms, etc. may be present without falling outside the scope of the corresponding claim or recitation. As used herein, when the phrase “at least” is used as the transition term in, for example, a preamble of a claim, it is open-ended in the same manner as the term “comprising” and “including” are open ended. The term “and/or” when used, for example, in a form such as A, B, and/or C refers to any combination or subset of A, B, C such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, and (7) A with B and with C. As used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. As used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. 
Similarly, as used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B.
As used herein, singular references (e.g., “a”, “an”, “first”, “second”, etc.) do not exclude a plurality. The term “a” or “an” entity, as used herein, refers to one or more of that entity. The terms “a” (or “an”), “one or more”, and “at least one” can be used interchangeably herein. Furthermore, although individually listed, a plurality of means, elements or method actions may be implemented by, e.g., a single unit or processor. Additionally, although individual features may be included in different examples or claims, these may possibly be combined, and the inclusion in different examples or claims does not imply that a combination of features is not feasible and/or advantageous.
FIG. 3 is a flowchart representative of a process 300 that may be implemented using logic or machine readable instructions that may be executed to implement the telemetry controller 130A-C of FIGS. 1 and/or 2 when a wish list (e.g., telemetry wish list) is obtained from a consumer. At block 302, the source manager 204 identifies whether there are new participants (e.g., the servers 112, 114, 116) in the cloud environment 105 and/or edge platforms in the edge environment 110 (e.g., the edge platforms 140, 150) of FIG. 1. In some examples, the control of block 302 may include identifying whether there are additional endpoint devices in the endpoint environment 115 (e.g., the endpoint devices 160, 165, 170, 175, 180, 185) of FIG. 1. In the event the source manager 204 identifies there is a new participant (e.g., the servers 112, 114, 116) in the cloud environment 105 and/or edge platform in the edge environment 110 (e.g., the edge platforms 140, 150) (e.g., the control of block 302 returns a result of YES), the source manager 204 stores identifying information (e.g., naming information) in the directory 206. (Block 304). In the event the source manager 204 does not identify a new participant (e.g., the servers 112, 114, 116) in the cloud environment 105 and/or edge platform in the edge environment 110 (e.g., the edge platforms 140, 150) (e.g., the control of block 302 returns a result of NO), control proceeds to block 306.
At block 306, the source manager 204 determines whether a wish list (e.g., telemetry wish list) is obtained. (Block 306). For example, an edge consumer task, application, and/or microservice may advertise a wish list (e.g., telemetry wish list) to the edge platforms requesting data for use in a further computing process. For example, the source manager 204 may determine whether a wish list (e.g., telemetry wish list) is obtained from a consumer of any of the participants (e.g., the servers 112, 114, 116) in the cloud environment 105 and/or edge platforms in the edge environment 110 (e.g., the edge platforms 140, 150) of FIG. 1. In the event the source manager 204 determines a wish list (e.g., telemetry wish list) is not obtained (e.g., the control of block 306 returns a result of NO), control waits. Alternatively, in the event the source manager 204 determines a wish list (e.g., telemetry wish list) is obtained (e.g., the control of block 306 returns a result of YES), the publisher 216 publishes the wish list (e.g., telemetry wish list) to various components. (Block 308).
At block 310, the commitment manager 218 determines whether a commitment from another telemetry controller 130A-C and/or the same telemetry controller 130A-C is received. (Block 310). In the event the commitment manager 218 determines a commitment is not received (e.g., the control of block 310 returns a result of NO), the commitment manager 218 continues to wait. In some examples disclosed herein, the commitment manager 218 may not wait indefinitely and, thus, the commitment manager 218 may request the service manager 214 to provide a commitment from a backup provider. For example, the service manager 214 may have access to backup providers (e.g., components that have previously committed to similar tasks, components that can provide some or all of the commitment, etc.) that are capable of committing. Alternatively, in the event the commitment manager 218 determines a commitment is received (e.g., the control of block 310 returns a result of YES), the publisher 216 publishes the commitment. (Block 312).
At block 314, the source manager 204 links the name of the one or more committed devices to the directory 206. (Block 314). At block 316, the credential manager 210 determines whether credentials are needed for components when accessing data in the distributed database 208. (Block 316). In the event the credential manager 210 determines credentials are not needed (e.g., the control of block 316 returns a result of NO), control proceeds to block 320. Alternatively, in the event the credential manager 210 determines credentials are needed (e.g., the control of block 316 returns a result of YES), the credential manager 210 generates credentials for the component when accessing data in the distributed database 208. (Block 318).
At block 320, the communication interface 212 determines whether to establish the priority channel 155 of FIG. 1. (Block 320). In the event the communication interface 212 determines not to establish the priority channel 155 (e.g., the control of block 320 returns a result of NO), control proceeds to block 324. Alternatively, in the event the communication interface 212 determines to establish the priority channel (e.g., the control of block 320 returns a result of YES), the communication interface 212 establishes the priority channel 155 of FIG. 1 for the edge platforms (e.g., the edge platforms 140, 150) in the edge environment 110. (Block 322). The instruction in block 322 is explained in further detail below.
At block 324, the commitment manager 218 determines whether all commitments associated with a wish list (e.g., telemetry wish list) are received. (Block 324). In the event the commitment manager 218 determines all commitments are not received (e.g., the control of block 324 returns a result of NO), control returns to block 310. Alternatively, responsive to the control of block 324 returning a result of NO, the commitment manager 218 may continue to wait.
In the event the commitment manager 218 determines all commitments are received (e.g., the control of block 324 returns a result of YES), the credential manager 210 transmits the telemetry data to the consumer. (Block 326). Further, the source manager 204 determines whether an additional wish list (e.g., telemetry wish list) is obtained. (Block 328). In the event the source manager 204 determines an additional wish list (e.g., telemetry wish list) is obtained (e.g., the control of block 328 returns a result of YES), control returns to block 308. Alternatively, in the event the source manager 204 determines an additional wish list (e.g., telemetry wish list) is not obtained (e.g., the control of block 328 returns a result of NO), the process 300 stops.
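The wish-list handling of FIG. 3 can be summarized in a short, hypothetical sketch. The class and method names below (TelemetryController, register_participant, handle_wish_list) are illustrative placeholders only and are not part of the disclosed implementation; the sketch merely shows participants being recorded in a directory (blocks 302-304), commitments being linked to committed devices (blocks 310-314), and completion being detected when every requested task is covered (block 324).

```python
class TelemetryController:
    """Illustrative sketch of the FIG. 3 control flow; names are hypothetical."""

    def __init__(self):
        self.directory = {}    # maps participant names to identifying information
        self.commitments = []  # (name, tasks) commitments received for the wish list

    def register_participant(self, name, info):
        # Blocks 302-304: store identifying (naming) information for any
        # new participant in the directory.
        self.directory[name] = info

    def handle_wish_list(self, wish_list, committers):
        # Blocks 308-314: after the wish list is published, gather commitments
        # and link the committed devices' names into the directory.
        published = list(wish_list["tasks"])
        for name, accepted_tasks in committers.items():
            self.commitments.append((name, accepted_tasks))
            self.directory.setdefault(name, {})["committed"] = accepted_tasks
        # Block 324: all commitments are received when every published task
        # is covered by at least one commitment.
        covered = {t for _, tasks in self.commitments for t in tasks}
        return covered == set(published)
```

In this sketch, a True return corresponds to the YES branch of block 324, after which the telemetry data would be transmitted to the consumer (block 326).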
FIG. 4 is a flowchart representative of a process 400 that may be implemented using logic or machine readable instructions that may be executed to implement the telemetry controller 130A-C of FIGS. 1 and/or 2 when a wish list (e.g., telemetry wish list) is published. At block 402, the service manager 214 determines whether a wish list (e.g., telemetry wish list) is published. (Block 402). For example, a different telemetry controller (e.g., the telemetry controller 130B) may receive and publish a wish list (e.g., telemetry wish list). In such a manner, the service manager 214 located in each of the remaining telemetry controllers (e.g., the telemetry controllers 130A, 130C) determines whether the wish list (e.g., telemetry wish list) is published. In the event the service manager 214 determines a wish list (e.g., telemetry wish list) is not published (e.g., the control of block 402 returns a result of NO), the service manager 214 continues to wait. Alternatively, in the event the service manager 214 determines a wish list (e.g., telemetry wish list) is published (e.g., the control of block 402 returns a result of YES), the extractor 220 parses the wish list (e.g., telemetry wish list) to identify the requested telemetry data and associated tasks. (Block 404).
At block 406, the commitment determiner 226 determines whether it is viable for a component (e.g., any of the publisher 216, the extractor 220, the collector 222, and/or the indexer 224) to accept a commitment (e.g., a commitment may not be viable because the component of interest does not have the processing capabilities or is not available). (Block 406). In the event the commitment determiner 226 determines it is not viable for a component (e.g., any of the publisher 216, the extractor 220, the collector 222, and/or the indexer 224) to accept a commitment (e.g., the control of block 406 returns a result of NO), control proceeds to block 416. Alternatively, in the event the commitment determiner 226 determines that it is viable for a component (e.g., any of the publisher 216, the extractor 220, the collector 222, and/or the indexer 224) to accept a commitment (e.g., the control of block 406 returns a result of YES), the commitment determiner 226 further determines whether such a viable commitment has already been accepted by another component. (Block 408). For example, the commitment determiner 226 may determine that the extractor 220 is capable of accepting a commitment relating to extracting data. In such an example, the commitment determiner 226 identifies whether any other component (e.g., another extractor located in a separate telemetry controller) has already accepted such a commitment.
In the event the commitment determiner 226 determines that another component has already accepted such a commitment (e.g., the control of block 408 returns a result of YES), control proceeds to block 416. Alternatively, in the event the commitment determiner 226 determines that another component has not accepted such a commitment (e.g., the control of block 408 returns a result of NO), the commitment determiner 226 transmits an indication of the accepted commitment to the source manager 204 and to the publisher 216. (Block 410).
In response to the execution of the control illustrated in block 410, the component determined to execute the commitment (e.g., any of the publisher 216, the extractor 220, the collector 222, and/or the indexer 224) processes the commitment. (Block 412). The instruction illustrated in block 412 is explained in further detail below.
At block 414, the commitment determiner 226 is structured to determine whether the time period associated with the commitment has expired. (Block 414). In the event the commitment determiner 226 determines the time period associated with the commitment has expired (e.g., the control of block 414 returns a result of YES), control proceeds to block 416. Alternatively, in the event the commitment determiner 226 determines the time period has not expired (e.g., the control of block 414 returns a result of NO), the component determined to execute the commitment (e.g., any of the publisher 216, the extractor 220, the collector 222, and/or the indexer 224) continues to process the commitment. (Block 412).
At block 416, the commitment manager 218 determines whether there is another commitment available to analyze. (Block 416). In the event the commitment manager 218 determines there is another commitment to analyze (e.g., the control of block 416 returns a result of YES), control returns to block 406. Alternatively, in the event the commitment manager 218 determines there is not another commitment to analyze (e.g., the control of block 416 returns a result of NO), the process 400 stops.
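The viability and de-duplication checks of blocks 406-410 can be sketched as a small, hypothetical helper. The function name and parameters below are illustrative assumptions, not the disclosed API: a task is accepted only if some local component is capable of it (block 406) and no other telemetry controller has already accepted it (block 408), after which acceptance would be announced (block 410).

```python
def accept_commitments(tasks, capabilities, already_accepted):
    """Illustrative sketch of blocks 406-410; names are hypothetical.

    tasks: tasks parsed from the published wish list (block 404)
    capabilities: tasks this controller's components can viably perform
    already_accepted: tasks already committed to by other controllers
    """
    accepted = []
    for task in tasks:
        if task not in capabilities:    # block 406: not viable locally, skip
            continue
        if task in already_accepted:    # block 408: taken by another component
            continue
        accepted.append(task)           # block 410: record the accepted commitment
    return accepted
```

For instance, a controller capable of extraction and collection would decline an indexing task (not viable) and a collection task another controller already holds.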
In examples disclosed herein, the processes 300 and 400 of FIGS. 3 and 4 may be applied as control to be executed using a variety of data-level networking mechanisms. For example, in the event telemetry data is implemented in an NDN, control may include the service manager 214 sending an interest packet, obtaining a response regarding the execution and/or completion of the interest packet, and utilizing the response to route to the responders to fulfill the interest packet to obtain telemetry data. In such an example, the telemetry data may be cached by the service manager 214 for subsequent requests. In an alternate example, the telemetry controller 130A-C may be implemented in a RESTful request and/or response network. In the event the telemetry controller 130A-C is implemented in a RESTful request and/or response network, the service manager 214 may execute RESTful interactions such as, for example, CREATE, GET<read>, PUT<update>, DEL, NOTIFY, etc., to broker telemetry data.
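The RESTful brokering mentioned above can be illustrated with a minimal in-memory sketch. The TelemetryBroker class and its method names are hypothetical stand-ins for the CREATE/GET/PUT/DEL verbs, not the disclosed service manager; a real deployment would map these onto HTTP or another request/response transport.

```python
class TelemetryBroker:
    """Hypothetical in-memory analogue of the RESTful CREATE/GET/PUT/DEL verbs."""

    def __init__(self):
        self._store = {}  # key: telemetry resource name, value: telemetry record

    def create(self, key, value):
        # CREATE: register a new telemetry resource.
        self._store[key] = value

    def get(self, key):
        # GET<read>: return the telemetry record, or None if absent.
        return self._store.get(key)

    def put(self, key, value):
        # PUT<update>: update only resources that already exist.
        if key in self._store:
            self._store[key] = value

    def delete(self, key):
        # DEL: remove the resource if present; silently ignore otherwise.
        self._store.pop(key, None)
```

A NOTIFY interaction could be layered on top by having subscribers register callbacks invoked on create/put, but that is omitted here for brevity.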
FIG. 5 is a flowchart representative of a process that may be implemented using logic or machine readable instructions that may be executed to implement the extractor 220 of FIG. 2 to process a commitment by extracting and/or aggregating telemetry data. For example, the process of FIG. 5 may be executed by the extractor 220 to execute the instructions 412 of FIG. 4 in the event the commitment is for extraction.
In FIG. 5, in the event the commitment determiner 226 determines the extractor 220 can commit to implement a requested task in the wish list (e.g., telemetry wish list), the extractor 220 may process the commitment. For example, a wish list (e.g., telemetry wish list) may include a task to extract specified telemetry data. At block 502, the extractor 220 extracts the relevant telemetry data. (Block 502). For example, the extractor 220 may extract the telemetry data from the distributed database 208 and/or any suitable database (e.g., the directory 206, the database 135 (FIG. 1), and/or the EP database 148, 158 local to the edge platform 140, 150, respectively). Further, the extractor 220 determines whether aggregation of the telemetry data is needed. (Block 504). In the event the extractor 220 determines aggregation of the telemetry data is needed (e.g., the control of block 504 returns a result of YES), the extractor 220 may aggregate the previously extracted telemetry data. (Block 506). In such an example, the extractor 220 may aggregate the telemetry data into an appropriate format (e.g., a client specific format, summary statistics, any suitable aggregation format for telemetry data, etc.). Alternatively, in response to the extractor 220 determining aggregation of the telemetry data is not needed (e.g., the control of block 504 returns a result of NO), control proceeds to block 508.
At block 508, the extractor 220 stores the telemetry data in the distributed database 208. (Block 508). The process then returns to block 414 of FIG. 4 responsive to the execution of the control in block 508.
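The extract-aggregate-store flow of FIG. 5 can be sketched as a single hypothetical function. The function and parameter names are illustrative assumptions (the source and distributed databases are modeled as plain dictionaries), and the summary-statistics aggregation shown is just one of the formats the disclosure contemplates.

```python
def process_extraction(source_db, distributed_db, keys, aggregate=False):
    """Illustrative sketch of FIG. 5; names and formats are hypothetical."""
    # Block 502: extract the relevant telemetry data from a source database.
    extracted = [source_db[k] for k in keys if k in source_db]
    # Blocks 504-506: when needed, aggregate into summary statistics
    # (one example of "an appropriate format").
    if aggregate and extracted:
        payload = {"count": len(extracted),
                   "mean": sum(extracted) / len(extracted)}
    else:
        payload = extracted
    # Block 508: store the (possibly aggregated) result in the
    # distributed database, keyed by the requested metric names.
    distributed_db[tuple(keys)] = payload
    return payload
```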
FIG. 6 is a flowchart representative of a process that may be implemented using logic or machine readable instructions that may be executed to implement the collector 222 of FIG. 2 to process a commitment by collecting telemetry data. For example, the process of FIG. 6 may be executed by the collector 222 to execute the instructions 412 of FIG. 4 in the event the commitment is for collection.
In the example of FIG. 6, the collector 222 identifies the telemetry data to be collected. (Block 602). In response to identifying the telemetry data to be collected, the collector 222 collects the telemetry data. (Block 604).
At block 606, the collector 222 stores the collected telemetry data in the distributed database 208. (Block 606). The process then returns to block 414 of FIG. 4 responsive to the execution of the control in block 606.
FIG. 7 is a flowchart representative of a process that may be implemented using logic or machine readable instructions that may be executed to implement the indexer 224 of FIG. 2 to process a commitment by indexing and/or searching telemetry data. For example, the process of FIG. 7 may be executed by the indexer 224 to execute the instructions 412 of FIG. 4 in the event the commitment is for indexing.
In FIG. 7, the indexer 224 performs indexing. (Block 702). For example, the indexer 224 may index data stored in the distributed database 208. Further, the indexer 224 may perform searching. (Block 704). For example, the indexer 224 may search for specified telemetry data stored in the distributed database 208. In response to the execution of the control illustrated in block 704, the indexer 224 stores telemetry data (e.g., the indexed and/or searched telemetry data) in the distributed database. (Block 706). The process then returns to block 414 of FIG. 4 responsive to the execution of the control in block 706.
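The indexing and searching of FIG. 7 can be sketched with two hypothetical helpers. The record layout (a "metric" field per stored entry) and the function names are illustrative assumptions; the sketch shows an inverted index built over the distributed database (block 702) that then answers metric-name lookups (block 704).

```python
def build_index(distributed_db):
    """Block 702 (illustrative): index stored telemetry entries by metric name."""
    index = {}
    for key, record in distributed_db.items():
        index.setdefault(record["metric"], []).append(key)
    return index


def search(index, metric):
    """Block 704 (illustrative): find entry keys carrying a specified metric."""
    return index.get(metric, [])
```

The indexed results (e.g., the index itself, or the keys returned by a search) could then be written back to the distributed database, per block 706.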
FIG. 8 is a flowchart representative of a process that may be implemented using logic or machine readable instructions that may be executed to implement the communication interface 212 of FIG. 2 to establish the priority channel 155 of FIG. 1. The process of FIG. 8 illustrates example instructions to execute the control of block 322 of FIG. 3.
In FIG. 8, the communication interface 212 configures a first bandwidth portion from a communication connection to the server 112 (FIG. 1) (e.g., a channel with low performance and high resiliency) as a first portion of the priority channel 155. (Block 802). Additionally, the communication interface 212 configures a second bandwidth portion from the communication connection between telemetry controllers 130A-C (e.g., the edge platforms 140, 150) as a second portion of the priority channel 155. (Block 804). In response, the communication interface 212 combines the first portion (e.g., the server connection) with the second portion (e.g., the carrier-grade internet). (Block 806). Control then returns to block 324 of FIG. 3 responsive to the execution of the control in block 806.
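The three steps of FIG. 8 amount to reserving a bandwidth slice from each of two connections and combining them. The sketch below is a hypothetical illustration only; the function name and the fractional reservations (25% of the server link, 50% of the inter-controller link) are invented defaults, not values from the disclosure.

```python
def establish_priority_channel(server_bw, peer_bw,
                               server_fraction=0.25, peer_fraction=0.5):
    """Illustrative sketch of FIG. 8; names and fractions are hypothetical.

    server_bw: bandwidth of the connection to the server (block 802 source)
    peer_bw:   bandwidth of the connection between edge platforms (block 804 source)
    Returns the combined priority-channel bandwidth (block 806).
    """
    first_portion = server_bw * server_fraction   # block 802: server-link slice
    second_portion = peer_bw * peer_fraction      # block 804: inter-platform slice
    return first_portion + second_portion         # block 806: combined channel
```

Combining a resilient (but slower) server path with a faster inter-platform path in this way gives the priority channel both capacity and a fallback.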
FIG. 9 is a block diagram of an example processor platform 900 structured to execute the instructions of FIGS. 3, 4, 5, 6, 7, and/or 8 to implement the telemetry controller 130B of the first edge platform 140 of FIGS. 1 and/or 2. While the processor platform 900 of FIG. 9 is described in connection with the telemetry controller 130B of the first edge platform 140 of FIGS. 1 and/or 2, any suitable telemetry controller 130A-C may be implemented. For example, the processor platform 900 may be structured to execute the instructions of FIGS. 3, 4, 5, 6, 7, and/or 8 to implement the telemetry controller 130C of the second edge platform 150 of FIGS. 1 and/or 2. The processor platform 900 can be, for example, a server, a personal computer, a workstation, a self-learning machine (e.g., a neural network), a mobile device (e.g., a cell phone, a smart phone, a tablet such as an iPad™), a personal digital assistant (PDA), an Internet appliance, a DVD player, a CD player, a digital video recorder, a Blu-ray player, a gaming console, a personal video recorder, a set top box, a headset or other wearable device, or any other type of computing device.
The processor platform 900 of the illustrated example includes a processor 912. The processor 912 of the illustrated example is hardware. For example, the processor 912 can be implemented by one or more integrated circuits, logic circuits, microprocessors, GPUs, DSPs, or controllers from any desired family or manufacturer. The hardware processor may be a semiconductor based (e.g., silicon based) device. In this example, the processor implements the example naming layer 202, the example source manager 204, the example directory 206, the example distributed database 208, the example credential manager 210, the example communication interface 212, the example service manager 214, the example publisher 216, the example commitment manager 218, the example extractor 220, the example collector 222, the example indexer 224, and/or, more generally, the example telemetry controller 130B of FIGS. 1 and/or 2, the example first telemetry data 136A, the example first executable 137, the example first orchestrator 142, the example first scheduler 144, the example first EP database 148, the example first resource(s) 149, the example first security controller 161, and/or, more generally, the example first edge platform 140 of FIGS. 1 and/or 2. While the processor platform 900 of FIG. 9 is described in connection with the telemetry controller 130B of the first edge platform 140 of FIGS. 1 and/or 2, any suitable telemetry controller 130A-C and/or edge platform 140, 150 may be implemented. For example, the processor platform 900 may be structured to implement the example naming layer 202, the example source manager 204, the example directory 206, the example distributed database 208, the example credential manager 210, the example communication interface 212, the example service manager 214, the example publisher 216, the example commitment manager 218, the example extractor 220, the example collector 222, the example indexer 224, and/or, more generally, the example telemetry controller 130C of FIGS.
1 and/or 2, the example second telemetry data 136B, the example second executable 139, the example second orchestrator 152, the example second scheduler 154, the example second EP database 158, the example second resource(s) 159, the example second security controller 162, and/or, more generally, the example second edge platform 150 of FIGS. 1 and/or 2.
The processor 912 of the illustrated example includes a local memory 913 (e.g., a cache). The processor 912 of the illustrated example is in communication with a main memory including a volatile memory 914 and a non-volatile memory 916 via a bus 918. The volatile memory 914 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS® Dynamic Random Access Memory (RDRAM®) and/or any other type of random access memory device. The non-volatile memory 916 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 914, 916 is controlled by a memory controller.
The processor platform 900 of the illustrated example also includes an interface circuit 920. The interface circuit 920 may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), a Bluetooth® interface, a near field communication (NFC) interface, and/or a PCI express interface.
In the illustrated example, one or more input devices 922 are connected to the interface circuit 920. The input device(s) 922 permit(s) a user to enter data and/or commands into the processor 912. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, isopoint and/or a voice recognition system.
One or more output devices 924 are also connected to the interface circuit 920 of the illustrated example. The output devices 924 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display (LCD), a cathode ray tube display (CRT), an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer and/or speaker. The interface circuit 920 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip and/or a graphics driver processor.
The interface circuit 920 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 926. The communication can be via, for example, an Ethernet connection, a digital subscriber line (DSL) connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-sight wireless system, a cellular telephone system, etc.
The processor platform 900 of the illustrated example also includes one or more mass storage devices 928 for storing software and/or data. Examples of such mass storage devices 928 include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, redundant array of independent disks (RAID) systems, and digital versatile disk (DVD) drives.
The machine executable instructions 932 of FIGS. 3, 4, 5, 6, 7, and/or 8 may be stored in the mass storage device 928, in the volatile memory 914, in the non-volatile memory 916, and/or on a removable non-transitory computer readable storage medium such as a CD or DVD.
From the foregoing, it will be appreciated that example methods, apparatus and articles of manufacture have been disclosed that manage telemetry data in an edge environment. Example disclosed methods, apparatus and articles of manufacture improve the efficiency of using a computing device by enabling telemetry collection among edge platforms to provide telemetry data in a just-in-time, just-where-needed manner. Examples disclosed herein enable such telemetry collection by implementing a distributed network of telemetry controllers configured to commit to requests in a wish list (e.g., telemetry wish list) and further store telemetry data in a distributed database. Example disclosed methods, apparatus and articles of manufacture are well suited to a distributed, decentralized, and partial-trust environment for edge platforms existing in various locations and with various security constraints and/or privacy domains. Examples disclosed herein enable telemetry data to be lossy and non-consistent. In this manner, collection and processing of telemetry data into a distributed database may be timestamped to ensure that the most recent telemetry data is utilized. Such examples enable accurate interpolation and/or extrapolation to obtain any possible missing intermediate and/or current versions of telemetry data during all stages of processing.
Example disclosed methods, apparatus and articles of manufacture improve the efficiency of using a computing device by enabling telemetry data processing to occur independent of other data systems or applications. Examples disclosed herein may be implemented using existing data networks associated with networking trends such as NDNs, ICNs, CDNs, NFNs, and IoT resource models, and/or any suitable data network associated with edge computing.
Further, the disclosed methods, apparatus and articles of manufacture improve the efficiency of using a computing device by establishing a priority channel to facilitate safe and efficient transmission of telemetry data. Example disclosed methods, apparatus and articles of manufacture are accordingly directed to one or more improvement(s) in the functioning of a computer.
Although certain example methods, apparatus and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the claims of this patent.
Example methods, apparatus, systems, and articles of manufacture to manage telemetry data in an edge environment are disclosed herein. Further examples and combinations thereof include the following:
Example 1 includes an apparatus to manage telemetry data in an edge environment, the apparatus comprising a publisher included in a first edge platform to publish a wish list obtained from a consumer, the wish list including tasks to execute, a commitment determiner to determine whether a commitment is viable to execute at least one of the tasks in the wish list, the commitment to be processed to identify the telemetry data, and a communication interface to establish a communication channel to facilitate transmission of the telemetry data from the first edge platform to a second edge platform.
Example 2 includes the apparatus of example 1, wherein the communication interface is to establish the communication channel by configuring a first bandwidth portion from a first communication connection between the first edge platform and a server, configuring a second bandwidth portion from a second communication connection between the first edge platform and the second edge platform, and combining the first bandwidth portion and the second bandwidth portion.
Example 3 includes the apparatus of example 1, wherein the publisher is to publish acceptance of the commitment in response to the commitment determiner determining the commitment is viable.
Example 4 includes the apparatus of example 1, further including at least one of an extractor to process the commitment when the at least one of the tasks is to extract the telemetry data, a collector to process the commitment when the at least one of the tasks is to collect the telemetry data, or an indexer to process the commitment when the at least one of the tasks is to index the telemetry data.
Example 5 includes the apparatus of example 1, further including a service manager to process the commitment for a time period.
Example 6 includes the apparatus of example 1, further including a service manager to store the telemetry data in a database, the database distributed among the first edge platform and the second edge platform.
Example 7 includes the apparatus of example 1, further including a credential manager to provide credentialed access to the second edge platform to access the telemetry data.
Example 8 includes the apparatus of example 7, wherein the credential manager is to provide the credentialed access for a time period.
Example 9 includes a non-transitory computer readable storage medium comprising data which may be configured into executable instructions and, when configured and executed, cause at least one processor to at least publish a wish list obtained from a consumer, the wish list including tasks to execute, determine whether a commitment is viable to execute at least one of the tasks in the wish list, the commitment to be processed to identify telemetry data, and establish a communication channel to facilitate transmission of the telemetry data from a first edge platform to a second edge platform.
Example 10 includes the non-transitory computer readable storage medium of example 9, wherein the instructions, when executed, cause the at least one processor to establish the communication channel by configuring a first bandwidth portion from a first communication connection between the first edge platform and a server, configuring a second bandwidth portion from a second communication connection between the first edge platform and the second edge platform, and combining the first bandwidth portion and the second bandwidth portion.
Example 11 includes the non-transitory computer readable storage medium of example 9, wherein the instructions, when executed, cause the at least one processor to publish acceptance of the commitment in response to determining the commitment is viable.
Example 12 includes the non-transitory computer readable storage medium of example 9, wherein the instructions, when executed, cause the at least one processor to process the commitment by extracting the telemetry data when the at least one of the tasks is to extract the telemetry data, process the commitment by collecting the telemetry data when the at least one of the tasks is to collect the telemetry data, or process the commitment by indexing the telemetry data when the at least one of the tasks is to index the telemetry data.
Example 13 includes the non-transitory computer readable storage medium of example 9, wherein the instructions, when executed, cause the at least one processor to process the commitment for a time period.
Example 14 includes the non-transitory computer readable storage medium of example 9, wherein the instructions, when executed, cause the at least one processor to store the telemetry data in a database, the database distributed among the first edge platform and the second edge platform.
Example 15 includes the non-transitory computer readable storage medium of example 9, wherein the instructions, when executed, cause the at least one processor to provide credentialed access to the second edge platform to access the telemetry data.
Example 16 includes the non-transitory computer readable storage medium of example 15, wherein the instructions, when executed, cause the at least one processor to provide the credentialed access for a time period.
Example 17 includes a method to manage telemetry data in an edge environment, the method comprising publishing a wish list obtained from a consumer, the wish list including tasks to execute, determining whether a commitment is viable to execute at least one of the tasks in the wish list, the commitment to be processed to identify the telemetry data, and establishing a communication channel to facilitate transmission of the telemetry data from a first edge platform to a second edge platform.
Example 18 includes the method of example 17, further including configuring a first bandwidth portion from a first communication connection between the first edge platform and a server, configuring a second bandwidth portion from a second communication connection between the first edge platform and the second edge platform, and combining the first bandwidth portion and the second bandwidth portion.
Example 19 includes the method of example 17, further including publishing acceptance of the commitment in response to determining the commitment is viable.
Example 20 includes the method of example 17, further including processing the commitment by extracting the telemetry data when the at least one of the tasks is to extract the telemetry data, processing the commitment by collecting the telemetry data when the at least one of the tasks is to collect the telemetry data, or processing the commitment by indexing the telemetry data when the at least one of the tasks is to index the telemetry data.
Example 21 includes the method of example 17, further including processing the commitment for a time period.
Example 22 includes the method of example 17, further including storing the telemetry data in a database, the database distributed among the first edge platform and the second edge platform.
Example 23 includes the method of example 17, further including providing credentialed access to the second edge platform to access the telemetry data.
Example 24 includes the method of example 23, further including providing the credentialed access for a time period.
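The wish-list workflow recited in examples 17 through 24 (publishing a wish list, determining commitment viability, and publishing acceptance of viable commitments) can be sketched, purely for illustration and not as part of the claimed subject matter, as follows. All class and method names (e.g., `TelemetryController`, `is_viable`) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Commitment:
    task: str          # e.g., "collect", "extract", or "index"
    duration_s: int    # the commitment is processed for a time period

@dataclass
class TelemetryController:
    platform_id: str
    capabilities: set = field(default_factory=set)
    accepted: list = field(default_factory=list)

    def publish_wish_list(self, tasks):
        """Publish a consumer's wish list and accept viable commitments."""
        for task in tasks:
            commitment = Commitment(task=task, duration_s=60)
            if self.is_viable(commitment):
                # Publishing acceptance corresponds to example 19.
                self.accepted.append(commitment)
        return self.accepted

    def is_viable(self, commitment):
        # A commitment is viable when this platform supports the task.
        return commitment.task in self.capabilities

controller = TelemetryController("edge-1", capabilities={"collect", "index"})
accepted = controller.publish_wish_list(["collect", "extract", "index"])
print([c.task for c in accepted])  # ['collect', 'index']
```

Here the platform accepts only the tasks it can actually execute; the "extract" task would be left for another edge platform's controller to commit to.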
Example 25 includes an apparatus comprising means for publishing a wish list obtained from a consumer, the wish list including tasks to execute, means for determining whether a commitment is viable to execute at least one of the tasks in the wish list, the commitment to be processed to identify telemetry data, and means for establishing a communication channel to facilitate transmission of the telemetry data from a first edge platform to a second edge platform.
Example 26 includes the apparatus of example 25, wherein the establishing means is to establish the communication channel by configuring a first bandwidth portion from a first communication connection between the first edge platform and a server, configuring a second bandwidth portion from a second communication connection between the first edge platform and the second edge platform, and combining the first bandwidth portion and the second bandwidth portion.
Example 27 includes the apparatus of example 25, wherein the publishing means is to publish acceptance of the commitment in response to determining the commitment is viable.
Example 28 includes the apparatus of example 25, further including at least one of means for extracting telemetry data to process the commitment when the at least one of the tasks is to extract the telemetry data, means for collecting telemetry data to process the commitment when the at least one of the tasks is to collect the telemetry data, or means for indexing telemetry data to process the commitment when the at least one of the tasks is to index the telemetry data.
Example 29 includes the apparatus of example 25, further including means for managing to process the commitment for a time period.
Example 30 includes the apparatus of example 25, further including means for managing to store the telemetry data in a database, the database distributed among the first edge platform and the second edge platform.
Example 31 includes the apparatus of example 25, further including means for credential managing to provide credentialed access to an edge platform to access the telemetry data.
Example 32 includes the apparatus of example 31, wherein the credential managing means is to provide the credentialed access for a time period.
Example 33 includes an apparatus to manage telemetry data in an edge environment, the apparatus comprising a first telemetry controller to obtain a wish list, the first telemetry controller to publish the wish list to a second telemetry controller, parse the wish list to identify requested tasks in the wish list, accept a commitment to execute at least one of the requested tasks in the wish list, process the commitment to identify the telemetry data, store the telemetry data in a distributed database, and establish a communication channel to facilitate transmission of the telemetry data among the first telemetry controller and the second telemetry controller.
Example 34 includes the apparatus of example 33, wherein the first telemetry controller includes a communication interface, and wherein the communication interface is to establish the communication channel by configuring a first bandwidth portion from a first communication connection between the first telemetry controller and a server as a first portion, configuring a second bandwidth portion from a second communication connection between the first telemetry controller and the second telemetry controller as a second portion, and combining the first portion and the second portion.
Example 35 includes the apparatus of example 33, wherein the first telemetry controller is to publish the acceptance of the commitment.
Example 36 includes the apparatus of example 33, wherein the first telemetry controller is to process the commitment for a time period.
Example 37 includes the apparatus of example 33, wherein the distributed database is distributed among the first telemetry controller and the second telemetry controller.
Example 38 includes the apparatus of example 33, wherein the first telemetry controller is to provide credentialed access to the second telemetry controller to access the telemetry data in the distributed database.
Example 39 includes the apparatus of example 38, wherein the first telemetry controller is to provide the credentialed access for a time period.
Example 40 includes the apparatus of example 33, wherein the first telemetry controller is to process the commitment by performing at least one of collecting, extracting, publishing, or indexing the telemetry data.
Example 41 includes the apparatus of example 33, wherein the first telemetry controller is to obtain the wish list from a consumer, wherein the consumer includes at least one of a computing device, an edge platform, or a participant in a cloud environment.
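The channel establishment recited in examples 34, 43, 52, and 61 (configuring a bandwidth portion from each of two communication connections and combining the portions into one logical channel) can be sketched as follows. This is an illustrative arithmetic sketch only; the function name and fraction parameters are hypothetical and not part of the claimed subject matter.

```python
def combine_bandwidth(server_link_bps, peer_link_bps,
                      server_fraction=0.5, peer_fraction=0.5):
    """Carve a portion from each communication connection and combine
    them into a single logical channel for telemetry transmission."""
    # First portion: carved from the controller-to-server connection.
    first_portion = server_link_bps * server_fraction
    # Second portion: carved from the controller-to-controller connection.
    second_portion = peer_link_bps * peer_fraction
    # Combined channel bandwidth is the sum of the two portions.
    return first_portion + second_portion

# e.g., 100 Mbps to the server and 40 Mbps to the peer edge platform,
# with half of each connection allocated to the telemetry channel.
channel_bps = combine_bandwidth(100_000_000, 40_000_000)
print(channel_bps)  # 70000000.0
```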
Example 42 includes a non-transitory computer readable storage medium comprising data which may be configured into executable instructions and, when configured and executed, cause at least one processor to at least obtain a wish list, publish the wish list to a first telemetry controller, parse the wish list to identify requested tasks in the wish list, accept a commitment to execute at least one of the requested tasks in the wish list, process the commitment to identify telemetry data, store the telemetry data in a distributed database, and establish a communication channel to facilitate transmission of the telemetry data among the first telemetry controller and a second telemetry controller.
Example 43 includes the non-transitory computer readable storage medium of example 42, wherein the instructions, when executed, cause the at least one processor to establish the communication channel by configuring a first bandwidth portion from a first communication connection between the first telemetry controller and a server as a first portion, configuring a second bandwidth portion from a second communication connection between the first telemetry controller and the second telemetry controller as a second portion, and combining the first portion and the second portion.
Example 44 includes the non-transitory computer readable storage medium of example 42, wherein the instructions, when executed, cause the at least one processor to publish the acceptance of the commitment.
Example 45 includes the non-transitory computer readable storage medium of example 42, wherein the instructions, when executed, cause the at least one processor to process the commitment for a time period.
Example 46 includes the non-transitory computer readable storage medium of example 42, wherein the distributed database is distributed among the first telemetry controller and the second telemetry controller.
Example 47 includes the non-transitory computer readable storage medium of example 42, wherein the instructions, when executed, cause the at least one processor to provide credentialed access to the first telemetry controller to access the telemetry data in the distributed database.
Example 48 includes the non-transitory computer readable storage medium of example 47, wherein the instructions, when executed, cause the at least one processor to provide the credentialed access for a time period.
Example 49 includes the non-transitory computer readable storage medium of example 42, wherein the instructions, when executed, cause the at least one processor to process the commitment by performing at least one of collecting, extracting, publishing, or indexing the telemetry data.
Example 50 includes the non-transitory computer readable storage medium of example 42, wherein the instructions, when executed, cause the at least one processor to obtain the wish list from a consumer, and wherein the consumer includes at least one of a computing device, an edge platform, or a participant in a cloud environment.
Example 51 includes a method to manage telemetry data in an edge environment, the method comprising obtaining a wish list, publishing the wish list to a first telemetry controller, parsing the wish list to identify requested tasks in the wish list, accepting a commitment to execute at least one of the requested tasks in the wish list, processing the commitment to identify the telemetry data, storing the telemetry data in a distributed database, and establishing a communication channel to facilitate transmission of the telemetry data among the first telemetry controller and a second telemetry controller.
Example 52 includes the method of example 51, further including configuring a first bandwidth portion from a first communication connection between the first telemetry controller and a server as a first portion, configuring a second bandwidth portion from a second communication connection between the first telemetry controller and the second telemetry controller as a second portion, and combining the first portion and the second portion.
Example 53 includes the method of example 51, further including publishing the acceptance of the commitment.
Example 54 includes the method of example 51, further including processing the commitment for a time period.
Example 55 includes the method of example 51, wherein the distributed database is distributed among the first telemetry controller and the second telemetry controller.
Example 56 includes the method of example 51, further including providing credentialed access to the first telemetry controller to access the telemetry data in the distributed database.
Example 57 includes the method of example 56, further including providing the credentialed access for a time period.
Example 58 includes the method of example 51, further including processing the commitment by performing at least one of collecting, extracting, publishing, or indexing the telemetry data.
Example 59 includes the method of example 51, further including obtaining the wish list from a consumer, wherein the consumer includes at least one of a computing device, an edge platform, or a participant in a cloud environment.
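The credentialed, time-limited access recited in examples 56 and 57 (and in examples 15, 16, 23, 24, 38, 39, 47, and 48) can be sketched, for illustration only, as a credential manager that grants access to telemetry data for a bounded time period. The class and method names are hypothetical and not part of the claimed subject matter.

```python
import time

class CredentialManager:
    """Issues time-limited credentials for access to telemetry data."""

    def __init__(self):
        # Maps a requesting platform identifier to a credential expiry time.
        self._grants = {}

    def grant(self, platform_id, period_s):
        """Provide credentialed access for a time period (examples 56-57)."""
        self._grants[platform_id] = time.monotonic() + period_s

    def has_access(self, platform_id):
        """Access is valid only while the credential has not expired."""
        expiry = self._grants.get(platform_id)
        return expiry is not None and time.monotonic() < expiry

mgr = CredentialManager()
mgr.grant("edge-2", period_s=30)
print(mgr.has_access("edge-2"))  # True: credential still within its period
print(mgr.has_access("edge-3"))  # False: no credential was granted
```

Once the period elapses, `has_access` returns `False` without any explicit revocation step, matching the time-bounded character of the credentialed access in these examples.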
Example 60 includes an apparatus comprising means for telemetry controlling to obtain a wish list, publish the wish list to a first telemetry controller, parse the wish list to identify requested tasks in the wish list, accept a commitment to execute at least one of the requested tasks in the wish list, process the commitment to identify telemetry data, store the telemetry data in a distributed database, and establish a communication channel to facilitate transmission of the telemetry data among the first telemetry controller and a second telemetry controller.
Example 61 includes the apparatus of example 60, wherein the telemetry controlling means is to configure a first bandwidth portion from a first communication connection between the first telemetry controller and a server as a first portion, configure a second bandwidth portion from a second communication connection between the first telemetry controller and the second telemetry controller as a second portion, and combine the first portion and the second portion.
Example 62 includes the apparatus of example 60, wherein the telemetry controlling means is to publish the acceptance of the commitment.
Example 63 includes the apparatus of example 60, wherein the telemetry controlling means is to process the commitment for a time period.
Example 64 includes the apparatus of example 60, wherein the distributed database is distributed among the first telemetry controller and the second telemetry controller.
Example 65 includes the apparatus of example 60, wherein the telemetry controlling means is to provide credentialed access to the first telemetry controller to access the telemetry data in the distributed database.
Example 66 includes the apparatus of example 65, wherein the telemetry controlling means is to provide credentialed access for a time period.
Example 67 includes the apparatus of example 60, wherein the telemetry controlling means is to process the commitment by performing at least one of collecting, extracting, publishing, or indexing the telemetry data.
Example 68 includes the apparatus of example 60, wherein the telemetry controlling means is to obtain the wish list from a consumer, and wherein the consumer includes at least one of a computing device, an edge platform, or a participant in a cloud environment.
The following claims are hereby incorporated into this Detailed Description by this reference, with each claim standing on its own as a separate embodiment of the present disclosure.