Disclosure of Invention
The embodiments of the present application provide an intelligent data acquisition method, device and medium based on a front-end monitoring SDK (software development kit), which are used to solve the technical problem of how to achieve efficient and accurate front-end monitoring data acquisition without affecting the user experience.
In a first aspect, an embodiment of the present application provides an intelligent data acquisition method based on a front-end monitoring SDK (software development kit). The method comprises: when a Web application page is loaded, introducing a front-end monitoring SDK script through an external file tag and loading data acquisition logic preset in the front-end monitoring SDK script, wherein the data acquisition logic comprises performance data acquisition logic, user behavior data acquisition logic and error log acquisition logic; when the Web application page is running, performing intelligent data acquisition through an idle time data scheduling mechanism preset in the front-end monitoring SDK script; determining data to be uploaded through an incremental data uploading mechanism preset in the front-end monitoring SDK script and intelligently compressing the data to be uploaded; and dynamically selecting a data uploading strategy based on a real-time network condition so as to upload the compressed data to be uploaded to a data server.
In one implementation of the application, after the front-end monitoring SDK script is introduced through the external file tag, the method further comprises: detecting whether the browser supports the key functions to be applied, wherein the key functions to be applied comprise requestIdleCallback, Web Worker and the network status API; and initializing a default configuration, wherein the configuration types of the default configuration comprise data acquisition frequency, uploading strategy and scheduling parameters.
In one implementation of the application, the performance data acquisition logic comprises: monitoring page interaction behaviors of a user through JavaScript and determining performance parameters by utilizing the Performance API, wherein the performance parameters comprise page loading time, resource request time consumption and memory usage; and periodically acquiring the performance parameters based on the data acquisition frequency. The user behavior data acquisition logic comprises: monitoring user interaction events and recording related event information, wherein the user interaction events comprise clicking, scrolling and inputting; and monitoring DOM changes of the page through MutationObserver to acquire loading information of dynamic content. The error log acquisition logic comprises: acquiring global errors and uncaught Promise exceptions based on page-level listeners, and acquiring network request error information and resource loading failure information.
In one implementation of the application, the performance data acquisition and the user behavior data acquisition are pre-designated as low-priority tasks, and the error log acquisition is pre-designated as a high-priority task. Performing intelligent data acquisition through the idle time data scheduling mechanism preset in the front-end monitoring SDK script specifically comprises: when the requestIdleCallback function is available, detecting the idle time of the browser, temporarily storing the performance data and user behavior data acquired by the low-priority tasks in memory when the browser is busy, and reading the performance data and user behavior data when the browser is idle; and when the requestIdleCallback function is unavailable, dynamically adjusting the delay time for executing the high-priority task or the low-priority tasks through a preset setTimeout scheduling scheme based on the real-time rendering load of the page.
In one implementation of the application, after intelligent data acquisition is performed through the idle time data scheduling mechanism preset in the front-end monitoring SDK script, the method further comprises: identifying sensitive data in the acquired data, and performing desensitization processing on the sensitive data based on a preset desensitization rule.
In one implementation of the application, determining the data to be uploaded through the incremental data uploading mechanism preset in the front-end monitoring SDK script and intelligently compressing the data to be uploaded specifically comprises: calculating hash values of the previously and currently collected data of the same type by utilizing a hash algorithm; when the hash values are different, determining that the currently collected data is the data to be uploaded this time; performing standardization processing on the data to be uploaded based on a preset preprocessing rule; determining a real-time network condition and dynamically selecting a corresponding compression algorithm to be applied based on the real-time network condition, wherein the real-time network condition comprises a network type, an effective bandwidth and a delay; and compressing the standardized data to be uploaded based on the compression algorithm to be applied.
In one implementation of the application, dynamically selecting a data uploading strategy based on the real-time network condition so as to upload the compressed data to be uploaded to a data server specifically comprises: analyzing whether the real-time network condition meets a real-time uploading condition; if so, uploading the compressed data to be uploaded to the data server in real time; and if not, dynamically configuring a delayed uploading frequency and a batch size so as to upload the compressed data to be uploaded to the data server in batches.
In one implementation of the application, the method further comprises asynchronous processing of data acquisition, data scheduling, data compression and data uploading by using the Web Worker.
In a second aspect, an embodiment of the present application further provides a front-end monitoring SDK-based intelligent data acquisition device, where the device includes at least one processor, and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, and where the instructions are executable by the at least one processor to enable the at least one processor to perform a front-end monitoring SDK-based intelligent data acquisition method as in any one of the above.
In a third aspect, an embodiment of the present application further provides a non-volatile computer storage medium for intelligent data acquisition based on a front-end monitoring SDK, where computer executable instructions are stored, where the computer executable instructions, when executed, implement a method for intelligent data acquisition based on a front-end monitoring SDK as in any one of the above.
The intelligent data acquisition method, the intelligent data acquisition equipment and the intelligent data acquisition medium based on the front-end monitoring SDK have the following beneficial effects:
1. The consumption of network and system resources is reduced through idle time scheduling, incremental uploading and compression techniques. Data is collected during browser idle time, so that the influence on page loading and user interaction is reduced and the user experience is improved.
2. The method and the device realize efficient data acquisition, transmission and processing, and ensure the accuracy, integrity and security of monitoring data while guaranteeing page performance. The uploading strategy is adjusted automatically, so that efficient data transmission is ensured under various network conditions.
3. The method intelligently collects and transmits data while minimizing the influence on page performance, is suitable for front-end monitoring scenarios of various Web applications, and is particularly suitable for mobile terminals and weak network environments.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The embodiment of the application provides an intelligent data acquisition method, equipment and medium based on a front-end monitoring SDK (software development kit), which are used for solving the technical problem of how to realize efficient and accurate front-end monitoring data acquisition on the premise of not affecting user experience.
The following describes the technical scheme provided by the embodiment of the application in detail through the attached drawings.
Fig. 1 is a flowchart of an intelligent data acquisition method based on a front-end monitoring SDK according to an embodiment of the present application. As shown in fig. 1, the intelligent data acquisition method based on the front-end monitoring SDK provided by the embodiment of the application specifically includes the following steps:
Step 101: introducing a front-end monitoring SDK script through an external file tag when the Web application page is loaded, and loading the data acquisition logic preset in the front-end monitoring SDK script.
In one embodiment of the present application, in order to implement intelligent data collection based on the front-end monitoring SDK, first, when the application page is loaded, the monitoring SDK script is introduced through a <script> tag, so as to ensure that the monitoring SDK script is executed at the initial stage of page loading.
Further, whether the browser supports the key functions to be applied is detected, wherein the key functions to be applied comprise requestIdleCallback, Web Worker and the network status API; and a default configuration is initialized, wherein the configuration types of the default configuration comprise data acquisition frequency, uploading strategy and scheduling parameters.
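A minimal TypeScript sketch of this initialization step is given below for illustration only; the configuration field names (sampleInterval, uploadStrategy, idleTimeout) are hypothetical examples and not prescribed by the application.

```typescript
// Illustrative sketch: feature detection and default configuration at SDK startup.
// Field names (sampleInterval, uploadStrategy, idleTimeout) are hypothetical examples.
interface SdkConfig {
  sampleInterval: number;                 // data acquisition frequency, in ms
  uploadStrategy: 'realtime' | 'batch';   // uploading strategy
  idleTimeout: number;                    // scheduling parameter passed to requestIdleCallback
}

interface FeatureSupport {
  idleCallback: boolean;
  webWorker: boolean;
  networkInfo: boolean;
}

function detectFeatures(): FeatureSupport {
  return {
    idleCallback: typeof (window as any).requestIdleCallback === 'function',
    webWorker: typeof Worker !== 'undefined',
    // navigator.connection is a non-standard Network Information API; guard the access
    networkInfo: typeof (navigator as any).connection !== 'undefined',
  };
}

function initDefaultConfig(overrides: Partial<SdkConfig> = {}): SdkConfig {
  return { sampleInterval: 5000, uploadStrategy: 'batch', idleTimeout: 2000, ...overrides };
}

const support = detectFeatures();
const config = initDefaultConfig();
```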
Further, loading a data acquisition logic preset in the front-end monitoring SDK script.
In one embodiment of the application, the data acquisition logic comprises performance data acquisition logic, user behavior data acquisition logic and error log acquisition logic.
In one embodiment, the performance data collection logic includes monitoring the user's page interaction behavior via JavaScript, for example collecting performance metrics such as page load time, resource request time consumption and memory usage by utilizing the Performance API. Further, the FPS (frame rate) and memory occupation of the page are periodically acquired to monitor the page rendering performance.
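The following TypeScript sketch illustrates one possible way to take such a sample with standard browser APIs; note that performance.memory is a non-standard field and the sampling shape shown here is an assumption, not the claimed implementation.

```typescript
// Illustrative sketch: collecting load timing, resource timing and (non-standard) memory usage.
function collectPerformanceSample() {
  const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];
  const resources = performance.getEntriesByType('resource') as PerformanceResourceTiming[];
  // performance.memory is a non-standard, Chromium-only field, hence the guarded cast
  const memory = (performance as any).memory;
  return {
    pageLoadTime: nav ? nav.loadEventEnd - nav.startTime : undefined,
    resourceTimings: resources.map(r => ({ name: r.name, duration: r.duration })),
    usedJSHeapSize: memory ? memory.usedJSHeapSize : undefined,
    collectedAt: Date.now(),
  };
}

// FPS estimate via requestAnimationFrame: count frames rendered over one second.
function sampleFps(onSample: (fps: number) => void): void {
  let frames = 0;
  const start = performance.now();
  function tick(now: number) {
    frames++;
    if (now - start >= 1000) onSample(frames);
    else requestAnimationFrame(tick);
  }
  requestAnimationFrame(tick);
}
```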
In one embodiment, the user behavior data acquisition logic comprises monitoring user interaction events and recording related event information, wherein the user interaction events comprise clicking, scrolling and inputting, and monitoring DOM changes of the page through MutationObserver to acquire loading information of dynamic content.
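A minimal sketch of this behavior collection, assuming an in-memory buffer named behaviorBuffer (a hypothetical structure introduced only for illustration):

```typescript
// Illustrative sketch: recording click/scroll/input events and observing dynamic DOM changes.
type BehaviorEvent = { type: string; target?: string; time: number };
const behaviorBuffer: BehaviorEvent[] = [];

['click', 'scroll', 'input'].forEach(type => {
  window.addEventListener(type, (e) => {
    behaviorBuffer.push({
      type,
      target: e.target instanceof Element ? e.target.tagName : undefined,
      time: Date.now(),
    });
  }, { passive: true, capture: true });
});

// MutationObserver reports dynamically loaded content (added nodes) for the whole document.
const observer = new MutationObserver(mutations => {
  const added = mutations.reduce((n, m) => n + m.addedNodes.length, 0);
  if (added > 0) behaviorBuffer.push({ type: 'dom-mutation', time: Date.now() });
});
observer.observe(document.documentElement, { childList: true, subtree: true });
```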
In one embodiment, the error log collection logic includes collecting global errors and uncaught Promise exceptions through page-level listeners so as to capture runtime errors. Further, network request error information and resource loading failure information can be acquired as required so as to diagnose application performance problems.
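One possible sketch of such listeners is shown below; the errorBuffer structure is hypothetical, and network request errors (for example by wrapping fetch) are omitted for brevity.

```typescript
// Illustrative sketch: capturing global errors, uncaught Promise rejections and resource failures.
type ErrorRecord = { kind: string; message: string; source?: string; time: number };
const errorBuffer: ErrorRecord[] = [];

// Runtime script errors and resource loading failures both fire 'error';
// the capture phase is required because resource error events do not bubble.
window.addEventListener('error', (event: ErrorEvent) => {
  const target = event.target;
  if (target instanceof HTMLImageElement || target instanceof HTMLScriptElement || target instanceof HTMLLinkElement) {
    errorBuffer.push({
      kind: 'resource',
      message: 'resource load failed',
      source: (target as any).src || (target as any).href,
      time: Date.now(),
    });
  } else {
    errorBuffer.push({ kind: 'runtime', message: event.message, source: event.filename, time: Date.now() });
  }
}, true);

window.addEventListener('unhandledrejection', (event) => {
  errorBuffer.push({ kind: 'promise', message: String(event.reason), time: Date.now() });
});
```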
Step 102: performing intelligent data acquisition through the idle time data scheduling mechanism preset in the front-end monitoring SDK script when the Web application page runs.
In one embodiment of the application, the performance data collection and the user behavior data collection are pre-designated as low-priority tasks, and the error log collection is pre-designated as a high-priority task. Performing intelligent data acquisition through the idle time data scheduling mechanism preset in the front-end monitoring SDK script specifically comprises: when the requestIdleCallback function is available, detecting the idle time of the browser, temporarily storing the performance data and user behavior data collected by the low-priority tasks in memory when the browser is busy, and reading the performance data and user behavior data when the browser is idle; and when the requestIdleCallback function is unavailable, dynamically adjusting the delay time for executing the high-priority task or the low-priority tasks through a preset setTimeout scheduling scheme based on the real-time rendering load of the page.
In one embodiment, the idle time data scheduling mechanism may include intelligent task scheduling, data batch processing, adaptive idle time scheduling.
The intelligent task scheduling comprises idle detection, task allocation and task priority management.
Specifically, first, requestIdleCallback is used to detect the idle time of the browser, and the data acquisition, preprocessing and uploading tasks are intelligently scheduled. This mechanism ensures that data processing tasks are only performed when the main thread is idle, avoiding blocking user interactions. Further, if requestIdleCallback is not available, the scheduling scheme is automatically downgraded to a setTimeout-based scheme, and the delay time of task execution is dynamically adjusted according to the rendering load of the current page, so that data processing does not conflict with page rendering. Further, tasks are classified by priority: high-priority tasks (e.g., error logs, page crash data) are collected and uploaded immediately upon detection of a critical event, and low-priority tasks (e.g., performance data, user behavior records) are processed in bulk during idle time. Further, the priority of each task is dynamically adjusted by the scheduler, so that the frequency of data processing tasks is reduced while the user is actively operating, and the response speed of the page is prioritized.
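The sketch below illustrates one way such a scheduler could look, assuming a simple two-level priority queue; the busy-detection heuristic in the setTimeout fallback is a hypothetical example rather than the claimed logic.

```typescript
// Illustrative sketch: idle-time scheduler that degrades to setTimeout when
// requestIdleCallback is unavailable. The delay heuristic is a hypothetical example.
type Task = { priority: 'high' | 'low'; run: () => void };
const taskQueue: Task[] = [];

function enqueue(task: Task): void {
  if (task.priority === 'high') {
    task.run();                 // high-priority tasks (e.g. error logs) run immediately
  } else {
    taskQueue.push(task);       // low-priority tasks wait for idle time
    scheduleFlush();
  }
}

function scheduleFlush(): void {
  if (typeof (window as any).requestIdleCallback === 'function') {
    (window as any).requestIdleCallback((deadline: IdleDeadline) => {
      while (taskQueue.length > 0 && deadline.timeRemaining() > 1) {
        taskQueue.shift()!.run();
      }
      if (taskQueue.length > 0) scheduleFlush();   // not finished: wait for the next idle period
    }, { timeout: 2000 });
  } else {
    // Fallback: use a longer delay when the page is presumed busy (hypothetical heuristic).
    const busy = document.visibilityState === 'visible' && taskQueue.length > 20;
    setTimeout(() => taskQueue.splice(0).forEach(t => t.run()), busy ? 1000 : 200);
  }
}
```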
The data batch processing comprises batch acquisition and merging, and task throttling and debouncing.
Specifically, first, low-priority performance data and user behavior data are temporarily stored in memory and processed in a unified manner when the browser is idle, so that the impact of frequent acquisition operations on page performance is reduced. Further, the acquired data packets are merged through a memory caching mechanism and uploaded in batches when the browser is idle, which reduces the network request frequency and the server pressure. Further, throttling and debouncing strategies are combined to control the data acquisition frequency: the throttling mechanism ensures that the data processing and uploading tasks are executed at most once within a set time window, and the debouncing mechanism avoids high-frequency data acquisition within a short time and reduces resource consumption. Further, when page interaction is frequent or the CPU utilization is too high, low-priority tasks are automatically suspended and resumed after the system returns to an idle state, so that page smoothness is not affected during data acquisition.
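A short sketch of generic throttle and debounce helpers, with the time windows chosen purely for illustration:

```typescript
// Illustrative sketch: throttle and debounce helpers used to bound the acquisition frequency.
function throttle<T extends (...args: any[]) => void>(fn: T, windowMs: number): T {
  let last = 0;
  return function (this: unknown, ...args: any[]) {
    const now = Date.now();
    if (now - last >= windowMs) { last = now; fn.apply(this, args); }  // at most once per window
  } as T;
}

function debounce<T extends (...args: any[]) => void>(fn: T, waitMs: number): T {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return function (this: unknown, ...args: any[]) {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), waitMs);  // only the last call of a burst executes
  } as T;
}

// Example: scroll positions are sampled at most once per second; input is recorded
// only after the user pauses typing for 300 ms (both intervals are illustrative).
window.addEventListener('scroll', throttle(() => { /* record scroll sample */ }, 1000), { passive: true });
window.addEventListener('input', debounce(() => { /* record input sample */ }, 300));
```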
The adaptive idle time scheduling comprises intelligent scheduling based on user behaviors and dynamic adjustment of task time slices. First, by monitoring the user's activity state (such as mouse movement, keyboard input and scrolling), the frequency of data acquisition tasks is reduced while the user is active and automatically increased when the user is inactive (for example, when the page has been static for more than a certain time). Further, the scheduling strategy is adaptively adjusted by analyzing the user behavior pattern. For example, when the user switches to the background or the page is not visible (detected through document.visibilityState), data uploading is accelerated to ensure that data synchronization is completed while the page is idle. Further, based on the deadline parameter of requestIdleCallback, the time slice length of each task execution is dynamically adjusted: more data processing operations are performed when deadline.timeRemaining() is longer, while high-priority tasks are processed preferentially when the idle time is insufficient and low-priority tasks are deferred to the next idle period. Further, by adjusting the execution time slices of the tasks, the use of system resources is optimized and a balance between data acquisition and page rendering is achieved.
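The following sketch shows one possible activity-aware adaptation, assuming a flushPendingData routine exists elsewhere in the SDK; the interval values are hypothetical.

```typescript
// Illustrative sketch: adapting the acquisition interval to user activity and flushing
// pending data early when the page is hidden. Interval values are hypothetical.
let lastActivity = Date.now();
['mousemove', 'keydown', 'scroll'].forEach(type =>
  window.addEventListener(type, () => { lastActivity = Date.now(); }, { passive: true }));

function currentSampleInterval(): number {
  const idleFor = Date.now() - lastActivity;
  return idleFor > 10_000 ? 2000 : 8000;   // sample more often while the user is inactive
}

document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'hidden') {
    flushPendingData();   // page moved to the background: finish synchronization early
  }
});

declare function flushPendingData(): void;  // assumed to be provided elsewhere in the SDK
```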
In one embodiment of the application, after intelligent data acquisition is performed through the idle time data scheduling mechanism preset in the front-end monitoring SDK script, the method further comprises identifying sensitive data in the acquired data and performing desensitization processing on the sensitive data based on a preset desensitization rule.
Step 103: determining the data to be uploaded through the incremental data uploading mechanism preset in the front-end monitoring SDK script, and intelligently compressing the data to be uploaded.
In one embodiment of the application, after intelligent data acquisition is performed by a preset idle time data scheduling mechanism in the front-end monitoring SDK script, the data to be uploaded is determined by an incremental data uploading mechanism preset in the front-end monitoring SDK script, and the data to be uploaded is intelligently compressed.
Specifically, the method comprises: calculating hash values of the previously and currently collected data of the same type by utilizing a hash algorithm; when the hash values are different, determining that the currently collected data is the data to be uploaded this time; performing standardization processing on the data to be uploaded based on a preset preprocessing rule; determining the real-time network condition and dynamically selecting a corresponding compression algorithm to be applied based on the real-time network condition, wherein the real-time network condition comprises a network type, an effective bandwidth and a delay; and compressing the standardized data to be uploaded based on the compression algorithm to be applied.
In one embodiment, determining the data to be uploaded and intelligently compressing the data to be uploaded may include incremental data acquisition and deduplication, and data compression and lightweight transmission.
The incremental data acquisition and deduplication comprises data change detection and deduplication, and data slicing and batch processing.
Specifically, first, a hash algorithm is used to generate a unique identifier (hash value) for the collected data, and the hash value of the current data is compared with the hash value of the last uploaded data before each upload, so that only the changed data is uploaded and redundant data transmission is reduced. Further, a memory cache stores the snapshot of the data uploaded last time, new data items are screened out through the data comparison mechanism, and an upload operation is triggered only when the data changes, thereby avoiding repeated uploading. Further, the collected data is sliced and classified according to data type (such as performance indexes, user behaviors and error logs), and a hash is generated independently for incremental detection of each type of data. Further, the performance data and the user behavior data are merged and uploaded in batches to reduce the number of network requests, thereby optimizing bandwidth usage.
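A minimal sketch of per-type incremental detection using the Web Crypto API (which requires a secure context); the function and map names are hypothetical.

```typescript
// Illustrative sketch: per-type incremental detection by comparing the hash of the
// current payload with the hash of the last uploaded payload of the same type.
const lastUploadedHash = new Map<string, string>();   // dataType -> hash of last upload

async function sha256Hex(text: string): Promise<string> {
  const digest = await crypto.subtle.digest('SHA-256', new TextEncoder().encode(text));
  return Array.from(new Uint8Array(digest)).map(b => b.toString(16).padStart(2, '0')).join('');
}

async function pickChangedData(dataType: string, payload: unknown): Promise<unknown | null> {
  const hash = await sha256Hex(JSON.stringify(payload));
  if (lastUploadedHash.get(dataType) === hash) return null;  // unchanged: skip the upload
  lastUploadedHash.set(dataType, hash);
  return payload;                                            // changed: this is the data to upload
}
```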
The data compression and lightweight transmission comprises intelligent compression algorithm selection and lightweight data structure.
Specifically, the compression algorithm is first selected dynamically according to the current network environment. For example, Gzip is used for fast compression in high-bandwidth environments, and Brotli is used in low-bandwidth environments to increase the compression ratio and reduce the packet volume. Further, the data volume is reduced by preprocessing the data content (such as removing blank fields and replacing lengthy fields with short labels), so that the transmission efficiency is optimized. Further, a lightweight data format (such as Protocol Buffers or MessagePack) is adopted to replace the traditional JSON format, which reduces the cost of data serialization and deserialization and improves the compression efficiency. Further, the log data and performance indexes are structurally optimized so that only key information fields are retained and redundant information is deleted, reducing the data load.
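The sketch below shows one possible selection path. Modern browsers expose gzip/deflate natively through CompressionStream, while Brotli compression is not natively available and would require a third-party (e.g. WASM) library, so that branch is only a placeholder; the thresholds are hypothetical.

```typescript
// Illustrative sketch: choosing a compression path from the effective connection type.
async function gzipCompress(text: string): Promise<Uint8Array> {
  const stream = new Blob([text]).stream().pipeThrough(new CompressionStream('gzip'));
  return new Uint8Array(await new Response(stream).arrayBuffer());
}

async function compressForNetwork(payload: string): Promise<{ body: Uint8Array; encoding: string }> {
  const conn = (navigator as any).connection;            // non-standard API, may be undefined
  const effectiveType: string = conn?.effectiveType ?? '4g';
  if (effectiveType === '4g') {
    return { body: await gzipCompress(payload), encoding: 'gzip' };   // fast compression path
  }
  // Low-bandwidth branch: a higher-ratio codec (e.g. a Brotli WASM library) could be plugged in here;
  // gzip is used as a stand-in so the sketch stays self-contained.
  return { body: await gzipCompress(payload), encoding: 'gzip' };
}
```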
Step 104: dynamically selecting a data uploading strategy based on the real-time network condition so as to upload the compressed data to be uploaded to a data server.
In one embodiment of the application, after the data to be uploaded is determined through the incremental data uploading mechanism preset in the front-end monitoring SDK script and is intelligently compressed, a data uploading strategy is dynamically selected based on the real-time network condition so as to upload the compressed data to be uploaded to the data server.
Specifically, the method comprises: analyzing whether the real-time network condition meets a real-time uploading condition; if so, uploading the compressed data to be uploaded to the data server in real time; and if not, dynamically configuring a delayed uploading frequency and a batch size so as to upload the compressed data to be uploaded to the data server in batches.
In one embodiment, uploading the data to the data server may employ adaptive uploading strategies including upload optimization based on the network state, user behavior awareness and intelligent scheduling, intelligent upload scheduling and task management, and resumable upload and retry mechanisms.
The uploading optimization based on the network state comprises network state detection, bandwidth and delay sensing.
Specifically, first, the current network state of the device (such as network type, effective bandwidth and delay) is obtained through navigator.connection, and the uploading frequency and batch size are adjusted according to the network conditions. In high-speed networks (e.g., Wi-Fi and 4G), data is uploaded in real time; in low-speed or high-latency environments (e.g., 3G, EDGE), batch uploads are employed to reduce frequent requests. Further, when the user is detected to have switched to a weak network environment, the SDK automatically switches to a delayed uploading mode, the collected data is temporarily stored locally, and the data is uploaded in batches when the network recovers.
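One way this decision could be sketched is shown below; navigator.connection is a non-standard API and the bandwidth/latency thresholds are hypothetical.

```typescript
// Illustrative sketch: reading the (non-standard) Network Information API and switching
// between real-time and delayed/batched upload. Threshold values are hypothetical.
type UploadMode = 'realtime' | 'batched';

function chooseUploadMode(): UploadMode {
  const conn = (navigator as any).connection;
  if (!conn) return 'batched';                              // API unsupported: be conservative
  const slowType = ['slow-2g', '2g', '3g'].includes(conn.effectiveType);
  const lowBandwidth = typeof conn.downlink === 'number' && conn.downlink < 1;   // Mbit/s
  const highLatency = typeof conn.rtt === 'number' && conn.rtt > 500;            // ms
  return slowType || lowBandwidth || highLatency ? 'batched' : 'realtime';
}

// Re-evaluate the strategy whenever the connection changes (e.g. Wi-Fi -> cellular).
(navigator as any).connection?.addEventListener?.('change', () => {
  const mode = chooseUploadMode();
  console.debug('[monitor-sdk] upload mode switched to', mode);
});
```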
The user behavior awareness and intelligent scheduling comprise upload optimization based on user activity and coordination with page lifecycle events.
Specifically, the active state of the user (such as mouse and keyboard operations and page scrolling) is first monitored; the data uploading frequency is reduced while the user is active so as to prioritize the response speed of the page, and the uploading frequency is increased or batch data uploading is executed when the user is inactive or the page is in the background (detected through document.visibilityState). Further, when the page is detected to have switched to the background (for example, when the user switches tabs), a batch upload is automatically triggered once, so that data synchronization is completed using the idle time. Further, in connection with page lifecycle events (e.g., beforeunload and pagehide), critical data (e.g., error logs and performance data not yet uploaded) are quickly uploaded when the user closes or refreshes the page, ensuring that data is not lost because the page is closed.
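A minimal sketch of flushing critical data on page-lifecycle events; the endpoint URL is a placeholder, and sendBeacon is used because it keeps working while the page is being unloaded.

```typescript
// Illustrative sketch: flushing critical data on pagehide/visibilitychange via sendBeacon.
// The endpoint URL is a hypothetical placeholder.
const REPORT_URL = 'https://example.com/collect';

function flushCritical(payload: object): void {
  const body = JSON.stringify(payload);
  if (navigator.sendBeacon && navigator.sendBeacon(REPORT_URL, body)) return;
  // Fallback: a keepalive fetch is also allowed during unload in modern browsers.
  fetch(REPORT_URL, { method: 'POST', body, keepalive: true }).catch(() => {});
}

// pagehide tends to be more reliable than beforeunload on mobile browsers.
window.addEventListener('pagehide', () => flushCritical({ reason: 'pagehide' }));
document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'hidden') flushCritical({ reason: 'hidden' });
});
```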
The intelligent uploading scheduling and task management comprises a multi-level uploading strategy and self-adaptive task time slice adjustment.
Specifically, a multi-level uploading mode is first realized: high-priority data (such as error logs and page crash reports) are uploaded immediately when detected, and low-priority data (such as user behavior data and performance indexes) are uploaded in batches through intelligent scheduling. Further, a dynamic uploading time window is set, and the uploading frequency is adjusted according to page load and system resource conditions. For example, when high CPU usage is detected, data uploading is temporarily delayed to ensure a smooth user experience. Further, within requestIdleCallback, the time slice length of the upload task is dynamically adjusted: when the browser has more idle time, more data packets are processed for uploading; when the idle time is insufficient, high-priority data is uploaded first and the processing of low-priority data is deferred.
The resumable upload and retry mechanism comprises data caching and resumable upload, and an intelligent retry and exponential backoff strategy.
Specifically, the data packets that have not been uploaded are first temporarily cached through IndexedDB or localStorage. When an upload fails, the unfinished data is automatically stored, and uploading continues from the breakpoint after the network recovers, so that data loss is avoided. Further, a unique upload identifier is set for each data packet, so that the incomplete part can be rapidly located during resumable transmission, improving upload efficiency. Further, an upload retry mechanism is realized: when data uploading fails, the retry interval is adjusted based on an exponential backoff algorithm, avoiding network congestion caused by frequent retries. Further, a maximum number of retries is set to prevent excessive consumption of system resources, and after multiple failed retries the server is notified to record the abnormal state for subsequent analysis.
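A compact sketch of caching a failed batch and retrying with exponential backoff, using localStorage for brevity (IndexedDB would work the same way conceptually); the storage key, retry cap and URL are hypothetical.

```typescript
// Illustrative sketch: cache a failed batch locally and retry with exponential backoff.
const PENDING_KEY = 'monitor-sdk:pending';   // hypothetical storage key
const MAX_RETRIES = 5;

async function uploadWithRetry(url: string, body: string, attempt = 0): Promise<void> {
  try {
    const res = await fetch(url, { method: 'POST', body });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    localStorage.removeItem(PENDING_KEY);                      // success: clear the local cache
  } catch {
    localStorage.setItem(PENDING_KEY, body);                   // keep the batch so it can be resumed
    if (attempt >= MAX_RETRIES) return;                        // give up after the retry cap
    const delay = Math.min(30_000, 1000 * 2 ** attempt);       // 1s, 2s, 4s, ... capped at 30s
    setTimeout(() => uploadWithRetry(url, body, attempt + 1), delay);
  }
}

// On startup, resume any batch left over from a previous failed upload.
const pending = localStorage.getItem(PENDING_KEY);
if (pending) uploadWithRetry('https://example.com/collect', pending);
```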
In one embodiment of the application, the method further comprises asynchronous processing of data acquisition, data scheduling, data compression and data uploading by using the Web Worker.
In one embodiment, the asynchronous processing may include task separation through the Web Worker, asynchronous data processing and queue management, and asynchronous data compression and optimization.
The task separation of the Web Worker comprises task modularization and parallel processing, and Web Worker pool management.
Specifically, first, tasks such as data acquisition, preprocessing, compression and uploading are divided into several independent modules, each processed asynchronously by a Web Worker, so that the data acquisition tasks do not block the main thread. Further, high-priority tasks (such as error logs and page crash data) and low-priority tasks (such as performance data and user behavior records) are assigned to different Workers for parallel processing, improving data processing efficiency. Further, a Web Worker pool mechanism is realized: the number of Workers is dynamically allocated according to the task load, avoiding the memory occupation and thread contention caused by creating too many Workers. Further, under high load, Workers are preferentially allocated to key tasks, and unnecessary Workers are released after the task peak so as to optimize memory usage.
The asynchronous data processing and queue management comprises message queue and batch data processing, data caching and slicing processing.
Specifically, the main thread and the Web Worker communicate through a message queue, and the collected data is added to the queue according to priority. When the Worker is idle, data is fetched from the queue in batches for processing, reducing the overhead of frequent data transfers. Further, to avoid memory overflow, an upper limit is set on the queue length: when the queue is nearly full, high-priority tasks are processed first and low-priority tasks are deferred to the next idle period. Further, large data packets are fragmented (chunking) in the Worker, and multiple data fragments are processed in parallel to accelerate compression and uploading. Further, the data blocks to be processed are temporarily stored through a memory caching mechanism, ensuring stable data acquisition and uploading even during task peaks.
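A minimal sketch of the main-thread side of such queuing; the worker file name, message shape and batch size are hypothetical.

```typescript
// Illustrative sketch: posting prioritized batches from the main thread to a Web Worker.
// The worker file name and message shape are hypothetical.
type WorkerTask = { priority: 'high' | 'low'; records: unknown[] };

const worker = new Worker('/monitor-worker.js');
const queue: WorkerTask[] = [];
let workerBusy = false;

function submit(task: WorkerTask): void {
  if (task.priority === 'high') queue.unshift(task); else queue.push(task);
  pump();
}

function pump(): void {
  if (workerBusy || queue.length === 0) return;
  workerBusy = true;
  // Drain several queued tasks into one message to reduce transfer overhead.
  const batch = queue.splice(0, 10);
  worker.postMessage({ type: 'process', batch });
}

worker.onmessage = (e: MessageEvent) => {
  if (e.data?.type === 'done') { workerBusy = false; pump(); }
};
```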
Asynchronous data compression and optimization comprises non-blocking data compression, data encryption and privacy protection.
Specifically, the collected data is first compressed (for example, with Gzip or Brotli) in the Web Worker, ensuring that the compression process does not block the main thread and improving the page rendering speed. Further, different compression strategies are selected according to the data type: a fast compression algorithm is used for text data, and a high-compression-ratio algorithm is adopted for performance log data so as to reduce the amount of uploaded data. Further, data encryption (AES encryption) is performed in the Worker, so that sensitive data is encrypted before transmission and data security is improved. Further, all encryption and compression processing is completed in the Worker, preventing the main thread from handling a large amount of computation and reducing the page response time.
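A possible worker-side sketch (for example, the hypothetical monitor-worker.js referenced earlier) combining gzip compression and AES-GCM encryption off the main thread; key management is out of scope, so a per-session key is generated purely for illustration.

```typescript
// Illustrative worker-side sketch: compress and AES-GCM encrypt a batch off the main thread.
// A per-session key is generated only for illustration; real key management is not shown.
let sessionKey: CryptoKey | undefined;

async function getKey(): Promise<CryptoKey> {
  if (!sessionKey) {
    sessionKey = await crypto.subtle.generateKey({ name: 'AES-GCM', length: 256 }, true, ['encrypt']);
  }
  return sessionKey;
}

async function compressAndEncrypt(text: string): Promise<{ iv: Uint8Array; data: ArrayBuffer }> {
  const gzipped = new Blob([text]).stream().pipeThrough(new CompressionStream('gzip'));
  const plain = await new Response(gzipped).arrayBuffer();
  const iv = crypto.getRandomValues(new Uint8Array(12));        // fresh IV per message
  const data = await crypto.subtle.encrypt({ name: 'AES-GCM', iv }, await getKey(), plain);
  return { iv, data };
}

// self refers to the DedicatedWorkerGlobalScope; the encrypted buffer is transferred, not copied.
self.onmessage = async (e: MessageEvent) => {
  const payload = await compressAndEncrypt(JSON.stringify(e.data.batch));
  (self as any).postMessage({ type: 'done', payload }, [payload.data]);
};
```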
Fig. 2 is a flowchart of an implementation of intelligent data acquisition based on a front-end monitoring SDK according to the present application. The above-described embodiments of the present application may be embodied in this flowchart.
The above is a method embodiment of the present application. Based on the same inventive concept, the embodiment of the application also provides intelligent data acquisition equipment based on the front-end monitoring SDK, and the structure of the intelligent data acquisition equipment is shown in fig. 3.
Fig. 3 is a schematic diagram of an internal structure of an intelligent data acquisition device based on a front-end monitoring SDK according to an embodiment of the present application. As shown in fig. 3, the apparatus includes:
at least one processor 301;
and a memory 302 communicatively coupled to the at least one processor;
Wherein the memory 302 stores instructions executable by the at least one processor, the instructions being executable by the at least one processor 301 to enable the at least one processor 301 to:
When a Web application page is loaded, introducing a front-end monitoring SDK script through an external file tag, and loading data acquisition logic preset in the front-end monitoring SDK script, wherein the data acquisition logic comprises performance data acquisition logic, user behavior data acquisition logic and error log acquisition logic;
When a Web application page runs, intelligent data acquisition is performed through a preset idle time data scheduling mechanism in the front-end monitoring SDK script;
Determining data to be uploaded through an incremental data uploading mechanism preset in the front-end monitoring SDK script, and intelligently compressing the data to be uploaded;
And dynamically selecting a data uploading strategy based on the real-time network condition so as to upload the compressed data to be uploaded to a data server.
Some embodiments of the present application provide a non-volatile computer storage medium corresponding to the intelligent data collection based on the front-end monitoring SDK of fig. 1, storing computer executable instructions configured to:
When a Web application page is loaded, introducing a front-end monitoring SDK script through an external file tag, and loading data acquisition logic preset in the front-end monitoring SDK script, wherein the data acquisition logic comprises performance data acquisition logic, user behavior data acquisition logic and error log acquisition logic;
When a Web application page runs, intelligent data acquisition is performed through a preset idle time data scheduling mechanism in the front-end monitoring SDK script;
Determining data to be uploaded through an incremental data uploading mechanism preset in the front-end monitoring SDK script, and intelligently compressing the data to be uploaded;
And dynamically selecting a data uploading strategy based on the real-time network condition so as to upload the compressed data to be uploaded to a data server.
The embodiments of the present application are described in a progressive manner, identical or similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, for the device and medium embodiments, since they are substantially similar to the method embodiment, the description is relatively brief, and for relevant points reference may be made to the description of the method embodiment.
The device, the medium and the method provided by the embodiments of the application correspond to one another, so the device and the medium also have beneficial technical effects similar to those of the corresponding method. Since the beneficial technical effects of the method have been described in detail above, the beneficial technical effects of the device and the medium are not repeated here.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM) and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.