Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with aspects of the application as detailed in the accompanying claims.
As shown in FIG. 1, a related data management system generally comprises an interface layer 110, a query engine 120, an execution engine 130 and a storage engine 140. The interface layer 110 is used for acquiring a query language input by a user, the query engine 120 is used for translating the query language into an abstract syntax tree and generating a query plan, the execution engine 130 is used for the specific execution of the query plan, and the storage engine 140 is used for accessing data and managing transactions during execution; these modules are tightly coupled together to form the data management system. Although most data management systems have similar logical components, existing database management systems are developed and maintained as monolithic wholes, and each generally uses its own query language or private interface to perform data creation, deletion, update and query operations. For example, a relational database uses the standard SQL query language to perform data operations, a NoSQL database uses a private query language or command interface to flexibly adapt to various data models, and a graph database uses a graph query language to perform graph data operations. Because the query language or interface of each database management system is optimized for its own data model and architecture, their data manipulation operations are typically not interoperable. This fragmentation has at least the following problems:
1) Developers need to repeatedly reinvent the wheel across different systems, increasing development and maintenance costs;
2) The lack of unified standards between different data management systems means that users must learn and adapt to a variety of incompatible Structured Query Language (SQL) and non-SQL dialects, thereby increasing users' cognitive burden and learning cost;
3) Functions and semantics are inconsistent among different systems, so users must continuously adapt to different behaviors and characteristics when using multiple systems, reducing working efficiency.
Because each system has its own Application Programming Interfaces (APIs) and characteristics, users need to spend a great deal of time learning and mastering the usage of different systems. Meanwhile, because of the lack of reusability among the modules of different systems, the development of a new system must start from scratch, which leads to overlong development cycles and makes it difficult to rapidly roll out new functions and improvements. In order to rapidly roll out prototypes, developers often sacrifice the stability and maintainability of code, leading to the continuous accumulation of technical debt. Furthermore, because of the high fragmentation of data management systems, hardware suppliers have difficulty optimizing for specific data processing requirements, which leads to lower collaborative efficiency between hardware and software.
In view of the above-mentioned problems in the data processing process, an embodiment of the present application provides a data processing method, which converts a query language into a data query request representation, and generates a query plan representation according to the data query request representation, so that the query language and execution logic can be decoupled, and the same type of query language can be executed using different execution logic, or the same execution logic can process different types of query languages, thereby improving reusability of query execution.
Referring to FIG. 2, FIG. 2 shows a flow chart of a data processing method according to some embodiments of the present application. The execution subject of the method may be a terminal device or a server, where the terminal device may be a device such as a personal computer, or a mobile terminal device such as a mobile phone or a tablet computer, and the terminal device may be a terminal device used by a user. The server may be a stand-alone server or a server cluster composed of a plurality of servers, and the server may be a background server of a certain service or of a certain platform (for example, a data management system, a data processing platform, a query system, etc.). In the embodiments of the present application, the execution subject is taken as a server for illustration; for the case of a terminal device, processing may be performed according to the following related content, which is not described herein. As shown in the figure, the data processing method 200 may include the following steps:
Step 201, converting the acquired query language into a data query request representation according to at least one query clause.
The query languages in the above step 201 include a structured query language (Structured Query Language, SQL), a graph query language (Graph Query Language, e.g., GraphQL), and a natural language. The query clauses in the above step 201 include a clause select_fields (SELECT statement) for indicating the fields returned by the query, a clause from_tables (FROM statement) for indicating the tables from which data is sourced, a clause join_conditions (JOIN statement) for indicating table join conditions, a clause where_conditions (WHERE statement) for indicating the filtering conditions of the query, a clause group_by (GROUP BY statement) for indicating the grouping fields, a clause having_conditions (HAVING statement) for indicating the filtering conditions applied after grouping, a clause order_by (ORDER BY statement) for indicating the sorting rule, a clause limit (LIMIT statement) for indicating the row-number limit of the query result, and a clause offset (OFFSET statement) for indicating the row offset of the query result.
In a specific implementation, as shown in FIG. 3, the interface layer 310 of the data management system may include various external interfaces, including SQL, GraphQL, Pandas, REST interfaces, natural language interfaces, and the like, and the interface layer 310 obtains various query languages. For example, the natural language input by the user is "find 10 books related to natural language processing, arranged in descending order of publishing time". If the query is performed using the natural language interface, the natural language may be converted into an interface intermediate representation (Intermediate Representation, IR), and a trained model may be adopted to perform the conversion, the prompt being of the form {convert "find 10 books related to natural language processing, arranged in descending order of publishing time" into the interface IR format}. If the query is performed using the SQL interface, the following statement is used:
SQL statement:
SELECT title, author, publish_date
FROM books
WHERE topic = 'natural language processing'
ORDER BY publish_date DESC
LIMIT 10;
According to at least one preset query clause, e.g., select_fields, from_tables, where_conditions, etc., the acquired query language (SQL, etc.) is converted into a more abstract intermediate representation containing mainly the basic information of the query, e.g., the select fields, the where conditions, the orderBy ordering, the limit, and the like.
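As a minimal sketch of this conversion (not the application's actual implementation — the field names and structure of the interface IR below are assumptions for illustration), the example SQL query on the `books` table might be represented as follows:

```python
# Hypothetical interface IR for the example query; key names such as
# "select_fields" and "where_conditions" mirror the query clauses in the text.
def sql_to_interface_ir() -> dict:
    """Build the interface IR for the example query on the `books` table."""
    return {
        "select_fields": ["title", "author", "publish_date"],
        "from_tables": ["books"],
        "where_conditions": [
            {"field": "topic", "op": "=", "value": "natural language processing"}
        ],
        "order_by": [{"field": "publish_date", "direction": "DESC"}],
        "limit": 10,
    }

ir = sql_to_interface_ir()
print(ir["limit"])        # 10
print(ir["from_tables"])  # ['books']
```

Because this representation carries only the query's basic information, the same IR could be produced from SQL, GraphQL, or a natural language interface.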
In this way, a query language entered by a user may be converted by at least one query clause into a machine-understandable data query request representation for subsequent execution of data processing operations corresponding to the query language.
Step 202, generating a query plan representation according to the data query request representation and at least one execution operator.
The execution operators in the above step 202 include an operator Scan for indicating a scanning operation, an operator Filter for indicating a filtering operation, an operator Projection for indicating a projection operation, an operator Join for indicating a join operation, an operator Aggregation for indicating an aggregation operation, an operator Sort for indicating a sorting operation, an operator Limit for indicating a limiting operation, and an operator Union for indicating a union operation.
In a specific implementation, a query plan representation is generated from the data query request representation converted in step 201 and the execution operators Scan, Filter, Projection, etc. The query plan representation includes specific operators, e.g., Scan, Sort, Limit, etc., that describe the execution order of the query.
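A hedged sketch of this derivation is shown below: it maps the interface IR of the example query to an ordered operator pipeline. The operator names follow the text (Scan, Filter, Project, Sort, Limit); the construction logic itself is an illustrative assumption.

```python
# Derive a sequence of (operator, argument) pairs from an interface IR dict.
def ir_to_operators(ir: dict) -> list:
    ops = []
    for table in ir.get("from_tables", []):
        ops.append(("Scan", table))            # read the source table(s)
    if ir.get("where_conditions"):
        ops.append(("Filter", ir["where_conditions"]))
    if ir.get("select_fields"):
        ops.append(("Project", ir["select_fields"]))
    if ir.get("order_by"):
        ops.append(("Sort", ir["order_by"]))
    if ir.get("limit") is not None:
        ops.append(("Limit", ir["limit"]))
    return ops

example_ir = {
    "from_tables": ["books"],
    "where_conditions": [("topic", "=", "natural language processing")],
    "select_fields": ["title", "author", "publish_date"],
    "order_by": [("publish_date", "DESC")],
    "limit": 10,
}
pipeline = ir_to_operators(example_ir)
print([name for name, _ in pipeline])  # ['Scan', 'Filter', 'Project', 'Sort', 'Limit']
```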
Thus, by further translating the data query request representation into a query plan representation, the hierarchical design of the assemblies in the query plan representation can ensure flexibility and extensibility of the query while ensuring maximization of execution efficiency.
Step 203, obtaining a data processing result by executing a data processing operation corresponding to the query plan representation, wherein the data processing operation comprises a data writing operation and a data query operation.
In a specific implementation, the execution engine may be invoked to perform a data processing operation corresponding to the query plan representation, and the data processing result output by the execution engine is obtained. As shown in FIG. 3, the execution engine 330 may include Velox, Spark, Ray, PostgreSQL, and Flink, where the application scenarios of the different execution engines differ. For example, Velox focuses on being a query engine and is suitable for large-scale data analysis; Spark is a powerful big data analysis framework suitable for batch processing and stream processing; Ray focuses on distributed computing, especially machine learning and deep learning tasks; PostgreSQL is a powerful relational database suitable for complex database applications; and Flink focuses on stream processing and is suitable for real-time data computing and event-driven applications. Because the embodiments of the present application convert the query language input by the user into a unified query plan representation through the query clauses and the execution operators, the unified query plan representation can be adapted to various execution engines.
Through the steps, the query language is converted into the data query request representation, and the query plan representation is generated according to the data query request representation, so that the query language and the execution logic can be decoupled, the same type of query language can be executed by using different execution logic, or the same execution logic can process different types of query languages, and the reusability of query execution is improved.
In some embodiments, in step 201, converting the acquired query language into the data query request representation according to the at least one query clause includes:
The method comprises the steps of obtaining a query language, generating a structured representation corresponding to the query language by analyzing the query language, mapping the structured representation to at least one query clause, and generating a data query request representation.
In a specific implementation, a user-entered query language is obtained through the interface layer 310 of the data management system, where the interface layer 310 includes interfaces for SQL, GraphQL, natural language, and the like. The query language is structured by parsing it, e.g., representing the query language as an expression tree containing function calls, table references, constants, and various operator operations, e.g., filtering, projection, sorting, joining, aggregation, window functions, shuffling/repartitioning, and the like. The structured representation is then mapped to query clauses such as select_fields, from_tables, where_conditions, etc., generating a data query request representation containing the basic information of the query, e.g., the select fields, the where conditions, the orderBy ordering, the limit, and the like.
In some possible implementations, the generating the structured representation corresponding to the query language by parsing the query language includes:
the method comprises the steps of obtaining a plurality of clauses in a query language, converting query elements in each clause into interface intermediate representations, and recombining the interface intermediate representations according to the dependency relationship among the clauses to generate structural representations corresponding to the query language.
In a specific implementation, the query language includes multiple clauses, for example SELECT, FROM, WHERE, JOIN, etc., multiple clauses in the query language are acquired, query elements such as fields, aggregation functions, expressions, etc. in each clause are converted into interface intermediate representation IR, and the interface IR is recombined according to the dependency relationship among the multiple clauses, so as to generate a structured representation corresponding to the query language. The mapping mode of the query language to the interface IR is shown in the following table:
TABLE 1 mapping method of query language to interface IR
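Table 1 itself is not reproduced here; the sketch below shows one plausible form of the query-language-to-interface-IR mapping it describes. The individual mapping entries are assumptions for illustration, grounded only in the clause names used throughout the text.

```python
# Hypothetical clause-keyword -> interface IR field mapping (illustrative).
CLAUSE_TO_IR = {
    "SELECT":   "select_fields",
    "FROM":     "from_tables",
    "JOIN":     "join_conditions",
    "WHERE":    "where_conditions",
    "GROUP BY": "group_by",
    "HAVING":   "having_conditions",
    "ORDER BY": "order_by",
    "LIMIT":    "limit",
    "OFFSET":   "offset",
}

def map_clause(clause_keyword: str) -> str:
    """Map a SQL clause keyword to its interface IR field name."""
    return CLAUSE_TO_IR[clause_keyword.upper()]

print(map_clause("where"))  # where_conditions
```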
In some embodiments, in step 202, generating the query plan representation according to the data query request representation and the at least one execution operator includes:
The method comprises the steps of generating a query logic plan according to a target query clause in a data query request representation, wherein the target query clause comprises a first query clause used for indicating query intention, a second query clause used for indicating context information and a third query clause used for indicating a query structure, generating an executable physical plan by optimizing the execution efficiency of the query logic plan, and converting the executable physical plan into a query plan representation according to at least one execution operator.
In a specific implementation, as shown in FIG. 3, the query engine 320 may include Calcite, Orca, Presto, PostgreSQL, Flink, or the like. The query engine receives the data query request representation from step 201 described above; it first extracts from the data query request representation a first query clause for indicating query intent, a second query clause for indicating context information, and a third query clause for indicating query structure, e.g., select_fields, join_conditions, where_conditions, order_by, etc., and generates a query logic plan. It then optimizes the execution efficiency of the query logic plan based on time costs and preset rules to generate an executable physical plan, and further converts the executable physical plan into a query plan representation, i.e., a query plan IR, according to execution operators such as Scan, Sort, and Limit. The query plan IR describes the way the query is executed and the optimization strategy, and is composed of a series of operators (Operators) representing the execution steps of the database query. The plan IR is defined using JSON and includes the following fields:
root: the root operator of the query (final output)
operators: each operator involved in query execution
type: the operator type (e.g., Scan, Filter, Sort, Limit, etc.)
input: the upstream operator (i.e., data source) on which the operator depends
output_fields: the fields output by the operator
conditions: filtering, join, and other conditions
order_by: the sorting mode
limit: the row-number limit for the query result
cost: the estimated execution cost of the operator (optional)
parallelism: the parallelism of the operator (optional)
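A hedged example of what a plan IR instance for the example `books` query might look like, using the fields listed above (root, operators, type, input, output_fields, conditions, order_by, limit), is given below. The concrete operator ids and values are illustrative assumptions.

```python
import json

# Illustrative plan IR for the example query; ids such as "scan_1" are made up.
plan_ir = {
    "root": "limit_1",
    "operators": [
        {"id": "scan_1", "type": "Scan", "input": [],
         "output_fields": ["title", "author", "publish_date", "topic"]},
        {"id": "filter_1", "type": "Filter", "input": ["scan_1"],
         "conditions": ["topic = 'natural language processing'"]},
        {"id": "sort_1", "type": "Sort", "input": ["filter_1"],
         "order_by": [["publish_date", "DESC"]]},
        {"id": "limit_1", "type": "Limit", "input": ["sort_1"], "limit": 10},
    ],
}
print(json.dumps(plan_ir, indent=2))
```

Because the plan IR is plain JSON, any execution engine that understands these operator types can consume it.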
The operator types mainly comprise Scan, Filter, Projection, Join, Aggregation, Sort, Limit, and Union, as described in step 202 above.
in some embodiments, in step 203, the obtaining the data processing result by performing the data processing operation corresponding to the query plan representation includes:
the method comprises the steps of obtaining at least one operator node in a query plan representation, distributing execution tasks for each operator node, selecting a target storage engine matched with the execution tasks from a plurality of preset storage engines, adapting the execution tasks to operation requests of the target storage engine, and obtaining data processing results by calling the target storage engine to execute data processing operations corresponding to the operation requests.
In a specific implementation, as shown in FIG. 3, the execution engine 330 may have different choices, including Velox, Spark, Ray, PostgreSQL, Flink, etc., depending on the scenario, and the execution engine 330 is the core component of the data management system responsible for actually executing the query plan and returning the results. It first parses the operator nodes from the query plan IR and assigns execution tasks to each operator. Optionally, the execution engine 330 may manage the execution order and dependency relationships of tasks through a task scheduler, divide tasks into a plurality of subtasks allocated to different computing nodes or threads for parallel processing, sequentially execute operators such as scanning, filtering, joining, sorting, aggregating and limiting, and generate intermediate results by reading data from the storage engine and progressively transforming and processing it. The execution engine 330 also manages memory and intermediate results, and for large-scale data processing may write intermediate results into a disk cache to avoid memory overflow.
In a specific application, in order to improve efficiency, the execution engine 330 may use a vectorization execution technique to process batch data operations as a unit, and may perform performance optimization in combination with JIT compilation, partition parallelism, pipeline optimization, and other policies. During execution, the execution engine 330 may also detect and process runtime errors and provide a fault tolerant mechanism for automatic retries or downgraded processing. Finally, the execution engine 330 performs merging and formatting processing on the intermediate results of all the subtasks to obtain data processing results, and returns the data processing results to the user or the upper-layer application according to the output format specified by the query plan IR.
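A minimal sketch of such an execution loop is shown below. Real engines (Velox, Spark, Flink) use vectorized batches, task schedulers, JIT compilation, and spilling; this toy version, an illustrative assumption, simply applies each plan operator to an in-memory list of row dicts.

```python
# Toy operator-at-a-time executor over a list of row dictionaries.
def execute(rows, operators):
    for op in operators:
        kind = op["type"]
        if kind == "Filter":
            rows = [r for r in rows if op["predicate"](r)]
        elif kind == "Sort":
            rows = sorted(rows, key=op["key"], reverse=op.get("desc", False))
        elif kind == "Limit":
            rows = rows[: op["limit"]]
        elif kind == "Project":
            rows = [{f: r[f] for f in op["fields"]} for r in rows]
    return rows

rows = [{"title": "A", "year": 2021},
        {"title": "B", "year": 2023},
        {"title": "C", "year": 2022}]
plan = [
    {"type": "Sort", "key": lambda r: r["year"], "desc": True},
    {"type": "Limit", "limit": 2},
    {"type": "Project", "fields": ["title"]},
]
print(execute(rows, plan))  # [{'title': 'B'}, {'title': 'C'}]
```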
In this way, efficient execution and stability of the query can be ensured, with good extensibility and flexibility. Support for new heterogeneous hardware can be achieved conveniently by extending the execution engine so that execution tasks (e.g., projection, aggregation, sorting, encoding and decoding, etc.) are partially or completely offloaded to heterogeneous hardware such as GPUs, FPGAs, and DPUs.
As shown in FIG. 3, the storage engine 340 may include different components, such as DuckDB, RocksDB, SQLite, LanceDB, Parquet, etc., depending on the scenario in which the task is executed; the storage engine is the core component of the data management system responsible for persistent storage and efficient access of data. The storage engine 340 supports a variety of storage structures, such as row storage, column storage, and key-value storage, providing corresponding optimization strategies based on the characteristics of the different storage engines. Further, a storage adaptation layer is added between the execution engine 330 and the storage engine 340, where the storage adaptation layer is configured to select a target storage engine matched with the execution task from a preset plurality of storage engines, and to adapt the execution task to an operation request of the target storage engine. The storage engine 340 obtains a data processing result by calling the target storage engine to perform the data processing operation corresponding to the operation request. Optionally, the storage engine 340 may also improve data access performance through index structures, bulk operations, and data compression techniques, and ensure data persistence and failure recovery, for example through write-ahead logging (WAL), transaction logs, snapshots, and checkpointing mechanisms. To support multi-user concurrent access, the storage engine 340 may also provide a lock mechanism and Multi-Version Concurrency Control (MVCC) to ensure transaction isolation and consistency. This not only provides efficient data storage and management capability, but also supports flexible storage engine replacement and extension, ensuring high performance and scalability of the system under different application scenarios.
In some possible implementations, selecting a target storage engine matched with the execution task from the preset storage engines includes:
Selecting, from the plurality of storage engines, a target storage engine matched with the scenario in which the task is executed, according to the matching relationship between the plurality of storage engines and the task scenarios.
In implementations, the storage adaptation layer is an abstraction layer between the execution engine and the underlying storage engine, whose primary function is to provide a unified data access interface for the execution engine, supporting seamless integration of different types of storage engines (e.g., DuckDB, SQLite, RocksDB, LanceDB, etc.). The core responsibilities of the storage adaptation layer include storage engine abstraction, data format conversion, interface normalization, optimization policy adaptation, and resource management. First, the storage adaptation layer defines standardized storage engine interfaces, such as CreateTable, DropTable, Scan, IndexScan, Read, Write, Update, and Delete, and provides a specific Adapter implementation for each storage engine. Second, it is responsible for adapting the requests of the execution engine to the data formats and APIs of the underlying storage engine, including data encoding and decoding, metadata management, table structure mapping, etc. The storage adaptation layer also provides an optimization policy adaptation function that allows the execution engine to determine the task scenario matched with each storage engine using the storage engine's characteristic information; for example, RocksDB is suitable for key-value indexing, DuckDB is suitable for column storage, and SQLite is suitable for transaction support. Then, according to the matching relationship between the plurality of storage engines and the task scenarios, a target storage engine matched with the scenario in which the task is executed is selected from the plurality of storage engines to improve query performance.
Optionally, in order to support concurrent execution and efficient data access, the storage adaptation layer may also provide functions such as caching mechanism, parallel I/O scheduling and memory management, and process exceptions and errors that may be thrown by the underlying storage engine. Thus, the execution engine can perform unified access and control on different storage engines, so that efficient and flexible storage engine adaptation and integration are realized.
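A hedged sketch of this adaptation layer follows: a standardized adapter interface (method names derived from the interfaces named in the text) plus scenario-based engine selection. The matching table entries come from the examples in the text; the class shape and function names are assumptions.

```python
from abc import ABC, abstractmethod

class StorageAdapter(ABC):
    """Standardized storage engine interface; one concrete Adapter per engine."""

    @abstractmethod
    def create_table(self, name: str, schema: dict) -> None: ...

    @abstractmethod
    def scan(self, table: str) -> list: ...

    @abstractmethod
    def write(self, table: str, rows: list) -> None: ...

# Scenario -> engine matching relation, per the examples in the text.
SCENARIO_TO_ENGINE = {
    "key_value_index": "RocksDB",
    "column_store_analytics": "DuckDB",
    "transaction_support": "SQLite",
}

def select_engine(scenario: str) -> str:
    """Select the target storage engine matched with the task scenario."""
    return SCENARIO_TO_ENGINE[scenario]

print(select_engine("column_store_analytics"))  # DuckDB
```

The execution engine programs only against `StorageAdapter`, so swapping the underlying engine requires no change to execution logic.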
By combining different components, the data management requirements of different scenarios can be met. For example, in a transaction scenario, SQL is used as the interface, PostgreSQL as the query and execution engine, and SQLite as the storage engine; in another scenario, GraphQL is used as the interface, Orca or Velox as the query and execution engine, and DuckDB as the storage engine; and in a large model application scenario, natural language and Pandas are used as the interface, Calcite or Ray as the query and execution engine, and Parquet as the storage engine, as shown in the following table:
TABLE 2 Combined relationship Table of different Components in data management System
Therefore, the expansion of the data management software can be realized through the combination of different components, and further, the requirements of different scenes are met.
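The component combinations enumerated above can be sketched as data; the entries below are taken from the scenarios in the text, except that the scenario name for the middle combination is not given in the source and is labeled accordingly.

```python
# Scenario -> component combination table (cf. Table 2); keys are illustrative.
COMBINATIONS = {
    "transaction": {
        "interface": "SQL",
        "query_and_execution": "PostgreSQL",
        "storage": "SQLite",
    },
    "unnamed_scenario": {  # scenario name not stated in the source text
        "interface": "GraphQL",
        "query_and_execution": "Orca or Velox",
        "storage": "DuckDB",
    },
    "large_model": {
        "interface": "natural language / Pandas",
        "query_and_execution": "Calcite or Ray",
        "storage": "Parquet",
    },
}

print(COMBINATIONS["transaction"]["storage"])  # SQLite
```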
FIG. 4 illustrates a schematic diagram of a data processing system according to some embodiments of the present application, which may implement all or part of the contents of the embodiment shown in FIG. 2. The data processing system 400 includes:
an interface intermediate representation layer module 410 for converting the obtained query language into a data query request representation according to at least one query clause;
a plan intermediate representation layer module 420 for generating a query plan representation from the data query request representation and at least one execution operator;
The result obtaining module 430 is configured to obtain a data processing result by performing a data processing operation corresponding to the query plan representation, where the data processing operation includes a data writing operation and a data query operation.
In some embodiments, the interface intermediate representation layer module 410, when configured to convert the obtained query language into a data query request representation according to at least one query clause, is specifically configured to:
Acquiring a query language;
Generating a structured representation corresponding to the query language by parsing the query language;
mapping the structured representation to at least one query clause, generating a data query request representation.
In some possible implementations, the interface intermediate representation layer module 410, when configured to generate a structured representation corresponding to the query language by parsing the query language, is specifically configured to:
acquiring a plurality of clauses in the query language;
converting the query elements in each clause into an interface intermediate representation;
and recombining the interface intermediate representation according to the dependency relationship among the clauses to generate a structured representation corresponding to the query language.
In some embodiments, the plan intermediate representation layer module 420, when configured to generate a query plan representation from the data query request representation and at least one execution operator, is specifically configured to:
Generating a query logic plan according to a target query clause in the data query request, wherein the target query clause comprises a first query clause for indicating query intention, a second query clause for indicating context information and a third query clause for indicating query structure;
Generating an executable physical plan by optimizing the execution efficiency of the query logic plan;
The executable physical plan is converted into a query plan representation according to at least one execution operator.
In some embodiments, the result acquisition module 430 includes:
An execution engine layer module, configured to obtain at least one operator node in the query plan representation, and allocate an execution task for each of the operator nodes;
The storage adaptation layer module is used for selecting a target storage engine matched with the execution task from a plurality of preset storage engines and adapting the execution task to an operation request of the target storage engine;
And the storage engine layer module is used for obtaining a data processing result by calling the target storage engine to execute the data processing operation corresponding to the operation request.
In an exemplary embodiment, as shown in fig. 5, an embodiment of the present application further provides an assembled data management system, including:
An interface layer 510, configured to obtain a query language input by a user;
an interface intermediate representation 520 for converting the acquired query language into a data query request representation based on at least one query clause;
a query engine 530 for generating a query plan representation from the data query request representation and at least one execution operator;
a plan intermediate representation 540 for obtaining the query plan representation generated by the query engine 530;
an execution engine 550, configured to obtain at least one operator node in the query plan representation, and allocate an execution task to each of the operator nodes;
The storage adapting layer 560 is configured to select a target storage engine that matches the execution task from a preset plurality of storage engines, and adapt the execution task to an operation request of the target storage engine;
the storage engine layer 570 is configured to obtain a data processing result by calling the target storage engine to perform a data processing operation corresponding to the operation request.
Embodiments of the present application split the data management system into a series of reusable components, including an interface layer, an interface Intermediate Representation (IR), a query engine, a plan Intermediate Representation (IR), an execution engine, a storage adaptation layer, and a storage engine. These components interact through well-defined interfaces (APIs) to enable a modular and decoupled system. This modular design not only improves development efficiency but also reduces the maintenance cost of the system, while providing a more consistent experience for users. Through the unified interface IR and plan IR, different system interfaces can generate a unified interface IR, the query engine generates a unified optimized plan IR, and different execution engines can execute the plan IR. This decoupling allows the system interface, query engine, execution engine, and storage engine to evolve independently, while also supporting cross-system query optimization and execution. The architecture not only supports a variety of workloads, from online transaction processing to online analytical processing, stream processing, machine learning, etc., but also allows developers to select and combine different components according to need, thereby quickly building a data management system that meets specific requirements. Through componentization and standardization, the data management system can better adapt to a rapidly changing technical environment, accommodate novel hardware accelerators such as GPUs and FPGAs, promote the co-evolution of hardware and software, and fully exploit the functional and performance advantages of novel hardware.
The PostgreSQL database allows users to flexibly add new features and capabilities without modifying the database core code through its plug-in mechanism (Extensions) for extending database functions. A plug-in may implement data type extension, for example supporting geospatial data by installing PostGIS, or providing fuzzy search functionality through pg_trgm, and may further support the definition of custom indexing methods, functions, and operators to optimize query performance; for example, the btree_gin plug-in enhances B-tree indexing. However, the plug-in mechanism typically relies on the database's internal extension points and external function packages; the extension capabilities are often tightly coupled to specific database versions and architectures and cannot be shared or migrated across platforms between different database systems, presenting challenges in terms of compatibility, maintenance, performance, etc. Compared with the way PostgreSQL extends the database through a plug-in mechanism, the embodiments of the present application extend through the interface IR, the plan IR, and the storage adaptation layer. The interface IR and plan IR structurally separate query semantics from the execution plan, so that extension logic can be shared among different database systems, providing better flexibility, maintainability, and heterogeneous system compatibility, and being better suited to building a modular, multi-modal, and sustainably evolving data processing platform.
FIG. 6 shows a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the application. Referring to the figure, at the hardware level, the electronic device 600 includes a processor 610 and, optionally, an internal bus 620, a network interface 630, and a memory. The memory may include a memory 641, such as a random-access memory (RAM), and may further include a non-volatile memory 642, such as at least one disk storage. Of course, the electronic device 600 may also include hardware required for other services.
The processor 610, network interface 630, and memory may be interconnected by the internal bus 620, which may be an Advanced Microcontroller Bus Architecture (AMBA) bus, a Wishbone bus, an Open Core Protocol (OCP) bus, an Avalon bus, or the like. The buses may be classified as address buses, data buses, control buses, etc. For ease of illustration, only one bi-directional arrow is shown in the figure, but this does not mean that there is only one bus or one type of bus.
The memory is used for storing a program. In particular, the program may include program code, and the program code includes computer operating instructions. The memory may include the memory 641 and the non-volatile memory 642, and provides instructions and data to the processor 610.
The processor 610 reads the corresponding computer program from the non-volatile memory 642 into the memory 641 and then runs it, forming the corresponding apparatus at the logical level. The processor 610 executes the program stored in the memory, specifically executes the method disclosed in the embodiment shown in FIG. 2, and implements the functions and beneficial effects of the methods described in the foregoing method embodiments, which are not repeated here.
The method disclosed in the embodiment of the present application shown in FIG. 2 may be applied to the processor 610 or implemented by the processor 610. The processor 610 may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method may be completed by integrated logic circuitry in hardware or by instructions in the form of software in the processor 610. The processor 610 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, which may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like. The steps of the method disclosed in connection with the embodiments of the present application may be embodied directly as being executed by a hardware decoding processor, or as being executed by a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory or electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
The computer device may also execute the methods described in the foregoing method embodiments and implement the functions and beneficial effects of those methods, which are not repeated here.
Of course, the electronic device 600 of the present application does not exclude other implementations, such as a logic device or a combination of hardware and software; that is, the execution subject of the foregoing processing flows is not limited to the respective logic units, but may also be hardware or a logic device.
An embodiment of the present application further provides a computer-readable storage medium storing one or more programs. The one or more programs, when executed by an electronic device including a plurality of application programs, cause the electronic device to execute the method disclosed in the embodiment shown in FIG. 2 and implement the functions and beneficial effects of the methods described in the foregoing method embodiments, which are not repeated here.
The computer-readable storage medium includes a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.
Further, an embodiment of the present application also provides a computer program product. The computer program product includes a computer program stored on a non-transitory computer-readable storage medium, and the computer program includes program instructions. When the program instructions are executed by a computer, they implement the method disclosed in the embodiment shown in FIG. 2 and the functions and beneficial effects of the methods described in the foregoing method embodiments, which are not repeated here.
The embodiments of the application can be applied to various scenarios of electronic device collaboration or interconnection, including collaboration and interconnection between a mobile phone and a notebook or tablet computer, between a mobile terminal and a smart television or display, between a mobile phone or tablet computer and an in-vehicle entertainment system, between a mobile terminal and a smart conference system, and the like, thereby satisfying users' diversified scenario demands in smart home, smart office, smart travel, and other settings.
In summary, the foregoing description is only of the preferred embodiments of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
In this specification, the embodiments are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiments are substantially similar to the method embodiments, their description is relatively simple, and for relevant parts reference may be made to the corresponding description of the method embodiments.