Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are derived by a person skilled in the art based on the embodiments of the application, fall within the scope of protection of the application.
The terms "first", "second" and the like in the description and in the claims are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances, such that the embodiments of the application are capable of operation in sequences other than those illustrated or otherwise described herein. Moreover, objects distinguished by "first" and "second" are generally of one type, and the number of objects is not limited; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
It should be noted that the techniques described in the embodiments of the present application are not limited to long term evolution (Long Term Evolution, LTE)/LTE evolution (LTE-Advanced, LTE-A) systems, but may also be used in other wireless communication systems, such as code division multiple access (Code Division Multiple Access, CDMA), time division multiple access (Time Division Multiple Access, TDMA), frequency division multiple access (Frequency Division Multiple Access, FDMA), orthogonal frequency division multiple access (Orthogonal Frequency Division Multiple Access, OFDMA), single-carrier frequency division multiple access (Single-carrier Frequency Division Multiple Access, SC-FDMA), and other systems. The terms "system" and "network" in embodiments of the application are often used interchangeably, and the techniques described may be used both for the above-mentioned systems and radio technologies and for other systems and radio technologies. The following description describes a New Radio (NR) system for exemplary purposes, and NR terminology is used in much of the description, but these techniques may also be applied to applications other than NR systems, such as 6th Generation (6G) communication systems.
Fig. 1 shows a block diagram of a wireless communication system to which an embodiment of the present application is applicable. The wireless communication system includes a terminal device 11 and a network side device 12. The terminal device 11 may be a mobile phone, a tablet computer (Tablet Personal Computer), a laptop computer (Laptop Computer, also referred to as a notebook computer), a personal digital assistant (Personal Digital Assistant, PDA), a palmtop computer, a netbook, an ultra-mobile personal computer (Ultra-Mobile Personal Computer, UMPC), a mobile internet device (Mobile Internet Device, MID), an augmented reality (Augmented Reality, AR)/virtual reality (Virtual Reality, VR) device, a robot, a wearable device (Wearable Device), a vehicle-mounted device (VUE), a pedestrian terminal (PUE), a smart home device (a home device with a wireless communication function, such as a refrigerator, a television, a washing machine, or furniture), a game machine, a personal computer (Personal Computer, PC), a teller machine, a self-service machine, or other terminal side device. The wearable device includes: a smart watch, a smart bracelet, smart headphones, smart glasses, smart jewelry (a smart bangle, a smart ring, a smart necklace, a smart anklet, a smart foot chain, etc.), a smart wristband, smart clothing, etc. It should be noted that the specific type of the terminal device 11 is not limited in the embodiment of the present application. The network side device 12 may include an access network device or a core network device, where the access network device may also be referred to as a radio access network device, a radio access network (Radio Access Network, RAN), a radio access network function, or a radio access network element. The access network device may include a base station, a WLAN access point, a WiFi node, or the like. The base station may be referred to as a node B, an evolved node B (eNB), an access point, a base transceiver station (Base Transceiver Station, BTS), a radio base station, a radio transceiver, a basic service set (Basic Service Set, BSS), an extended service set (Extended Service Set, ESS), a home node B, a home evolved node B, a transmission and reception point (Transmitting Receiving Point, TRP), or some other suitable term in the art; the base station is not limited to a particular technical vocabulary as long as the same technical effect is achieved. It should be noted that, in the embodiment of the present application, only a base station in an NR system is described as an example, and the specific type of the base station is not limited.
The core network device may include, but is not limited to, at least one of: a core network node, a core network function, a mobility management entity (Mobility Management Entity, MME), an access and mobility management function (Access and Mobility Management Function, AMF), a session management function (Session Management Function, SMF), a user plane function (User Plane Function, UPF), a policy control function (Policy Control Function, PCF), a policy and charging rules function (Policy and Charging Rules Function, PCRF), an edge application server discovery function (Edge Application Server Discovery Function, EASDF), unified data management (Unified Data Management, UDM), a unified data repository (Unified Data Repository, UDR), a home subscriber server (Home Subscriber Server, HSS), centralized network configuration (Centralized Network Configuration, CNC), a network repository function (Network Repository Function, NRF), a network exposure function (Network Exposure Function, NEF), a local NEF (Local NEF, or L-NEF), a binding support function (Binding Support Function, BSF), an application function (Application Function, AF), etc. It should be noted that, in the embodiment of the present application, only the core network device in the NR system is described as an example, and the specific type of the core network device is not limited.
The model determining method provided by the embodiment of the application is described in detail below through some embodiments and application scenarios thereof with reference to the accompanying drawings.
The embodiment of the application provides a model determining method. Referring to fig. 2, a flowchart of a model determining method provided by an embodiment of the present application is shown. The method is applied to the first device, as shown in fig. 2, and specifically may include:
Step 201, the first device determines a first model based on the first information.
Step 202, the first device activates the first model.
Wherein the first information includes any one of:
scene information of the terminal device and a mapping relationship between machine learning models and scene information;
model information of a machine learning model associated with the scene information in which the terminal device is located.
The first device is the terminal device or a network side device.
In the embodiment of the present application, the first device may be a terminal device or a network side device. The terminal device may comprise a conventional terminal device and/or a positioning reference unit. The conventional terminal device may be the terminal device 11 in fig. 1. The positioning reference unit (Positioning Reference Unit, PRU) may perform positioning measurements, such as reference signal time difference (Reference Signal Time Difference, RSTD), reference signal received power (Reference Signal Received Power, RSRP), and UE Rx-Tx time difference measurements, and report these measurements to a positioning server. In addition, the PRU may transmit a positioning reference signal (Positioning Reference Signal, PRS) to a transmission and reception point (Transmission and Reception Point, TRP), so that the TRP is able to measure and report uplink (Uplink, UL) positioning measurements from the PRU of known location, such as RTOA, UL-AoA, and gNB Rx-Tx time difference. The positioning server may compare the PRU measurements with the measurements expected at the known PRU location to determine correction terms for other target devices in the vicinity, and then correct the DL and/or UL positioning measurements of the other target devices based on the correction terms.
The network side device may be the access network device in fig. 1, such as a base station or an artificial intelligence processing node newly defined on the access network side, or the core network device in fig. 1, such as a network data analytics function (Network Data Analytics Function, NWDAF), a location management function (Location Management Function, LMF), or a processing node newly defined on the core network side, or a combination of the above multiple nodes.
In the embodiment of the application, the machine learning model can be trained by the network side equipment, the network side equipment transmits the trained machine learning model to the terminal equipment through model transfer/delivery, and the association relationship between the model identification and the scene information of each machine learning model is recorded in the network side equipment.
Or the machine learning model is trained by a third-party server, the third-party server sends the trained machine learning model to the terminal equipment and/or the network side equipment, and sends the association relation between the model identification and the scene information of the machine learning model to the terminal equipment and/or the network side equipment.
It should be noted that the machine learning model in the embodiment of the present application may be an artificial intelligence (Artificial Intelligence, AI) model, for example, any one of a fully connected neural network, a convolutional neural network, a decision tree, a support vector machine, and a Bayesian classifier. Taking a neural network model as an example, a schematic diagram of the neural network model can be shown in fig. 3. As shown in fig. 3, the neural network may include one or more input layers, one or more hidden layers, and one output layer. The data to be processed [X1, X2, ..., Xn] are input into the neural network from the corresponding input layers, and the output result Y is obtained through the processing of the input layers, the hidden layers, and the output layer. In addition, the neural network is composed of neurons, and a schematic diagram of a neuron is shown in fig. 4. In fig. 4, a1, a2, ..., aK represent the inputs, w represents a weight (i.e., a multiplicative coefficient), b represents a bias (i.e., an additive coefficient), and σ(·) represents the activation function. Common activation functions include Sigmoid (mapping variables into (0, 1)), Tanh (a translated and scaled Sigmoid), the linear rectification function/rectified linear unit (Rectified Linear Unit, ReLU), and the like.
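By way of non-limiting illustration, the single-neuron computation and the activation functions described above can be sketched in Python as follows (the names a, w, b, and σ follow the description; the inputs, weights, and bias values are illustrative only):

```python
import math

def sigmoid(x):
    # Maps any real input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # A translated and scaled Sigmoid, mapping into (-1, 1)
    return math.tanh(x)

def relu(x):
    # Rectified linear unit: zero for negative inputs, identity otherwise
    return max(0.0, x)

def neuron(a, w, b, activation=sigmoid):
    """One neuron: weighted sum of inputs a1..aK plus bias b, then activation sigma."""
    z = sum(ai * wi for ai, wi in zip(a, w)) + b
    return activation(z)

# Illustrative usage: three inputs with arbitrary weights and bias
y = neuron([0.5, -1.2, 2.0], [0.8, 0.1, -0.4], b=0.2)
```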
In addition, taking a neural network model as an example, the process of model training is described as follows:
The parameters of the neural network can be optimized through a gradient optimization algorithm. Gradient optimization algorithms are a class of algorithms that minimize or maximize an objective function (sometimes also referred to as a loss function), which is often a mathematical combination of the model parameters and the data. For example, given data X and its corresponding label Y, and a neural network model f(·), a predicted output f(X) can be obtained from the input X, and the difference (f(X) − Y) between the predicted value and the true value, i.e., the loss function, can be calculated. The optimization objective of the gradient optimization algorithm is to find a suitable w (i.e., weight) and b (i.e., bias) to minimize the value of the loss function; the smaller the loss value, the closer the model is to the real situation.
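A minimal sketch of this optimization, assuming a one-parameter linear model f(X) = w·X + b and a squared-error loss (the data, learning rate, and iteration count below are illustrative, not part of the described method):

```python
# Fit f(X) = w*X + b to labels Y by gradient descent on the squared error.
X = [1.0, 2.0, 3.0, 4.0]
Y = [2.1, 3.9, 6.2, 8.1]

w, b = 0.0, 0.0
lr = 0.01  # learning rate

for step in range(1000):
    # Forward pass: per-sample errors (f(X) - Y)
    errors = [(w * x + b) - y for x, y in zip(X, Y)]
    # Gradients of the mean squared loss with respect to w and b
    grad_w = 2.0 * sum(e * x for e, x in zip(errors, X)) / len(X)
    grad_b = 2.0 * sum(errors) / len(X)
    # Update step: move the parameters against the gradient
    w -= lr * grad_w
    b -= lr * grad_b
```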
The most common optimization algorithms are basically based on the error back propagation (error Back Propagation, BP) algorithm. The basic idea of the BP algorithm is that the learning process consists of two processes: forward propagation of the signal and backward propagation of the error. In forward propagation, an input sample is fed in at the input layer, processed layer by layer by the hidden layers, and passed to the output layer. If the actual output of the output layer does not match the desired output, the process shifts to the backward propagation phase of the error. In error back propagation, the output error is propagated back, layer by layer and in some form, through the hidden layers toward the input layer, and the error is apportioned to all the units of each layer, thereby obtaining an error signal for each unit; this error signal is the basis for correcting the weight of each unit. The forward propagation of the signal and the backward propagation of the error, with the accompanying adjustment of the weights of each layer, are performed repeatedly. This continual weight adjustment is exactly the learning and training process of the network. The process continues until the error of the network output is reduced to an acceptable level, or until a preset number of learning iterations is reached.
In addition, common optimization algorithms include gradient descent (Gradient Descent), stochastic gradient descent (Stochastic Gradient Descent, SGD), mini-batch gradient descent (Mini-Batch Gradient Descent), the momentum method (Momentum), Nesterov (named after its inventor; specifically, stochastic gradient descent with Nesterov momentum), adaptive gradient descent (Adaptive Gradient Descent, Adagrad), Adadelta (an extension of Adagrad), root mean square prop (Root Mean Square Prop, RMSProp), adaptive moment estimation (Adaptive Moment Estimation, Adam), and the like.
During error back propagation, these optimization algorithms all obtain the error/loss according to the loss function, obtain the derivative/partial derivative of the current neuron, add in influences such as the learning rate and the previous gradients/derivatives/partial derivatives to obtain the gradient, and pass the gradient to the next layer up.
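As one concrete instance of combining the learning rate with previous gradients, a sketch of the momentum update mentioned in the list above (the hyperparameter values are illustrative):

```python
def momentum_update(param, grad, velocity, lr=0.01, beta=0.9):
    """One SGD-with-momentum step: the previous gradient direction (velocity)
    is blended with the current gradient before the parameter update."""
    velocity = beta * velocity + grad
    param = param - lr * velocity
    return param, velocity

# Illustrative usage across two steps for a single scalar parameter
p, v = 1.0, 0.0
p, v = momentum_update(p, grad=0.5, velocity=v)
p, v = momentum_update(p, grad=0.3, velocity=v)
```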
The machine learning model in the embodiment of the present application may also be referred to as an AI unit, an AI model, ML (machine learning) model, an ML unit, an AI structure, an AI function, an AI characteristic, a neural network function, or the like, or the AI unit/AI model may also refer to a processing unit capable of implementing a specific algorithm, a formula, a processing flow, a capability, or the like related to AI, or the AI unit/AI model may be a processing method, an algorithm, a function, a module, or a unit for a specific data set, or the AI unit/AI model may be a processing method, an algorithm, a function, a module, or a unit that operates on AI/ML related hardware such as GPU, NPU, TPU, ASIC, which is not specifically limited in the present application. Optionally, the specific data set comprises an input and/or an output of the AI unit/AI model.
Alternatively, the identifier of the AI unit/AI model may be an AI model identifier, an AI structure identifier, an AI algorithm identifier, or an identifier of a specific data set associated with the AI unit/AI model, or an identifier of a specific scene, environment, channel characteristic, device associated with the AI/ML, or an identifier of a function, characteristic, capability, or module associated with the AI/ML, which is not specifically limited in the embodiment of the present application.
In an embodiment of the present application, the first device may determine the first model based on the first information. For example, the first device determines, based on the scene information in which the terminal device is located and the mapping relationship between machine learning models and scene information, the machine learning model associated with the scene information in which the terminal device is located, and determines that model as the first model. Or, in the case where the first information includes model information of a machine learning model associated with the scene information in which the terminal device is located, the first device can directly determine the machine learning model indicated by the model information as the first model.
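A hedged sketch of these two determination paths (all scene and model identifiers below are hypothetical):

```python
# Mapping relationship between machine learning models and scene information
scene_to_model = {"scene_A": "model_1", "scene_B": "model_2"}

def determine_first_model(scene_info=None, model_info=None):
    if model_info is not None:
        # Case 2: the first information directly carries model information
        return model_info
    # Case 1: look up the model associated with the terminal's scene information
    return scene_to_model[scene_info]

first_model = determine_first_model(scene_info="scene_A")  # -> "model_1"
```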
After determining the first model, the first device may activate the first model, so as to process data generated by the terminal device in the current scene by using the first model.
It will be appreciated that, in the case where the first device is a network side device, the machine learning model running in the first device may be used to process data corresponding to the terminal device, for example, to determine the location information of the terminal device, analyze the communication quality of the cell in which the terminal device is located, perform access control on the terminal device, and so on. When the scene in which the terminal device is located changes, the machine learning model running in the first device may no longer meet the data processing requirements of the scene in which the terminal device is currently located; in this case, the first device may determine, according to the first information, the first model matching the scene in which the terminal device is currently located, and activate the first model.
It should be noted that, in the embodiment of the present application, the scene information of the scene in which the terminal device is located may include, but is not limited to, a scene identifier (scene ID), scene information (scenario information), a scene category (scenario category), an area identifier (area ID), area information (area information), an area category (area category), a dataset identifier (dataset ID), dataset information (dataset information), a dataset category (dataset category), and the like. The granularity of a scene, area, or dataset may be a cell; in one possible implementation, the scene ID, area ID, or dataset ID may be associated with the physical cell identifiers (Physical Cell Identifier, PCI) of one or more cells, such that the scene ID, area ID, and dataset ID corresponding to the first device are determined from the cell in which the first device is located. In another possible implementation, the granularity of a scene, area, or dataset may also be smaller than a cell, for example in AI positioning, where a scene may be a building within the cell, or even a single floor of a building.
A machine learning model may correspond to one or more scenes, areas, or datasets.
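For the cell-granularity case described above, a brief illustrative sketch of deriving the scene from the serving cell (the PCIs and scene IDs are made up):

```python
# Hypothetical association between physical cell identities and scene IDs;
# several PCIs may map to the same scene, and one model may serve several scenes.
pci_to_scene = {101: "scene_A", 102: "scene_A", 205: "scene_B"}

def scene_from_cell(serving_pci):
    return pci_to_scene.get(serving_pci)  # None if the cell is not associated
```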
In the embodiment of the application, if the first model determined by the first device based on the first information is different from the machine learning model currently running in the first device, the first device can determine that the currently running machine learning model cannot meet the computing requirements of the current scene; in other words, in the scene in which the first device is currently located, the machine learning model currently running in the first device is an invalid model. In this case, the first device may switch the running machine learning model to the first model.
The embodiment of the application associates machine learning models with scenes. The first device can determine which model should be applied in the current environment according to the scene information of the terminal device and the mapping relationship between machine learning models and scene information, or the first device can directly determine, according to the model information in the first information, the machine learning model associated with the scene information in which the terminal device is currently located, and activate that model. In the embodiment of the application, since the first device can determine which model should be applied according to the first information, the machine learning model running in the terminal device or the network side device can always be adapted to the scene in which the terminal device is located as the terminal device moves, ensuring the accuracy and processing efficiency of data processing.
Optionally, before the first device determines the first model based on the first information, the method further comprises:
The first device acquires the scene information in which the terminal device is located;
the first device acquires the mapping relationship between machine learning models and scene information.
In the embodiment of the application, the mapping relationship between machine learning models and scene information can be generated by the network side device that trains the model, or by a third party server. If the first device is a terminal device, the mapping relationship may be sent to the first device by the network side device that trains the model or by the third party server; if the first device is a network side device, the network side device may locally generate the mapping relationship based on the model training process, or may read the mapping relationship from the third party server.
Likewise, the first device may also obtain the scene information in which the terminal device is located in a plurality of manners.
As an example, the first device obtains scene information where the terminal device is located, including:
step S11, the first device acquires second information, wherein the second information is used for indicating communication information of the terminal device, and the second information is associated with the scene information in which the terminal device is located;
step S12, the first device determines the scene information in which the terminal device is located according to the second information.
The second information is associated with the scene in which the first device is located; for example, the second information is associated with information such as the scene ID, area ID, dataset ID, scene type, area type, and dataset type of the scene in which the first device is currently located. As an example, the second information may include a cell ID, a reference signal ID, a transmission reception point ID, an area ID, a tracking area (Tracking Area) ID, and the like, corresponding to the first device.
In the embodiment of the application, the first device can determine the scene information of the terminal device according to the second information.
Optionally, in the case that the first device is a terminal device, the first device acquires second information, including:
The first device measures a reference signal and determines second information based on the measurement result.
Referring to fig. 5, a flow chart of a model determining method according to an embodiment of the present application is shown. As shown in fig. 5, if the first device is a terminal device, the first device may determine the second information by measuring the reference signal.
Optionally, in the case that the first device is a network side device, the first device acquires second information, including:
The first device receives the second information sent by the terminal device.
Referring to fig. 6, a flow chart of another model determining method according to an embodiment of the present application is shown. As shown in fig. 6, if the first device is a network side device, the second information may be sent to the network side device after the terminal device generates the second information according to the measurement result of the reference signal. The network side equipment does not need to perform any measurement operation on the reference signal.
Optionally, the first device determines scene information of the terminal device according to the second information, including:
the first device acquires an association relationship between communication information and scene information;
the first device determines the scene information in which the terminal device is located according to the second information and the association relationship between communication information and scene information.
The association relationship between the communication information and the scene may be sent by the second device to the first device, or may be specified by a protocol. In the case that the first device is a network side device, for example, the first device is an access network device, the second device may be a core network device, and in the case that the first device is a terminal device, the second device may be a network side device or a higher layer of the terminal device.
After the first device obtains the second information, the scene information where the terminal device is located can be determined based on the communication information indicated by the second information and the association relationship between the communication information and the scene information.
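An illustrative sketch of this step: derive the scene information from the communication information carried by the second information, using a hypothetical association table (all identifiers are made up):

```python
# Hypothetical association relationship between communication information
# (e.g. a cell ID or TRP ID) and scene information.
comm_to_scene = {"cell_17": "scene_A", "trp_4": "scene_B"}

def scene_from_second_info(second_info):
    # second_info: the communication information indicated by the second information
    return comm_to_scene.get(second_info)
```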
For example, as shown in fig. 6, if the first device is a network side device, after receiving the second information reported by the terminal device, the network side device may determine, according to the communication information indicated by the second information and the association relationship between the communication information and the scene information, the scene information where the terminal device is located, and then further determine and activate the first model by combining with the association relationship between the machine learning model and the scene information.
As shown in fig. 5, if the first device is a terminal device, after the terminal device determines the second information by measuring the reference signal, the terminal device may determine, according to the communication information indicated by the second information and the association relationship between the communication information and the scene information, the scene information where the terminal device is located, and then further combine the association relationship between the machine learning model and the scene information to determine and activate the first model. Or the terminal equipment can also send the second information to the network side equipment, and the network side equipment determines the scene information of the terminal equipment based on the second information and indicates the scene information to the terminal equipment.
Optionally, the first device determines scene information of the terminal device according to the second information, including:
step S21, the first device sends the second information to the network side device;
step S22, the first device receives a first indication sent by the network side device, wherein the first indication is used for indicating the scene information in which the first device is located.
As shown in fig. 5, in one possible application scenario of the present application, the first device is a terminal device, and the terminal device may determine the second information by measuring the reference signal, and send the second information to the network side device. The network side equipment determines scene information of the terminal equipment at present based on the second information and the association relation between the communication information and the scene information, indicates the scene information to the terminal equipment through the first indication, and determines and activates a first model according to the scene information indicated by the first indication and the association relation between the machine learning model and the scene information.
Or the network side equipment determines the scene information of the terminal equipment at present based on the communication information indicated by the second information and the association relation between the communication information and the scene information, further determines a machine learning model associated with the scene information of the terminal equipment at present, namely a first model based on the association relation between the machine learning model and the scene information, and indicates the first model to the terminal equipment through a third indication.
In an optional embodiment of the present application, the first device obtains scene information where the terminal device is located, including:
The first device receives a first indication and a second indication sent by the second device, wherein the first indication is used for indicating scene information of the terminal device, and the second indication is used for indicating a mapping relation between a machine learning model and the scene information.
In another possible application scenario of the present application, the scenario information where the terminal device is located, and the mapping relationship between the machine learning model and the scenario information may also be indicated to the first device by the second device.
It should be noted that, in the embodiment of the present application, the second device may be a network side device, or may be a higher layer of the terminal device. For example, in the case that the first device is a network side device, for example, the first device is an access network device, the second device may be a core network device, and in the case that the first device is a terminal device, the second device may be a network side device, or a higher layer of the terminal device.
As an example, the first indication and the second indication may be carried in the same signaling: the second device sends the first indication and the second indication to the first device simultaneously through a certain signaling, and the first device determines the first model according to the received first indication and second indication and activates it. Or, the second device may send the first indication to the first device via one signaling and send the second indication to the first device via other signaling. The signaling carrying the first indication and/or the second indication may include, but is not limited to, radio resource control (Radio Resource Control, RRC) signaling, radio link control (Radio Link Control, RLC) signaling, medium access control (Media Access Control, MAC) signaling, LTE positioning protocol (LTE Positioning Protocol, LPP) signaling, NR positioning protocol A (NR Positioning Protocol A, NRPPa) signaling, downlink control information (Downlink Control Information, DCI), etc.
As another example, the first indication is sent by the second device to the first device, the machine learning model is trained by the third party server, and the second indication is sent by the third party server to the first device.
Optionally, before the first device determines the first model based on the first information, the method further comprises:
the first device receives a third indication sent by the second device, wherein the third indication is used for indicating the machine learning model associated with the scene information in which the terminal device is located.
In an embodiment of the present application, the third indication may be sent by the second device to the first device. The first device, upon receiving the third indication, may directly determine the machine learning model indicated by the third indication as the first model to be activated.
It can be understood that the second device may determine, according to the location information of the terminal device, the scene information in which the terminal device is located, and further determine, according to the association relationship between the scene information and the machine learning model, a machine learning model that matches the current scene in which the terminal device is located, generate a third indication, and send the third indication to the first device. Or the first device may send the scene information where the terminal device is located to the second device, and the second device determines, according to the scene information and the association relationship between the machine learning model and the scene information, a machine learning model associated with the scene where the terminal device is currently located, generates a third indication, and sends the third indication to the first device.
Optionally, the first device measures a reference signal and determines second information based on a measurement result, including:
step S31, the first device measures a first reference signal in the case of receiving a fourth indication, wherein the fourth indication is used for instructing the first device to measure at least one reference signal;
step S32, the first device determines the second information based on a first measurement result of the first reference signal.
In one possible application scenario, the first device is a terminal device, and the terminal device may measure the first reference signal if the fourth indication is received, and determine the second information based on the first measurement result of the first reference signal.
The fourth indication may be sent by the second device to the first device, or may be sent by another device to the first device. In another possible application scenario, the fourth indication may also be triggered automatically by a higher layer of the first device in case a certain measurement condition is fulfilled.
The first reference signal may include, but is not limited to, a positioning reference signal (Positioning Reference Signal, PRS), a downlink channel state information reference signal (Channel State Information Reference Signal, CSI-RS), an uplink sounding reference signal (Sounding Reference Signal, SRS), a synchronization signal block (Synchronization Signal Block, SSB), a time-frequency tracking reference signal (Tracking Reference Signal, TRS), and the like.
Optionally, the first device measures a reference signal and determines second information based on a measurement result, including:
step S41, the first device receives a second reference signal sent by a reference point;
step S42, the first device measures the second reference signal, and determines the second information based on a second measurement result of the second reference signal.
In the embodiment of the present application, the first device is a terminal device; the terminal device may also receive a second reference signal from a reference point, measure it, and determine the second information based on the second measurement result of the second reference signal. As an example, the second information may include a cell ID, a reference signal ID, a reference point ID, a scene ID, an area ID, a tracking area (Tracking Area) ID, and the like, corresponding to the terminal device.
Optionally, the second information includes at least one of:
first communication information, where the first communication information is used for indicating communication resources of the terminal device;
a fifth parameter, where the fifth parameter is used for indicating the communication area in which the terminal device is located.
Optionally, the first communication information includes at least one of:
reference signal information of the first device;
communication index information of the first device.
Optionally, the reference signal information includes at least one of:
a first parameter for indicating a reference signal resource;
a second parameter for indicating reference signal measurement information;
a third parameter for indicating reference signal reporting information.
Wherein the first parameter may be a reference signal resource ID, a reference signal resource set ID, etc. The second parameter may be a reference signal measurement ID, a reference signal measurement configuration ID, etc. The third parameter may be a reference signal reporting ID, a reference signal reporting configuration ID.
Optionally, the communication index information includes at least one of:
a fourth parameter for indicating channel quality;
Beam information;
channel state information;
Multipath average time delay;
Multipath delay spread.
The fourth parameter may be a statistic or representation of signal quality, such as the signal-to-noise ratio (Signal-to-Noise Ratio, SNR), signal-to-interference-plus-noise ratio (Signal to Interference plus Noise Ratio, SINR), RSRP, reference signal received quality (Reference Signal Received Quality, RSRQ), signal power, noise power, interference power, etc., or L1-RSRP, L1-SINR, L1-RSRQ, L3-RSRP, L3-SINR, L3-RSRQ, etc.
The beam information may include information of a beam index (index), a beam direction, and the like.
In an alternative embodiment of the present application, the first device measures a reference signal and determines second information based on a measurement result, including:
step S51, the first device measures the reference signal to obtain a measurement result;
step S52, the first device determines a target reference signal resource according to the measurement result, and determines the second information according to resource information of the target reference signal resource.
Wherein the target reference signal resource comprises at least one of:
A1, N first target reference signal resources among the reference signal resources configured by each transmitting and receiving point, wherein the reference signal received power of the N first target reference signal resources is greater than the reference signal received power of the other reference signal resources of the same transmitting and receiving point;
A2, second target reference signal resources screened out from the reference signal resources configured by each transmitting and receiving point, wherein the reference signal received power of the second target reference signal resources is greater than or equal to a preset threshold;
A3, all the reference signal resources configured by each transmitting and receiving point.
In the embodiment of the application, the first device is a terminal device. The terminal device can screen out, from the reference signal resources configured by each transmitting and receiving point, the N first target reference signal resources whose reference signal received power is greater than that of the other reference signal resources of the same transmitting and receiving point, and determine the second information according to the resource information of the first target reference signal resources, such as the reference signal ID, the reference signal measurement ID, and the reference signal reporting ID.
Or, the terminal device may screen out, from the reference signal resources configured by each transmitting and receiving point, the second target reference signal resources whose reference signal received power is greater than or equal to the preset threshold, and determine the second information according to the resource information of the second target reference signal resources, such as the reference signal ID, the reference signal measurement ID, and the reference signal reporting ID. The preset threshold may be indicated by the network side device or may be specified by a protocol, which is not specifically limited in the embodiment of the present application.
Or, the terminal device determines the second information according to all the reference signal resources configured by each transmitting and receiving point, without screening the reference signal resources.
In the embodiment of the application, the terminal device can screen the target reference signal resources based on any one of items A1 to A3 (as sketched below), and determine the second information based on the screened target reference signal resources, so as to determine the scene information in which the terminal device is located. The scene information determined in this way matches the reference signal resources configured by the transmitting and receiving points and satisfies the specified reference signal received power, which ensures the reliability of the determined scene information and is beneficial to improving the reliability of the finally determined first model, thereby ensuring that the machine learning model running in the terminal device can always match the reference signal resources configured by the transmitting and receiving points as the terminal device moves.
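A hedged sketch of the three screening options A1 to A3; each resource is represented here as a (resource ID, RSRP) pair, and the per-TRP grouping, IDs, and power values are all illustrative:

```python
def screen_resources(resources_per_trp, option, n=2, threshold_dbm=-100.0):
    """resources_per_trp: {trp_id: [(resource_id, rsrp_dbm), ...]}"""
    selected = {}
    for trp, resources in resources_per_trp.items():
        if option == "A1":
            # N resources with the highest RSRP among the same TRP's resources
            ranked = sorted(resources, key=lambda r: r[1], reverse=True)
            selected[trp] = ranked[:n]
        elif option == "A2":
            # resources whose RSRP is >= the preset threshold
            selected[trp] = [r for r in resources if r[1] >= threshold_dbm]
        else:  # "A3": no screening, use all configured resources
            selected[trp] = list(resources)
    return selected

# Illustrative usage
resources = {"trp_1": [("rs_0", -95.0), ("rs_1", -88.0), ("rs_2", -101.0)]}
best = screen_resources(resources, option="A1", n=2)
# -> {"trp_1": [("rs_1", -88.0), ("rs_0", -95.0)]}
```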
Optionally, the resource information includes at least one of:
reference signal received power;
a reference signal resource identifier;
Beam identification;
Beam direction.
In the embodiment of the present application, the terminal device may determine the second information according to resource information such as the reference signal received power, the reference signal resource identifier, the beam identifier, and the beam direction of the target reference signal resources (including at least one of A1 to A3).
In another optional embodiment of the present application, the first device obtains scene information where the terminal device is located, including:
step S61, the first device acquires position information of the terminal device, wherein the position information is associated with the scene information in which the terminal device is located;
step S62, the first device determines the scene information in which the terminal device is located according to the position information and an association relationship between position coordinates and scene information.
In the embodiment of the application, besides determining the scene information in which the terminal device is located according to the second information, the first device can also determine it according to the position information of the terminal device and the association relationship between position coordinates and scene information, as sketched below.
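An illustrative sketch of step S62: map a position to scene information through a hypothetical association between coordinate regions and scenes (region bounds and scene IDs are made up):

```python
# Hypothetical association relationship between position coordinates and scenes:
# (x_min, x_max, y_min, y_max, scene_id)
regions = [
    (0.0, 100.0, 0.0, 50.0, "scene_A"),
    (100.0, 200.0, 0.0, 50.0, "scene_B"),
]

def scene_from_position(x, y):
    for x0, x1, y0, y1, scene in regions:
        if x0 <= x < x1 and y0 <= y < y1:
            return scene
    return None  # position not covered by any known scene
```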
It will be appreciated that the location information of the terminal device may be determined by the terminal device according to an AI model or other positioning methods, such as satellite positioning (e.g., the GPS or BeiDou positioning system), Bluetooth positioning, radar positioning, and positioning methods based on a mobile communication network, such as positioning methods based on the NR system, the LTE system, etc.
The association relationship between the position coordinates and the scene information may be determined by the network side device, or may be specified by a protocol, or may be sent by the second device to the first device, which is not specifically limited in the embodiment of the present application. The second device may be a network side device or a higher layer of the terminal device. For example, in the case that the first device is an access network device, such as a base station, the second device may be a core network device, and in the case that the first device is a terminal device, the second device may be a network side device, or a higher layer of the terminal device.
Optionally, before the first device determines the scene information of the terminal device according to the location information and the association relationship between the location coordinates and the scene information, the method further includes:
the first device receives third information sent by the second device, wherein the third information is used for indicating the association relation between the position coordinates and the scene information.
In one possible application scenario of the present application, the second device may also indicate, through the third information, an association relationship between the position coordinates and the scenario information to the first device, and after the first device receives the third information, the first device may determine, according to the position information of the terminal device and the association relationship between the position coordinates and the scenario information, the scenario information where the terminal device is located, and further determine the first model according to the scenario information and the association relationship between the machine learning model and the scenario information.
Or, the first device sends the determined scene information, such as the scene ID, area ID, and dataset ID, to the second device, and the second device determines the machine learning model matching the scene information according to the scene information reported by the first device and indicates that machine learning model to the first device.
Optionally, the first device acquires location information of the terminal device, including:
In the case that the first device is a terminal device, the first device determines current location information based on a positioning technology;
in the case that the first device is a network side device, the first device receives fourth information sent by the terminal device, wherein the fourth information is used for indicating the position information of the terminal device.
Referring to fig. 7, a flowchart of a model determining method provided by an embodiment of the present application is shown. As shown in fig. 7, if the first device is a terminal device, the location information of the terminal device may be determined by the terminal device according to an AI model or other positioning methods, such as satellite positioning (e.g., the GPS or BeiDou positioning system), Bluetooth positioning, radar positioning, and positioning methods based on a mobile communication network, such as positioning methods based on the NR system, the LTE system, etc.
After the terminal equipment determines the position information, the association relation between the position coordinates and the scene information is combined, so that the scene information can be determined. Or the terminal equipment reports the position information to the network side equipment through the fourth information, the network side equipment determines the scene information of the terminal equipment according to the position information of the terminal equipment and the association relation between the position coordinates and the scenes, and the determined scene information is indicated to the terminal equipment through the first indication.
After determining the scene information, the terminal device may further determine the first model by combining the association relationship between the machine learning model and the scene information. Or the network side equipment determines the scene information of the terminal equipment based on the position information reported by the terminal equipment, further determines a machine learning model associated with the scene information of the terminal equipment, namely a first model by combining the association relation between the machine learning model and the scene information, and indicates the first model to the terminal equipment through a third indication.
Referring to fig. 8, a flow chart of another model determining method according to an embodiment of the present application is shown. As shown in fig. 8, if the first device is a network side device, the location information of the terminal device may be reported to the first device by the terminal device through the fourth information. Optionally, in the case that the first device is a network side device, the terminal device may report the acquiring method of the location information and reliability or confidence level to the first device at the same time.
After the network side equipment receives the position information reported by the terminal equipment, the scene information of the terminal equipment can be determined according to the position information and the association relation between the position coordinates and the scene information. Further, the network side device can determine a first model associated with the scene information where the terminal device is located based on the association relationship between the machine learning model and the scene information.
Optionally, the first device activates the first model, including:
the first device deactivates the second model and activates the first model in the case that the currently running second model does not match the first model.
In the embodiment of the application, if the second model currently running in the first device is not matched with the first model, the second model currently running cannot meet the data processing requirement of the first scene where the first device is currently located, and in this case, the first device can deactivate the second model and activate the first model.
The first model and the second model in the present application are not limited to a single AI model; in other words, the first model and the second model may each include one or more AI models, or may each be an AI function, and one AI function may be associated with one or more AI models. Accordingly, deactivating the second model may mean simultaneously deactivating the one or more AI models contained in the second model, or simultaneously deactivating the one or more AI functions to which the second model refers. Likewise, activating the first model may mean simultaneously activating the one or more AI models contained in the first model, or simultaneously activating the one or more AI functions to which the first model refers.
Further, the deactivation operation and the activation operation may be independent of each other. For example, if the first model determined by the first device based on the first information includes the machine learning model currently running in the first device, the currently running machine learning model is valid and no deactivation operation needs to be performed; in this case, the normal operation of the currently running machine learning model may be maintained, and then, among the AI models or AI functions included in the first model, the AI models and/or AI functions other than the currently running one may be activated. Or, if the AI models and/or AI functions contained in the first model contain only the AI models and/or AI functions currently running in the first device, then no further deactivation or activation operation needs to be performed. Or, if there is no AI model and/or AI function in the first device that matches the AI models and/or AI functions contained in the first model, then no activation operation can be performed. Or, if no AI model or AI function is currently running in the first device, then no deactivation operation needs to be performed.
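A hedged sketch of this switching logic, treating the AI models/functions involved as plain identifiers (the activate/deactivate bodies are placeholders, not part of the described method):

```python
def deactivate(model_id):
    print(f"deactivate {model_id}")  # placeholder for the real deactivation

def activate(model_id):
    print(f"activate {model_id}")    # placeholder for the real activation

def switch_models(running, first_model):
    """running: set of currently active AI model/function IDs;
    first_model: set of AI model/function IDs the first model comprises."""
    for m in running - first_model:   # running but not part of the first model
        deactivate(m)
    for m in first_model - running:   # part of the first model but not running
        activate(m)

# Illustrative usage: "model_1" keeps running (no deactivation needed),
# "model_0" is deactivated, "model_2" is activated.
switch_models(running={"model_0", "model_1"}, first_model={"model_1", "model_2"})
```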
It should be noted that, in the embodiment of the present application, if a certain AI function is deactivated, all AI models associated with the AI function are deactivated, and similarly, if a certain AI function is activated, all AI models associated with the AI function are valid models.
According to the embodiment of the application, under the condition that the second model running currently is not matched with the first model, the second model is deactivated and the first model is activated, so that the machine learning model running in the first device can be always adapted to the scene where the terminal device is located in the moving process of the terminal device, and the accuracy and the processing efficiency of data processing are ensured.
Optionally, in case the first device is a terminal device, the method further comprises:
The first device sends fifth information to the network side device.
Wherein the fifth information includes at least one of:
A model identification of the first model;
a model identification of the second model;
activation time of the first model.
In the embodiment of the present application, after determining the first model to be activated, the terminal device may send, to the network side device through the fifth information, at least one of the model identifier of the second model to be deactivated, the model identifier of the first model to be activated, and the activation time of the first model. The activation time of the first model is used for indicating the time of the model switching; for example, the model switching is performed after M time units, where the model switching includes deactivating the second model and activating the first model.
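A brief illustrative sketch of assembling the fifth information; the field names are hypothetical, and the description only requires that at least one of the three elements be carried:

```python
def build_fifth_information(first_model_id, second_model_id, m_time_units):
    return {
        "activate_model_id": first_model_id,     # model identifier of the first model
        "deactivate_model_id": second_model_id,  # model identifier of the second model
        "activation_time": m_time_units,         # perform the switch after M time units
    }

# Illustrative usage
fifth_info = build_fifth_information("model_2", "model_0", m_time_units=4)
```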
In summary, the embodiment of the application provides a model determining method that associates machine learning models with scenes. The first device can determine which model should be applied in the current environment according to the scene information of the terminal device and the mapping relationship between machine learning models and scene information, or the first device can directly determine, according to the model information in the first information, the machine learning model associated with the scene information in which the terminal device is currently located, and activate that model. In the embodiment of the application, since the first device can determine which model should be applied according to the first information, the machine learning model running in the terminal device or the network side device can always be adapted to the scene in which the terminal device is located as the terminal device moves, ensuring the accuracy and processing efficiency of data processing.
The embodiment of the application provides a data transmission method. Referring to fig. 9, a flowchart of a data transmission method according to an embodiment of the present application is shown. The method is applied to the second device, as shown in fig. 9, and specifically may include:
step 501, the second device sends sixth information to the first device.
Wherein the sixth information includes at least one of:
Third information for indicating an association relationship between the position coordinates and the scene information;
the first indication is used for indicating scene information of the terminal equipment;
A second indication for indicating a mapping relationship between the machine learning model and the scene information;
and a third indication for indicating a machine learning model associated with the scene information in which the terminal device is located.
It should be noted that, in the embodiment of the present application, the second device may be a network side device, or may be a higher layer of the terminal device. For example, in the case that the first device is a network side device, for example, the first device is an access network device, the second device may be a core network device, and in the case that the first device is a terminal device, the second device may be a network side device, or a higher layer of the terminal device.
The third information is used for indicating the association relation between the position coordinates and the scene information. In one possible application scenario of the present application, the second device may indicate, through the third information, an association relationship between the position coordinates and the scenario information to the first device, and after the first device receives the third information, the scenario information where the terminal device is located may be determined according to the position information of the terminal device and the association relationship between the position coordinates and the scenario information, and further, the first model may be determined according to the scenario information and the association relationship between the machine learning model and the scenario information.
The first indication may include, but is not limited to, a scene identifier (scene ID), scene information (scenario information), a scene category (scenario category), an area identifier (area ID), area information (area information), an area category (area category), a dataset identifier (dataset ID), dataset information (dataset information), a dataset category (dataset category), and the like, of the scene in which the first device is located. The granularity of a scene, area, or dataset may be a cell; in one possible implementation, the scene ID, area ID, or dataset ID may be associated with the physical cell identifiers (Physical Cell Identifier, PCI) of one or more cells, such that the scene ID, area ID, and dataset ID corresponding to the first device are determined from the cell in which the first device is located. In another possible implementation, the granularity of a scene, area, or dataset may also be smaller than a cell, for example in AI positioning, where a scene may be a building within the cell, or even a single floor of a building. A machine learning model may correspond to one or more scenes, areas, or datasets.
In the embodiment of the present application, the machine learning model may be trained by the second device, in which case the association relationship between the model identification of each machine learning model and the scene information is recorded in the second device. Alternatively, the machine learning model may be trained by a third-party server, and the third-party server sends the trained machine learning model to the first device and sends the association relationship between the machine learning model and the scene information to the first device and/or the second device.
The second device may send the association relationship between the machine learning model and the scene information to the first device through the second indication, so that the first device determines the first model based on the second indication.
Alternatively, the second device may determine, according to the scene information where the terminal device is located and the mapping relationship between machine learning models and scene information, the machine learning model associated with the scene information where the terminal device is located, and indicate the model information of that model to the first device through the third indication. The third indication may include model information, such as a model identification, of the machine learning model associated with the scene information where the terminal device is located.
In summary, the embodiment of the present application provides a data transmission method in which the second device may send to the first device, through the sixth information, at least one of: the association relationship between position coordinates and scene information, the scene information where the terminal device is located, the mapping relationship between machine learning models and scene information, and the model information of the machine learning model associated with the scene information where the terminal device is located, so that the first device determines, based on the received sixth information, which model should be applied in the scene where the terminal device is currently located.
According to the model determining method provided by the embodiment of the present application, the execution subject may be a model determining device. In the embodiment of the present application, the model determining device executing the model determining method is taken as an example to describe the model determining device provided by the embodiment of the present application.
The embodiment of the application provides a model determining device. Referring to fig. 10, there is shown a block diagram of a model determining device according to an embodiment of the present application, which is applicable to the first device. As shown in fig. 10, the device may specifically include:
a model determination module 601 for determining a first model based on first information;
a model activation module 602 for activating the first model;
wherein the first information includes any one of:
the scene information where the terminal device is located and the mapping relationship between the machine learning model and the scene information, where the first device is the terminal device or a network side device;
the model information of a machine learning model associated with the scene information where the terminal device is located.
Optionally, the apparatus further comprises:
the scene information acquisition module is used for acquiring scene information of the terminal equipment;
and the first relation acquisition module is used for acquiring the mapping relation between the machine learning model and the scene information.
Optionally, the scene information acquisition module includes:
a first acquisition sub-module for acquiring second information, where the second information is used for indicating communication information of the terminal device and is associated with the scene information where the terminal device is located;
and a first determining sub-module for determining the scene information where the terminal device is located according to the second information.
Optionally, in the case that the first device is a terminal device, the first obtaining sub-module includes:
a measurement unit for measuring a reference signal and determining the second information based on the measurement result.
Optionally, in the case that the first device is a network side device, the first obtaining sub-module includes:
a first receiving unit for receiving the second information sent by the terminal device.
Optionally, the first determining sub-module includes:
a first acquisition unit for acquiring an association relationship between communication information and scene information;
and a first determining unit for determining the scene information where the terminal device is located according to the second information and the association relationship between the communication information and the scene information.
Optionally, the first determining sub-module includes:
a first sending unit for sending the second information to the network side device;
and a second receiving unit for receiving a first indication sent by the network side device, where the first indication is used for indicating the scene information where the first device is located (both determination variants are sketched below).
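As a non-authoritative sketch of the two variants above, the first device may either resolve the scene locally from an association relationship, or report the second information and receive a first indication from the network side; the function names and the example association are assumptions.

```python
# Hypothetical sketch of the two ways the first device may obtain scene
# information from the second information (communication information):
# (1) a local lookup via an association relationship, or (2) reporting the
# second information and receiving a first indication from the network side.

comm_to_scene = {                 # illustrative association relationship
    "high_delay_spread": "urban_macro",
    "low_delay_spread": "indoor_office",
}

def scene_locally(second_information: str) -> str:
    return comm_to_scene[second_information]          # variant (1)

def scene_via_network(second_information: str, send, receive) -> str:
    send(second_information)                          # variant (2): report
    return receive()                                  # receive first indication

print(scene_locally("low_delay_spread"))              # -> indoor_office
indication = scene_via_network(
    "high_delay_spread",
    send=lambda info: None,           # stand-in for reporting to the network
    receive=lambda: "urban_macro",    # stand-in for the first indication
)
```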
Optionally, the scene information acquisition module includes:
a second acquisition sub-module for acquiring position information of the terminal device, where the position information is associated with the scene information where the terminal device is located;
and a second determining sub-module for determining the scene information where the terminal device is located according to the position information and the association relationship between position coordinates and scene information.
Optionally, the scene information acquisition module further includes:
a first receiving sub-module for receiving third information sent by the second device, where the third information is used for indicating the association relationship between position coordinates and scene information.
Optionally, the second obtaining sub-module includes:
a second determining unit for determining current position information based on a positioning technology in a case where the first device is a terminal device;
and a third receiving unit for receiving fourth information sent by the terminal device in a case where the first device is a network side device, where the fourth information is used for indicating the position information of the terminal device.
Optionally, the scene information acquisition module includes:
a second receiving sub-module for receiving a first indication and a second indication sent by the second device, where the first indication is used for indicating the scene information where the terminal device is located, and the second indication is used for indicating the mapping relationship between the machine learning model and the scene information.
Optionally, the apparatus further comprises:
a third indication receiving module for receiving a third indication sent by the second device, where the third indication is used for indicating a machine learning model associated with the scene information where the terminal device is located.
Optionally, the measuring unit is specifically configured to:
measure a first reference signal in a case where a fourth indication is received, where the fourth indication is used for instructing the first device to measure at least one reference signal;
and determine the second information based on a first measurement result of the first reference signal.
Optionally, the measuring unit is specifically configured to:
receive a second reference signal sent by a reference point;
and measure the second reference signal and determine the second information based on a second measurement result of the second reference signal.
Optionally, the second information includes at least one of:
first communication information, where the first communication information is used for indicating communication resources of the terminal device;
and a fifth parameter for indicating the communication area where the terminal device is located.
Optionally, the first communication information includes at least one of:
reference signal information of the first device;
communication index information of the first device.
Optionally, the reference signal information includes at least one of:
a first parameter for indicating a reference signal resource;
a second parameter for indicating reference signal measurement information;
and a third parameter for indicating reporting information of the reference signal.
Optionally, the communication index information includes at least one of:
a fourth parameter for indicating channel quality;
beam information;
channel state information;
multipath average delay;
and multipath delay spread.
Optionally, the measuring unit is specifically configured to:
measure the reference signal to obtain a measurement result;
and determine a target reference signal resource according to the measurement result, and determine the second information according to resource information of the target reference signal resource, as illustrated in the sketch after the resource information list below;
wherein the target reference signal resource comprises at least one of:
N first target reference signal resources among the reference signal resources configured by each transmitting and receiving point, where the reference signal received power of the N first target reference signal resources is greater than that of the other reference signal resources of the same transmitting and receiving point;
a second target reference signal resource among the reference signal resources configured by each transmitting and receiving point, where the reference signal received power of the second target reference signal resource is greater than or equal to a preset threshold;
and the reference signal resources configured by each transmitting and receiving point.
Optionally, the resource information includes at least one of:
reference signal received power;
a reference signal resource identifier;
a beam identifier;
and a beam direction.
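The selection of target reference signal resources described above can be pictured, purely as a sketch under assumed measurement values, in the following Python example; the per-TRP structure, RSRP numbers, and resource identifiers are invented for illustration.

```python
# Hypothetical sketch of selecting target reference signal resources from
# measurement results, per transmitting and receiving point (TRP). RSRP
# values and resource IDs are invented for the example.

measurements = {                       # TRP -> {resource_id: RSRP in dBm}
    "trp0": {"csi-rs-0": -80.0, "csi-rs-1": -95.0, "csi-rs-2": -70.0},
    "trp1": {"csi-rs-3": -85.0, "csi-rs-4": -78.0},
}

def top_n_per_trp(meas, n):
    """N first target resources: the N strongest resources of each TRP."""
    return {trp: sorted(res, key=res.get, reverse=True)[:n]
            for trp, res in meas.items()}

def above_threshold(meas, threshold_dbm):
    """Second target resources: RSRP greater than or equal to a threshold."""
    return {trp: [r for r, p in res.items() if p >= threshold_dbm]
            for trp, res in meas.items()}

print(top_n_per_trp(measurements, 1))      # strongest resource per TRP
print(above_threshold(measurements, -80))  # resources meeting the threshold
```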
Optionally, the model activation module includes:
a model activation sub-module for deactivating a currently running second model and activating the first model in a case where the second model does not match the first model (this switch is sketched after the fifth information list below).
Optionally, the apparatus further comprises:
a fifth information sending module, configured to send fifth information to a network side device, where the fifth information includes at least one of the following:
a model identification of the first model;
a model identification of the second model;
and an activation time of the first model.
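As an illustration of the activation step and the fifth information report, and not a definitive implementation, consider the following Python sketch; the field names of the fifth information and the report callback are hypothetical.

```python
# Hypothetical sketch of the model activation step: if the currently running
# second model does not match the first model, deactivate it, activate the
# first model, and report the fifth information. Names are illustrative.

import time

running_model = "model_B"            # the second model currently running

def switch_model(first_model: str, report) -> None:
    global running_model
    if running_model != first_model:             # mismatch with the first model
        second_model = running_model
        running_model = first_model              # deactivate old, activate new
        report({                                 # fifth information (illustrative)
            "first_model_id": first_model,
            "second_model_id": second_model,
            "activation_time": time.time(),
        })

switch_model("model_A", report=print)
```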
The model determining device in the embodiment of the application can be an electronic device, for example, an electronic device with an operating system, or can be a component in the electronic device, for example, an integrated circuit or a chip.
The model determining device provided by the embodiment of the application can realize each process realized by the method embodiment and achieve the same technical effect, and in order to avoid repetition, the description is omitted here.
The embodiment of the application provides a data transmission device. Referring to fig. 11, there is shown a block diagram of a data transmission apparatus according to an embodiment of the present application, which is applicable to a second device. As shown in fig. 11, the apparatus may specifically include:
an information sending module 701, configured to send sixth information to the first device.
Wherein the sixth information includes at least one of:
third information for indicating an association relationship between position coordinates and scene information;
a first indication for indicating the scene information where the terminal device is located;
a second indication for indicating a mapping relationship between the machine learning model and the scene information;
and a third indication for indicating a machine learning model associated with the scene information where the terminal device is located.
The data transmission device in the embodiment of the application can be an electronic device, for example, an electronic device with an operating system, or can be a component in the electronic device, for example, an integrated circuit or a chip.
The data transmission device provided by the embodiment of the application can realize each process realized by the embodiment of the method and achieve the same technical effect, and in order to avoid repetition, the description is omitted here.
Optionally, as shown in fig. 12, the embodiment of the present application further provides a communication device 900, including a processor 901 and a memory 902, where the memory 902 stores a program or an instruction executable on the processor 901. When the communication device 900 is a network side device, the program or instruction, when executed by the processor 901, implements the steps of the foregoing model determining method embodiment or the steps of the foregoing data transmission method embodiment, and achieves the same technical effects. When the communication device 900 is a terminal device, the program or instruction, when executed by the processor 901, likewise implements the steps of the foregoing model determining method embodiment or the steps of the foregoing data transmission method embodiment, and can achieve the same technical effects. To avoid repetition, no further description is given here.
As shown in fig. 13, a schematic hardware structure of a terminal device for implementing an embodiment of the present application is shown.
The terminal device 1000 includes, but is not limited to, at least some of the following components: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Those skilled in the art will appreciate that the terminal device 1000 may also include a power source (e.g., a battery) for powering the various components, and the power source may be logically coupled to the processor 1010 via a power management system so that functions such as managing charging, discharging, and power consumption are performed through the power management system. The terminal device structure shown in fig. 13 does not constitute a limitation of the terminal device; the terminal device may include more or fewer components than those shown in the drawing, may combine some components, or may arrange the components differently, which will not be described in detail herein.
It should be appreciated that in embodiments of the present application, the input unit 1004 may include a graphics processing unit (Graphics Processing Unit, GPU) 10041 and a microphone 10042, where the graphics processor 10041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 can include two portions, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
In the embodiment of the present application, after receiving the downlink data from the network side device, the radio frequency unit 1001 may transmit the downlink data to the processor 1010 for processing, and in addition, the radio frequency unit 1001 may send the uplink data to the network side device. In general, the radio frequency unit 1001 includes, but is not limited to, an antenna, an amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
The memory 1009 may be used to store software programs or instructions and various data. The memory 1009 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system, application programs or instructions required for at least one function (such as a sound playing function and an image playing function), and the like. Further, the memory 1009 may include volatile memory or nonvolatile memory, or the memory 1009 may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), a static random access memory (Static RAM, SRAM), a dynamic random access memory (Dynamic RAM, DRAM), a synchronous dynamic random access memory (Synchronous DRAM, SDRAM), a double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM), an enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), a synchronous link dynamic random access memory (Synch Link DRAM, SLDRAM), or a direct Rambus random access memory (Direct Rambus RAM, DRRAM). The memory 1009 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 1010 may include one or more processing units, and optionally the processor 1010 integrates an application processor that primarily processes operations involving an operating system, user interface, application program, etc., and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 1010.
The embodiment of the application also provides network side equipment, which comprises a processor and a communication interface, wherein the communication interface is coupled with the processor, and the processor is used for running programs or instructions to realize the steps of the embodiment of the method. The network side device embodiment corresponds to the network side device method embodiment, and each implementation process and implementation manner of the network side device in the method embodiment can be applied to the network side device embodiment, and the same technical effects can be achieved.
Specifically, the embodiment of the present application further provides a network side device, as shown in fig. 14, where the network side device 1100 includes an antenna 111, a radio frequency device 112, a baseband device 113, a processor 114, and a memory 115. The antenna 111 is connected to a radio frequency device 112. In the uplink direction, the radio frequency device 112 receives information via the antenna 111, and transmits the received information to the baseband device 113 for processing. In the downlink direction, the baseband device 113 processes information to be transmitted, and transmits the processed information to the radio frequency device 112, and the radio frequency device 112 processes the received information and transmits the processed information through the antenna 111.
The method performed by the network side device in the above embodiment may be implemented in the baseband apparatus 113, where the baseband apparatus 113 includes a baseband processor.
The baseband apparatus 113 may, for example, include at least one baseband board on which a plurality of chips are disposed. As shown in fig. 14, one of the chips, for example a baseband processor, is connected to the memory 115 through a bus interface so as to call a program in the memory 115 and perform the network device operations shown in the above method embodiment.
The network-side device may also include a network interface 116, such as a common public radio interface (common public radio interface, CPRI).
Specifically, the network side device 1100 of the embodiment of the present application further includes instructions or programs stored in the memory 115 and executable on the processor 114; the processor 114 invokes the instructions or programs in the memory 115 to execute the method executed by each module shown in fig. 10 or fig. 11 and achieve the same technical effects, which are not repeated here to avoid repetition.
The embodiment of the application also provides network side equipment. As shown in fig. 15, the network-side device 1200 includes a processor 1201, a network interface 1202, and a memory 1203. The network interface 1202 is, for example, a common public radio interface (common public radio interface, CPRI).
Specifically, the network side device 1200 of the embodiment of the present application further includes instructions or programs stored in the memory 1203 and executable on the processor 1201; the processor 1201 invokes the instructions or programs in the memory 1203 to execute the method executed by each module shown in fig. 10 or fig. 11 and achieve the same technical effects, which are not repeated here to avoid repetition.
The embodiment of the application also provides a readable storage medium, on which a program or an instruction is stored, which when executed by a processor, implements each process of the foregoing method embodiment, and can achieve the same technical effects, so that repetition is avoided, and no further description is given here.
Wherein the processor is the processor in the terminal device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the application further provides a chip, which comprises a processor and a communication interface, wherein the communication interface is coupled with the processor, and the processor is used for running programs or instructions to realize the processes of the embodiment of the method, and can achieve the same technical effects, so that repetition is avoided, and the description is omitted here.
It should be understood that the chip referred to in the embodiments of the present application may also be referred to as a system-on-chip, a system chip, or the like.
The embodiments of the present application further provide a computer program/program product stored in a storage medium, where the computer program/program product is executed by at least one processor to implement each process of the foregoing method embodiments, and achieve the same technical effects, and are not repeated herein.
The embodiment of the application also provides a model determining system, which comprises a first device and a second device, wherein the first device can be used for executing the steps of the model determining method according to the first aspect, and the second device can be used for executing the steps of the data transmission method according to the second aspect.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises that element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; the functions may also be performed in a substantially simultaneous manner or in the reverse order, depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, and of course may also be implemented by hardware, although in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, or optical disk), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. In light of the present application, those of ordinary skill in the art may make many further forms without departing from the spirit of the present application and the scope of protection of the claims, all of which fall within the protection of the present application.