DETAILED DESCRIPTION OF EMBODIMENT(S) OF THE INVENTION
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the description of the present application, it is to be understood that the terms "center," "longitudinal," "lateral," "length," "width," "thickness," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," "clockwise," "counterclockwise," and the like indicate orientations and positional relationships based on the drawings, are used only for convenience and simplicity of description, and are not intended to indicate or imply that the referenced devices or elements must have a particular orientation or be constructed and operated in a particular orientation; they are therefore not to be construed as limiting the present application. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of the described features. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
In the description of the present application, it is to be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and the like are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; as a mechanical connection, an electrical connection, or a communication link; and as a direct connection or an indirect connection through an intermediate medium, including an internal communication between two elements. The specific meanings of the above terms in the present application can be understood by those of ordinary skill in the art as appropriate.
In this application, unless expressly stated or limited otherwise, a first feature being "on" or "under" a second feature may mean that the first and second features are in direct contact, or that the first and second features are not in direct contact but are in contact through another feature between them. Also, the first feature being "on," "above," or "over" the second feature includes the first feature being directly above or obliquely above the second feature, or merely indicates that the first feature is at a higher level than the second feature. The first feature being "under," "below," or "beneath" the second feature includes the first feature being directly below or obliquely below the second feature, or merely indicates that the first feature is at a lower level than the second feature.
The following disclosure provides many different embodiments, or examples, for implementing different features of the application. In order to simplify the disclosure of the present application, specific example components and arrangements are described below. Of course, they are merely examples and are not intended to limit the present application. Moreover, the present application may repeat reference numerals and/or letters in the various examples, which have been repeated for purposes of simplicity and clarity and do not in themselves dictate a relationship between the various embodiments and/or arrangements discussed. In addition, examples of various specific processes and materials are provided herein, but one of ordinary skill in the art may recognize applications of other processes and/or use of other materials.
Referring to the drawings, wherein like reference numbers refer to like elements throughout, the principles of the present application are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the application and should not be taken as limiting the application with respect to other embodiments that are not detailed herein.
While the principles of the application have been described in the foregoing text, this description is not meant to be limiting, and those of skill in the art will appreciate that various steps and operations described below may also be implemented in hardware. The principles of the present application may be employed in numerous other general-purpose or special-purpose computing or communication environments and configurations.
The application management and control method provided by the present application is mainly applied to electronic equipment, for example intelligent mobile electronic devices such as a smart bracelet, a smartphone, a tablet computer based on an Apple or Android system, or a notebook computer based on a Windows or Linux system. It should be noted that the application program may be a chat application, a video application, a music application, a shopping application, a shared-bicycle application, or a mobile banking application.
Referring to fig. 1, fig. 1 is a system schematic diagram of an application management and control apparatus according to an embodiment of the present disclosure. The application management and control apparatus is mainly used for: obtaining historical feature information x_i of an application program from a database, calculating the historical feature information x_i through an algorithm to obtain a training model, then inputting current feature information s of the application program into the training model for calculation, and judging from the calculation result whether the application program can be closed, so as to control a preset application program, for example by closing or freezing it.
Specifically, please refer to fig. 2; fig. 2 is a schematic view of an application scenario of the application management and control method according to an embodiment of the present application. In one embodiment, historical feature information x_i of an application program is obtained from a database and calculated through an algorithm to obtain a training model; when the application management and control apparatus detects that an application program enters the background of the electronic equipment, the current feature information s of the application program is input into the training model for calculation, and whether the application program can be closed is judged from the calculation result. For example, historical feature information x_i of application program a is obtained from the database and calculated through an algorithm to obtain a training model. When the apparatus detects that application program a enters the background of the electronic equipment, the current feature information s of application program a is input into the training model for calculation; if the calculation result indicates that application program a can be closed, application program a is closed. When the apparatus detects that application program b enters the background of the electronic equipment, the current feature information s of application program b is input into the training model for calculation; if the calculation result indicates that application program b needs to be retained, application program b is retained.
An execution subject of the application management and control method may be the application management and control apparatus provided in the embodiments of the present application, or an electronic device integrating the application management and control apparatus; the apparatus may be implemented in hardware or software.
Referring to fig. 3, fig. 3 is a flowchart illustrating an application management and control method according to an embodiment of the present disclosure. The application management and control method provided by the embodiment of the present application is applied to the electronic equipment described above, and the specific flow may be as follows:
Step S101: obtaining a sample vector set of the application program, wherein the sample vectors in the sample vector set include historical feature information x_i of multiple dimensions of the application program.
The sample vector set of the application program is acquired from a sample database; the sample vectors in the sample vector set comprise the historical feature information x_i of multiple dimensions of the application program.
The feature information of the multiple dimensions may refer to Table 1.
TABLE 1
It should be noted that the 10-dimensional feature information shown in Table 1 is only one example in the embodiments of the present application; the present application is not limited thereto, and one of the 10 dimensions, at least two of them, or feature information of other dimensions may also be used, for example whether the device is currently charging, the current battery level, or whether WiFi is currently connected.
In one embodiment, historical feature information of the following 6 dimensions may be selected (a hypothetical encoding of such a vector is sketched after this list):
A. the time the application has resided in the background;
B. whether the screen is bright, e.g., a bright screen is recorded as 1 and an off screen is recorded as 0;
C. the total number of uses in the current week;
D. the total usage duration in the current week;
E. whether WiFi is on, e.g., WiFi on is recorded as 1 and WiFi off is recorded as 0; and
F. whether the device is currently charging, e.g., currently charging is recorded as 1 and not charging is recorded as 0.
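As an illustrative aid only, the following is a minimal Python sketch of how such a 6-dimensional feature vector x_i might be encoded. The function name, argument names, and sample values are hypothetical and are not part of the described method.

```python
def build_feature_vector(background_seconds, screen_on, weekly_uses,
                         weekly_use_seconds, wifi_on, charging):
    """Encode one observation of an application as a 6-dimensional vector.

    Boolean states (dimensions B, E, F) are recorded as 1/0 as described
    above; dimensions A, C, and D are raw durations/counts.
    """
    return [
        float(background_seconds),   # A. time resident in the background
        1.0 if screen_on else 0.0,   # B. screen bright (1) or off (0)
        float(weekly_uses),          # C. total number of uses this week
        float(weekly_use_seconds),   # D. total usage duration this week
        1.0 if wifi_on else 0.0,     # E. WiFi on (1) or off (0)
        1.0 if charging else 0.0,    # F. charging (1) or not (0)
    ]

# Example: an app resident in the background for 5 minutes with the screen
# off, used 42 times for 2 hours this week, on WiFi, not charging.
x_i = build_feature_vector(300, False, 42, 7200, True, False)
print(x_i)  # [300.0, 0.0, 42.0, 7200.0, 1.0, 0.0]
```

In practice the raw durations and counts would typically be normalized before being fed to the network; the sketch leaves them raw for readability.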
Step S102: calculating the sample vector set by using a BP (back-propagation) neural network algorithm to generate a training model.
Referring to fig. 4, fig. 4 is a schematic flowchart illustrating an application management and control method according to an embodiment of the present disclosure. In one embodiment, the step S102 may include:
Step S1021: defining a network structure; and
Step S1022: bringing the sample vector set into the network structure for calculation to obtain the training model.
In step S1021, defining the network structure comprises:
Step S1021a: setting an input layer, the input layer comprising N nodes, where the number of nodes of the input layer is the same as the dimension of the historical feature information x_i.
The dimension of the historical feature information x_i is less than 10, and the number of nodes of the input layer is accordingly less than 10, so as to simplify the computation.
In one embodiment, the historical feature information x_i has 6 dimensions, and the input layer comprises 6 nodes.
Step S1021b, setting a hidden layer, where the hidden layer includes M nodes.
The hidden layer may comprise a plurality of hidden hierarchies. The number of nodes of each hidden hierarchy is kept no more than 10 so as to simplify the computation.
In one embodiment, the hidden layer includes a first hidden hierarchy, a second hidden hierarchy, and a third hidden hierarchy. The first hidden hierarchy includes 10 nodes, the second hidden hierarchy includes 5 nodes, and the third hidden hierarchy includes 5 nodes.
Step S1021c: setting a classification layer, wherein the classification layer adopts a softmax function:

$$p_k = \frac{e^{Z_k}}{\sum_{j=1}^{C} e^{Z_j}}$$

where p_k is the prediction probability value of the k-th class, Z_k is the k-th intermediate value, C is the number of prediction classes, and Z_j is the j-th intermediate value.
Step S1021d, setting an output layer, the output layer comprising 2 nodes.
Step S1021e: setting an activation function, wherein the activation function adopts a sigmoid function:

$$f(x) = \frac{1}{1 + e^{-x}}$$

where f(x) ranges from 0 to 1.
Step S1021f: setting a batch size A.
The batch size A can be flexibly adjusted according to actual conditions and may range from 50 to 200.
In one embodiment, the batch size is 128.
Step S1021g: setting a learning rate B.
The learning rate B can be flexibly adjusted according to actual conditions and may range from 0.1 to 1.5.
In one embodiment, the learning rate is 0.9.
It should be noted that the sequence of the steps S1021a, S1021b, S1021c, S1021d, S1021e, S1021f, and S1021g can be flexibly adjusted.
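For concreteness, the following Python/numpy sketch shows one possible realization of the network structure defined in steps S1021a to S1021g, using the values of the embodiment above (6 input nodes; hidden hierarchies of 10, 5, and 5 nodes; a softmax classification over 2 classes; batch size 128; learning rate 0.9). It is an illustrative sketch rather than the claimed implementation, and it merges the classification layer and the 2-node output layer into a single softmax output for simplicity.

```python
import numpy as np

LAYER_SIZES = [6, 10, 5, 5, 2]  # input, three hidden hierarchies, softmax output
BATCH_SIZE = 128                # A, step S1021f
LEARNING_RATE = 0.9             # B, step S1021g

def sigmoid(x):
    # Activation function of step S1021e; f(x) ranges over (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def softmax(z):
    # Classification layer of step S1021c: p_k = e^{Z_k} / sum_j e^{Z_j}.
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # shifted for numerical stability
    return e / e.sum(axis=-1, keepdims=True)

def init_network(sizes=LAYER_SIZES, seed=0):
    # One (weights, biases) pair per connection between consecutive layers.
    rng = np.random.default_rng(seed)
    return [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]
```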
In step S1022, the step of bringing the sample vector set into the network structure for calculation to obtain the training model may include:
Step S1022a: inputting the sample vector set into the input layer for calculation to obtain an output value of the input layer.
Step S1022b: inputting the output value of the input layer into the hidden layer to obtain an output value of the hidden layer.
Wherein the output value of the input layer is the input value of the hidden layer.
In one embodiment, the hidden layer may include a plurality of hidden hierarchies. The output value of the input layer is the input value of the first hidden hierarchy, the output value of the first hidden hierarchy is the input value of the second hidden hierarchy, the output value of the second hidden hierarchy is the input value of the third hidden hierarchy, and so on.
Step S1022c: inputting the output value of the hidden layer into the classification layer for calculation to obtain the prediction probability value [p_1 p_2]^T.
Wherein the output value of the hidden layer is the input value of the classification layer.
In one embodiment, the hidden layer may include a plurality of hidden hierarchies, and the output value of the last hidden hierarchy is the input value of the classification layer.
Step S1022d: bringing the prediction probability value into the output layer for calculation to obtain the prediction result value y: when p_1 > p_2, y = [1 0]^T; when p_1 ≤ p_2, y = [0 1]^T.
Wherein the output value of the classification layer is the input value of the output layer.
Step S1022e, the network structure is corrected according to the prediction result value y, and a training model is obtained.
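Continuing the numpy sketch above, steps S1022a to S1022e might be realized as a forward pass followed by a correction of the weights. The text describes the correction only as "correcting the network structure according to the prediction result value y"; standard backpropagation with a cross-entropy loss is one plausible reading and is assumed here purely for illustration.

```python
def forward(network, x):
    # Steps S1022a-S1022c: propagate a batch through the input layer, the
    # hidden hierarchies (sigmoid), and the classification layer (softmax).
    activations = [x]
    for i, (w, b) in enumerate(network):
        z = activations[-1] @ w + b
        is_last = (i == len(network) - 1)
        activations.append(softmax(z) if is_last else sigmoid(z))
    return activations  # activations[-1] holds [p_1, p_2] per sample

def train_step(network, x_batch, y_batch, lr=LEARNING_RATE):
    # Step S1022e (as assumed): correct the weights from the prediction error.
    acts = forward(network, x_batch)
    delta = (acts[-1] - y_batch) / len(x_batch)  # softmax + cross-entropy gradient
    for i in reversed(range(len(network))):
        w, b = network[i]
        grad_w = acts[i].T @ delta
        grad_b = delta.sum(axis=0)
        if i > 0:  # propagate the error through the preceding sigmoid
            delta = (delta @ w.T) * acts[i] * (1.0 - acts[i])
        network[i] = (w - lr * grad_w, b - lr * grad_b)
    return network
```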
Step S103: when the application program enters the background, inputting the current feature information s of the application program into the training model for calculation.
Referring to fig. 4, in an embodiment, the step S103 may include:
Step S1031: collecting the current feature information s of the application program.
The dimension of the collected current feature information s of the application program is the same as that of the acquired historical feature information x_i.
Step S1032: substituting the current feature information s into the training model for calculation.
The current feature information s is input into the training model for calculation to obtain the prediction probability value [p_1' p_2']^T of the classification layer: when p_1' > p_2', y = [1 0]^T; when p_1' ≤ p_2', y = [0 1]^T.
Step S104: judging whether the application program needs to be closed.
When y = [1 0]^T, it is determined that the application program needs to be closed; when y = [0 1]^T, it is determined that the application program needs to be retained.
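Continuing the same sketch, steps S103 and S104 reduce to a forward pass on the current feature information s followed by a comparison of the two class probabilities; the helper below is hypothetical.

```python
import numpy as np

def should_close(network, current_features):
    # Feed s into the trained model and read off [p_1', p_2']; p_1' > p_2'
    # corresponds to y = [1 0]^T (close), otherwise y = [0 1]^T (retain).
    x = np.asarray(current_features, dtype=float)[None, :]
    p = forward(network, x)[-1][0]
    return p[0] > p[1]
```

For example, when application a enters the background, `should_close(net, s)` returning True corresponds to the determination that application a can be closed.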
In summary, the application management and control method provided by the present application obtains the historical feature information x_i, generates a training model by using a BP neural network algorithm, and, upon detecting that an application program enters the background, brings the current feature information s of the application program into the training model, so as to judge whether the application program needs to be closed and to close it intelligently.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an application management and control apparatus according to an embodiment of the present disclosure. The apparatus 30 includes an obtaining module 31, a generating module 32, a calculating module 33, and a judging module 34.
It should be noted that the application program may be a chat application program, a video application program, a music application program, a shopping application program, a shared bicycle application program, or a mobile banking application program.
The obtaining module 31 is configured to obtain a sample vector set of the application program, where the sample vectors in the sample vector set include historical feature information x_i of multiple dimensions of the application program.
The sample vector set of the application program is acquired from a sample database; the sample vectors in the sample vector set comprise the historical feature information x_i of multiple dimensions of the application program.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an application management and control apparatus according to an embodiment of the present disclosure. The apparatus 30 further comprises a detection module 35 for detecting that the application program enters the background.
The apparatus 30 may also include a storage module 36, which is used for storing the historical feature information x_i of the application program.
The feature information of the multiple dimensions may refer to Table 2.
TABLE 2
It should be noted that the 10-dimensional feature information shown in Table 2 is only one example in the embodiments of the present application; the present application is not limited thereto, and one of the 10 dimensions, at least two of them, or feature information of other dimensions may also be used, for example whether the device is currently charging, the current battery level, or whether WiFi is currently connected.
In one embodiment, historical feature information of the following 6 dimensions may be selected:
A. the time the application has resided in the background;
B. whether the screen is bright, e.g., a bright screen is recorded as 1 and an off screen is recorded as 0;
C. the total number of uses in the current week;
D. the total usage duration in the current week;
E. whether WiFi is on, e.g., WiFi on is recorded as 1 and WiFi off is recorded as 0; and
F. whether the device is currently charging, e.g., currently charging is recorded as 1 and not charging is recorded as 0.
The generating module 32 is configured to calculate the sample vector set by using a BP neural network algorithm to generate a training model.
The generating module 32 inputs the historical feature information x_i acquired by the obtaining module 31 into the BP neural network algorithm for training.
Referring to fig. 6, the generating module 32 includes a defining module 321 and a solving module 322.
The defining module 321 is configured to define a network structure.
The defining module 321 may include an input layer defining module 3211, a hidden layer defining module 3212, a classification layer defining module 3213, an output layer defining module 3214, an activation function defining module 3215, a batch size defining module 3216, and a learning rate defining module 3217.
The input layer defining module 3211 is configured to set an input layer comprising N nodes, where the number of nodes of the input layer is the same as the dimension of the historical feature information x_i.
The dimension of the historical feature information x_i is less than 10, and the number of nodes of the input layer is accordingly less than 10, so as to simplify the computation.
In one embodiment, the historical feature information x_i has 6 dimensions, and the input layer comprises 6 nodes.
The hidden layer defining module 3212 is configured to set a hidden layer, which includes M nodes.
The hidden layer may comprise a plurality of hidden hierarchies. The number of nodes of each hidden hierarchy is kept no more than 10 so as to simplify the computation.
In one embodiment, the hidden layer includes a first hidden hierarchy, a second hidden hierarchy, and a third hidden hierarchy. The first hidden hierarchy includes 10 nodes, the second hidden hierarchy includes 5 nodes, and the third hidden hierarchy includes 5 nodes.
The classification layer defining module 3213 is configured to set a classification layer, where the classification layer adopts a softmax function:

$$p_k = \frac{e^{Z_k}}{\sum_{j=1}^{C} e^{Z_j}}$$

where p_k is the prediction probability value of the k-th class, Z_k is the k-th intermediate value, C is the number of prediction classes, and Z_j is the j-th intermediate value.
The output layer defining module 3214 is configured to set an output layer, where the output layer includes 2 nodes.
The activation function defining module 3215 is configured to set an activation function, where the activation function adopts a sigmoid function:

$$f(x) = \frac{1}{1 + e^{-x}}$$

where f(x) ranges from 0 to 1.
The batch size defining module 3216 is configured to set a batch size A.
The batch size A can be flexibly adjusted according to actual conditions and may range from 50 to 200.
In one embodiment, the batch size is 128.
The learning rate defining module 3217 is configured to set a learning rate B.
The learning rate B can be flexibly adjusted according to actual conditions and may range from 0.1 to 1.5.
In one embodiment, the learning rate is 0.9.
It should be noted that the sequence in which the input layer defining module 3211 sets the input layer, the hidden layer defining module 3212 sets the hidden layer, the classification layer defining module 3213 sets the classification layer, the output layer defining module 3214 sets the output layer, the activation function defining module 3215 sets the activation function, the batch size defining module 3216 sets the batch size, and the learning rate defining module 3217 sets the learning rate can be flexibly adjusted.
The solving module 322 is configured to bring the sample vector set into the network structure for calculation, so as to obtain a training model.
The solving module 322 may include a first solving module 3221, a second solving module 3222, a third solving module 3223, a fourth solving module 3224, and a correcting module 3225.
The first solving module 3221 is configured to input the sample vector set into the input layer for calculation, so as to obtain an output value of the input layer.
The second solving module 3222 is configured to input the output value of the input layer into the hidden layer, so as to obtain an output value of the hidden layer.
Wherein the output value of the input layer is the input value of the hidden layer.
In one embodiment, the hidden layer may include a plurality of hidden hierarchies. The output value of the input layer is the input value of the first hidden hierarchy, the output value of the first hidden hierarchy is the input value of the second hidden hierarchy, the output value of the second hidden hierarchy is the input value of the third hidden hierarchy, and so on.
The third solving module 3223 is configured to input the output value of the hidden layer into the classification layer for calculation, so as to obtain the prediction probability value [p_1 p_2]^T.
Wherein the output value of the hidden layer is the input value of the classification layer.
The fourth solving module 3224 is configured to bring the prediction probability value into the output layer for calculation to obtain the prediction result value y: when p_1 > p_2, y = [1 0]^T; when p_1 ≤ p_2, y = [0 1]^T.
Wherein the output value of the classification layer is the input value of the output layer.
The correcting module 3225 is configured to correct the network structure according to the prediction result value y, so as to obtain the training model.
The calculating module 33 is configured to input the current feature information s of the application program into the training model for calculation when the application program enters the background.
Referring to fig. 6, in an embodiment, the calculating module 33 may include an acquisition module 331 and an operation module 332.
The acquisition module 331 is configured to collect the current feature information s of the application program.
The dimension of the collected current feature information s of the application program is the same as that of the acquired historical feature information x_i.
The operation module 332 is used for substituting the current feature information s into the training model for calculation.
The current feature information s is input into the training model for calculation to obtain the prediction probability value [p_1' p_2']^T of the classification layer: when p_1' > p_2', y = [1 0]^T; when p_1' ≤ p_2', y = [0 1]^T.
In an embodiment, the acquisition module 331 is configured to collect the current feature information s periodically according to a predetermined acquisition schedule and store it in the storage module 36; the acquisition module 331 is further configured to retrieve the current feature information s corresponding to the time point at which the application program enters the background and input it into the operation module 332 to be taken into the training model for calculation.
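As an illustration of this timed acquisition, the following sketch stores periodically collected feature vectors and retrieves the one nearest to the moment the application program enters the background. The class name, sampling interval, and store depth are hypothetical choices, not details taken from the embodiment.

```python
import time
from collections import deque

class FeatureStore:
    """Toy stand-in for the acquisition module 331 / storage module 36 pair."""

    def __init__(self, max_samples=1024):
        self.samples = deque(maxlen=max_samples)  # (timestamp, feature vector)

    def collect(self, feature_vector):
        # Called on the predetermined acquisition schedule (e.g., once a minute).
        self.samples.append((time.time(), feature_vector))

    def nearest(self, timestamp):
        # Feature vector sampled closest to the background-entry time point;
        # assumes at least one sample has been collected.
        return min(self.samples, key=lambda tv: abs(tv[0] - timestamp))[1]
```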
The judging module 34 is configured to judge whether the application program needs to be closed.
When y = [1 0]^T, it is determined that the application program needs to be closed; when y = [0 1]^T, it is determined that the application program needs to be retained.
The apparatus 30 may further include a shutdown module 37 configured to close the application program when it is determined that the application program needs to be closed.
The application management and control apparatus provided by the present application obtains the historical feature information x_i, generates a training model by using a BP neural network algorithm, and, upon detecting that an application program enters the background, brings the current feature information s of the application program into the training model, thereby judging whether the application program needs to be closed and closing it intelligently.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. The electronic device 500 includes a processor 501 and a memory 502. The processor 501 is electrically connected to the memory 502.
The processor 501 is the control center of the electronic device 500; it connects the various parts of the whole electronic device 500 by using various interfaces and lines, and executes the various functions of the electronic device and processes data by running or loading application programs stored in the memory 502 and calling the data stored in the memory 502, thereby monitoring the electronic device 500 as a whole.
In this embodiment, the processor 501 in the electronic device 500 loads instructions corresponding to the processes of one or more application programs into the memory 502 according to the following steps, and the processor 501 runs the application programs stored in the memory 502, so as to implement various functions:
obtaining a sample vector set of the application program, wherein the sample vectors in the sample vector set comprise historical feature information x_i of multiple dimensions of the application program;
calculating the sample vector set by adopting a BP neural network algorithm to generate a training model;
when the application program enters the background, inputting the current feature information s of the application program into the training model for calculation; and
judging whether the application program needs to be closed.
It should be noted that the application program may be a chat application program, a video application program, a music application program, a shopping application program, a shared bicycle application program, or a mobile banking application program.
The sample vector set of the application program is acquired from a sample database; the sample vectors in the sample vector set comprise the historical feature information x_i of multiple dimensions of the application program.
The feature information of the multiple dimensions may refer to Table 3.
TABLE 3
It should be noted that the 10-dimensional feature information shown in Table 3 is only one example in the embodiments of the present application; the present application is not limited thereto, and one of the 10 dimensions, at least two of them, or feature information of other dimensions may also be used, for example whether the device is currently charging, the current battery level, or whether WiFi is currently connected.
In one embodiment, historical feature information of the following 6 dimensions may be selected:
A. the time the application has resided in the background;
B. whether the screen is bright, e.g., a bright screen is recorded as 1 and an off screen is recorded as 0;
C. the total number of uses in the current week;
D. the total usage duration in the current week;
E. whether WiFi is on, e.g., WiFi on is recorded as 1 and WiFi off is recorded as 0; and
F. whether the device is currently charging, e.g., currently charging is recorded as 1 and not charging is recorded as 0.
In one embodiment, the processor 501 calculating the sample vector set by using a BP neural network algorithm to generate the training model further includes:
defining a network structure; and
bringing the sample vector set into the network structure for calculation to obtain the training model.
Defining the network structure comprises:
setting an input layer, wherein the input layer comprises N nodes, and the number of nodes of the input layer is the same as the dimension of the historical feature information x_i.
The dimension of the historical feature information x_i is less than 10, and the number of nodes of the input layer is accordingly less than 10, so as to simplify the computation.
In one embodiment, the historical feature information x_i has 6 dimensions, and the input layer comprises 6 nodes.
Setting a hidden layer, wherein the hidden layer comprises M nodes.
The hidden layer may comprise a plurality of hidden hierarchies. The number of nodes of each hidden hierarchy is kept no more than 10 so as to simplify the computation.
In one embodiment, the hidden layer includes a first hidden hierarchy, a second hidden hierarchy, and a third hidden hierarchy. The first hidden hierarchy includes 10 nodes, the second hidden hierarchy includes 5 nodes, and the third hidden hierarchy includes 5 nodes.
Setting a classification layer, wherein the classification layer adopts a softmax function:

$$p_k = \frac{e^{Z_k}}{\sum_{j=1}^{C} e^{Z_j}}$$

where p_k is the prediction probability value of the k-th class, Z_k is the k-th intermediate value, C is the number of prediction classes, and Z_j is the j-th intermediate value.
Setting an output layer, wherein the output layer comprises 2 nodes.
Setting an activation function, wherein the activation function adopts a sigmoid function:

$$f(x) = \frac{1}{1 + e^{-x}}$$

where f(x) ranges from 0 to 1.
Setting a batch size A.
The batch size A can be flexibly adjusted according to actual conditions and may range from 50 to 200.
In one embodiment, the batch size is 128.
Setting a learning rate B.
The learning rate B can be flexibly adjusted according to actual conditions and may range from 0.1 to 1.5.
In one embodiment, the learning rate is 0.9.
It should be noted that the sequence of setting the input layer, the hidden layer, the classification layer, the output layer, the activation function, the batch size, and the learning rate can be flexibly adjusted.
The step of bringing the sample vector set into a network structure for calculation to obtain a training model may include:
and inputting the sample vector set in the input layer for calculation to obtain an output value of the input layer.
And inputting the output value of the input layer into the hidden layer to obtain the output value of the hidden layer.
Wherein the output value of the input layer is the input value of the hidden layer.
In one embodiment, the hidden layer may include a plurality of hidden hierarchies. The output value of the input layer is the input value of the first hidden hierarchy, the output value of the first hidden hierarchy is the input value of the second hidden hierarchy, the output value of the second hidden hierarchy is the input value of the third hidden hierarchy, and so on.
Inputting the output value of the hidden layer into the classification layer for calculation to obtain the prediction probability value [p_1 p_2]^T.
Wherein the output value of the hidden layer is the input value of the classification layer.
In one embodiment, the hidden layer may include a plurality of hidden hierarchies, and the output value of the last hidden hierarchy is the input value of the classification layer.
Substituting the prediction probability value into the output layer for calculation to obtain the prediction result value y: when p_1 > p_2, y = [1 0]^T; when p_1 ≤ p_2, y = [0 1]^T.
Wherein the output value of the classification layer is the input value of the output layer.
Correcting the network structure according to the prediction result value y to obtain the training model.
When the application program enters the background, the step of inputting the current characteristic information s of the application program into the training model for calculation comprises the following steps:
and collecting the current characteristic information s of the application program.
Acquiring the dimension of the current characteristic information s of the application program and the acquired historical characteristic information x of the application programiAre the same.
And substituting the current characteristic information s into the training model for calculation.
Inputting the current characteristic information s into the training model for calculation to obtain the prediction probability value [ p ] of the classification layer1’ p2’]TWhen p is1' greater than p2When y is 10]TWhen p is1' less than or equal to p2When y is ═ 01]T。
In the step of judging whether the application program needs to be closed, when y is ═ 10]TDetermining that the application needs to be closed; when y is ═ 01]TAnd determining that the application needs to be reserved.
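Putting the foregoing sketches together, a compact end-to-end flow might look as follows. The random training data merely stands in for a real sample database, and the labelling rule is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
net = init_network()

# Hypothetical sample set: 1000 six-dimensional vectors x_i with one-hot
# labels y, where [1 0] means "closable" and [0 1] means "retain".
X = rng.random((1000, 6))
labels = (X[:, 0] <= 0.5).astype(int)  # toy rule: long background residence -> closable
Y = np.eye(2)[labels]

# Train on mini-batches of BATCH_SIZE (step S102).
for start in range(0, len(X), BATCH_SIZE):
    train_step(net, X[start:start + BATCH_SIZE], Y[start:start + BATCH_SIZE])

# Decide when an application enters the background (steps S103-S104).
s = np.clip(build_feature_vector(300, False, 42, 7200, True, False), 0, 1)
print("close" if should_close(net, s) else "retain")
```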
The memory 502 may be used to store applications and data. The memory 502 stores programs containing instructions executable by the processor, and the programs may constitute various functional modules. The processor 501 executes various functional applications and performs data processing by running the programs stored in the memory 502.
In some embodiments, as shown in fig. 8, fig. 8 is a schematic structural diagram of an electronic device provided in an embodiment of the present application. The electronic device 500 further comprises: a radio frequency circuit 503, a display 504, a control circuit 505, an input unit 506, an audio circuit 507, a sensor 508, and a power supply 509. The processor 501 is electrically connected to the radio frequency circuit 503, the display 504, the control circuit 505, the input unit 506, the audio circuit 507, the sensor 508, and the power supply 509.
The radio frequency circuit 503 is used for transmitting and receiving radio frequency signals to communicate with a server or other electronic devices through a wireless communication network.
The display 504 may be used to display information entered by or provided to the user as well as various graphical user interfaces of the terminal, which may be composed of images, text, icons, video, and any combination thereof.
The control circuit 505 is electrically connected to the display 504 and is configured to control the display 504 to display information.
The input unit 506 may be used to receive input numbers, character information, or user characteristic information (e.g., a fingerprint), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The audio circuit 507 may provide an audio interface between the user and the terminal through a speaker and a microphone.
The sensor 508 is used to collect external environmental information. The sensor 508 may include one or more of an ambient light sensor, an acceleration sensor, a gyroscope, and the like.
The power supply 509 is used to power the various components of the electronic device 500. In some embodiments, the power supply 509 may be logically coupled to the processor 501 through a power management system, so as to manage charging, discharging, and power consumption through the power management system.
Although not shown in fig. 8, the electronic device 500 may further include a camera, a Bluetooth module, and the like, which are not described in detail here.
The electronic device provided by the present application acquires the historical feature information x_i, generates a training model by using a BP neural network algorithm, and, upon detecting that an application program enters the background, brings the current feature information s of the application program into the training model, thereby judging whether the application program needs to be closed and closing it intelligently.
An embodiment of the present application further provides a storage medium storing a plurality of instructions, the instructions being suitable for being loaded by a processor to execute the application management and control method of any one of the above embodiments.
The application management and control method, apparatus, medium, and electronic device provided by the embodiments of the present application belong to the same concept; the specific implementation process is detailed throughout the specification and is not repeated here.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable storage medium, and the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
The application management and control method, apparatus, medium, and electronic device provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and embodiments of the present application, and the descriptions of the above embodiments are only intended to help in understanding the present application. Meanwhile, those skilled in the art may, according to the idea of the present application, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present application.