
Photographing processing method and device, storage medium and electronic equipment

Info

Publication number
CN111800569B
CN111800569B · CN201910282456.4A · CN201910282456A
Authority
CN
China
Prior art keywords
photographing
feature
matrix
user
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201910282456.4A
Other languages
Chinese (zh)
Other versions
CN111800569A (en)
Inventor
陈仲铭
何明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910282456.4A
Publication of CN111800569A
Application granted
Publication of CN111800569B
Expired - Fee Related
Anticipated expiration

Abstract



An embodiment of the present application discloses a photographing processing method, apparatus, storage medium and electronic device. When a photographing operation is detected, panoramic data is collected to generate a panoramic feature vector, and historical photographing data is obtained to generate a historical feature vector; a user feature matrix is generated from the historical feature vector and the panoramic feature vector; the original image produced by the photographing operation is obtained, and image adjustment parameters are obtained from the original image, the user feature matrix and a pre-trained classification model; the original image is then adjusted according to the image adjustment parameters. The solution derives personalized image adjustment parameters in real time from the photo content and the user's historical photographing data, providing a fine-grained, targeted photo optimization scheme for the user.


Description

Photographing processing method and device, storage medium and electronic equipment
Technical Field
The application relates to the technical field of terminals, in particular to a photographing processing method and device, a storage medium and electronic equipment.
Background
Terminals such as mobile phones and tablet computers mainly have the following two optimization schemes for shot pictures. The first uses the camera's preset filter function: after the user selects a photographing mode, the terminal processes the original photo with the default filter scheme preset for that mode; for example, if the user selects a beauty mode, the terminal applies a default beauty filter to the photo. The second uses a neural network to detect the content currently being shot, directly offers the user a default filter matched to the detected content, and simultaneously displays other filter previews on the terminal for the user to choose from.
Both schemes are realized through a set of specific filters built into the terminal; the granularity of such intelligent photographing optimization is coarse, and no targeted photo processing scheme can be provided based on the user's photographing habits and preferences.
Disclosure of Invention
The embodiment of the application provides a photographing processing method and device, a storage medium and electronic equipment, which can be used for providing a photo processing scheme for a user in a targeted manner by combining photographing habits and preferences of the user.
In a first aspect, an embodiment of the present application provides a photographing processing method, including:
when the photographing operation is detected, collecting panoramic data, and generating a panoramic feature vector according to the panoramic data;
acquiring historical photographing data, and generating a historical feature vector according to the photographing data;
generating a user feature matrix according to the historical feature vector and the panoramic feature vector;
when a photographing instruction is received, acquiring an original image obtained by the photographing operation, and acquiring image adjustment parameters according to the original image, the user characteristic matrix and a pre-trained classification model;
and adjusting the original image according to the image adjusting parameter.
In a second aspect, an embodiment of the present application provides a photographing processing apparatus, including:
the first feature extraction module is used for collecting panoramic data when a photographing operation is detected, and generating a panoramic feature vector according to the panoramic data;
the second feature extraction module is used for acquiring historical photographing data and generating a historical feature vector according to the photographing data;
the feature fusion module is used for generating a user feature matrix according to the historical feature vector and the panoramic feature vector;
the parameter acquisition module is used for acquiring an original image obtained by the photographing operation when a photographing instruction is received, and acquiring image adjustment parameters according to the original image, the user feature matrix and a pre-trained classification model;
and the image adjusting module is used for adjusting the original image according to the image adjusting parameters.
In a third aspect, a storage medium is provided in this application, where a computer program is stored, and when the computer program runs on a computer, the computer is caused to execute a photographing processing method according to any embodiment of the application.
In a fourth aspect, an embodiment of the present application provides an electronic device, which includes a processor and a memory, where the memory has a computer program, and the processor is configured to execute the photographing processing method according to any embodiment of the present application by calling the computer program.
According to the technical scheme, when a photographing operation is detected, panoramic data is collected and a panoramic feature vector is generated from it; the user's historical photographing data is obtained and a historical feature vector is generated from it; a user feature matrix is generated from the historical feature vector and the panoramic feature vector; the image produced by the photographing operation is obtained, image adjustment parameters are obtained from the image, the user feature matrix and a pre-trained classification model, and the original image is adjusted according to those parameters. Because the panoramic data collected when the user takes a picture, the user's historical photographing habits and the shot picture itself are combined into the user feature matrix in real time, the features reflect both the user's current situation and the user's photographing habits and preferences; determining the image adjustment parameters from these features realizes a targeted photo processing scheme for the user.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic view of a panoramic sensing architecture of a photographing processing method according to an embodiment of the present application.
Fig. 2 is a first flowchart of a photographing processing method according to an embodiment of the present application.
Fig. 3 is a second flowchart of the photographing processing method according to the embodiment of the present application.
Fig. 4 is a third flowchart illustrating a photographing processing method according to an embodiment of the present application.
Fig. 5 is a schematic diagram of an image display manner of the photographing processing method according to the embodiment of the application.
Fig. 6 is a schematic structural diagram of a photographing processing device according to an embodiment of the present application.
Fig. 7 is a schematic structural diagram of a first electronic device according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of a second electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without inventive step, are within the scope of the present application.
The terms "first", "second", and "third", etc. in this application are used to distinguish between different objects and not to describe a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or modules is not limited to only those steps or modules listed, but rather, some embodiments may include other steps or modules not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 is a schematic view of a panoramic sensing architecture of a photographing processing method according to an embodiment of the present application. The photographing processing method is applied to electronic equipment. A panoramic perception framework is arranged in the electronic equipment. The panoramic perception framework is the integration of hardware and software used for realizing the photographing processing method in electronic equipment.
The panoramic perception architecture comprises an information perception layer, a data processing layer, a feature extraction layer, a scene modeling layer and an intelligent service layer.
The information perception layer is used for acquiring information of the electronic equipment or information in an external environment. The information perception layer may include a plurality of sensors. For example, the information perception layer includes a plurality of sensors such as a distance sensor, a magnetic field sensor, a light sensor, an acceleration sensor, a fingerprint sensor, a Hall sensor, a position sensor, a gyroscope, an inertial sensor, an attitude sensor, a barometer, and a heart rate sensor.
Among other things, a distance sensor may be used to detect a distance between the electronic device and an external object. The magnetic field sensor may be used to detect magnetic field information of the environment in which the electronic device is located. The light sensor can be used for detecting light information of the environment where the electronic equipment is located. The acceleration sensor may be used to detect acceleration data of the electronic device. The fingerprint sensor may be used to collect fingerprint information of a user. The Hall sensor is a magnetic field sensor manufactured according to the Hall effect, and can be used for realizing automatic control of electronic equipment. The position sensor may be used to detect the geographic location where the electronic device is currently located. Gyroscopes may be used to detect angular velocity of an electronic device in various directions. Inertial sensors may be used to detect motion data of an electronic device. The attitude sensor may be used to sense attitude information of the electronic device. A barometer may be used to detect the barometric pressure of the environment in which the electronic device is located. The heart rate sensor may be used to detect heart rate information of the user.
And the data processing layer is used for processing the data acquired by the information perception layer. For example, the data processing layer may perform data cleaning, data integration, data transformation, data reduction, and the like on the data acquired by the information sensing layer.
The data cleaning refers to cleaning a large amount of data acquired by the information sensing layer to remove invalid data and repeated data. The data integration refers to integrating a plurality of single-dimensional data acquired by the information perception layer into a higher or more abstract dimension so as to comprehensively process the data of the plurality of single dimensions. The data transformation refers to performing data type conversion or format conversion on the data acquired by the information sensing layer so that the transformed data can meet the processing requirement. The data reduction means that the data volume is reduced to the maximum extent on the premise of keeping the original appearance of the data as much as possible.
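As an illustration only, the four operations of the data processing layer could look like the minimal Python sketch below; the record fields, helper names and dimensions are assumptions for the sketch, not part of the present application:

```python
import numpy as np

def clean(records):
    """Data cleaning: drop invalid (None) readings and exact duplicates."""
    seen, kept = set(), []
    for r in records:  # r is an assumed dict like {"sensor": ..., "timestamp": ..., "value": ...}
        key = (r["sensor"], r["timestamp"])
        if r["value"] is not None and key not in seen:
            seen.add(key)
            kept.append(r)
    return kept

def integrate(accel_xyz):
    """Data integration: fuse three single-dimensional axes into one motion magnitude."""
    return float(np.linalg.norm(accel_xyz))

def transform(value, lo, hi):
    """Data transformation: rescale a reading into [0, 1] to meet processing requirements."""
    return (value - lo) / (hi - lo)

def reduce_dim(samples, keep=8):
    """Data reduction: keep the strongest principal directions of the sample matrix."""
    centered = samples - samples.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:keep].T

compact = reduce_dim(np.random.rand(100, 20), keep=8)  # 100 samples, 20 -> 8 dimensions
```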
The characteristic extraction layer is used for extracting characteristics of the data processed by the data processing layer so as to extract the characteristics included in the data. The extracted features may reflect the state of the electronic device itself or the state of the user or the environmental state of the environment in which the electronic device is located, etc.
The feature extraction layer may extract features, or process the extracted features, by methods such as a filtering method, a packaging method, or an integration method.
The filtering method filters the extracted features to remove redundant feature data. The packaging method screens the extracted features. The integration method combines a plurality of feature extraction methods to construct a more efficient and more accurate feature extraction method.
The scene modeling layer is used for building a model according to the features extracted by the feature extraction layer, and the obtained model can be used for representing the state of the electronic equipment, the state of a user, the environment state and the like. For example, the scenario modeling layer may construct a key value model, a pattern identification model, a graph model, an entity relation model, an object-oriented model, and the like according to the features extracted by the feature extraction layer.
The intelligent service layer is used for providing intelligent services for the user according to the model constructed by the scene modeling layer. For example, the intelligent service layer can provide basic application services for users, perform system intelligent optimization for electronic equipment, and provide personalized intelligent services for users.
In addition, the panoramic perception architecture can further comprise a plurality of algorithms, each of which can be used to analyze and process data, and together they form an algorithm library. For example, the algorithm library may include the Markov algorithm, latent Dirichlet allocation, the Bayesian classification algorithm, support vector machines, K-means clustering, the K-nearest neighbor algorithm, conditional random fields, residual networks, long short-term memory networks, convolutional neural networks, recurrent neural networks, and the like.
Based on the panoramic sensing architecture, when the electronic device detects a photographing operation, the information perception layer collects panoramic data and the feature extraction layer generates a panoramic feature vector from it; the feature extraction layer then obtains the user's historical photographing data, generates a historical feature vector from it, and generates a user feature matrix from the historical feature vector and the panoramic feature vector; the intelligent service layer obtains the image produced by the photographing operation, obtains image adjustment parameters from the image, the user feature matrix and a pre-trained classification model, and adjusts the original image according to those parameters. Because the panoramic data collected when the user takes a picture, the user's historical photographing habits and the shot picture itself are combined into the user feature matrix in real time, the features reflect both the user's current situation and the user's photographing habits and preferences; a fine-grained photo optimization scheme is thus provided for the user, personalized image adjustment parameters are generated, and a targeted photo processing scheme is realized.
The execution body of the photographing processing method may be the photographing processing apparatus provided in the embodiment of the present application, or an electronic device integrating the photographing processing apparatus; the photographing processing apparatus may be implemented in hardware or software. The electronic device may be a smart phone, a tablet computer, a palm computer, a notebook computer, or a desktop computer.
Referring to fig. 2, fig. 2 is a first flowchart of a photographing processing method according to an embodiment of the present disclosure. The specific flow of the photographing processing method provided by the embodiment of the application can be as follows:
step 101, when a photographing operation is detected, collecting panoramic data, and generating a panoramic feature vector according to the panoramic data.
In this embodiment of the application, the electronic device may monitor for the photographing operation in real time, for example by detecting whether the camera module of the electronic device has been started. For example, it may be detected that the user takes a picture through a camera APP (Application) of the electronic device, or that the user triggers a photographing operation through a third-party APP. Alternatively, user instructions may be monitored in real time, and when an image shooting request is received, the photographing operation is considered detected.
When the electronic equipment detects the photographing operation, it starts to collect panoramic data. The panoramic data includes, but is not limited to, terminal state data and sensor state data. The terminal state data includes the operation mode of the electronic device in each time interval, such as a game mode, an entertainment mode or a video mode; the operation mode can be determined from the type of the currently running application program, which can be read directly from the classification information of the application installation package. The terminal state data may further include the remaining power, display mode, network state, and screen-off/lock state of the electronic device.
The sensor state data includes data collected by each sensor on the electronic device, for example a distance sensor, magnetic field sensor, light sensor, acceleration sensor, fingerprint sensor, Hall sensor, position sensor, gyroscope, inertial sensor, attitude sensor, barometer, and heart rate sensor. The sensor state data may be acquired at the moment the photographing operation is detected, or over a period of time before it is detected. In some embodiments, the state data of particular sensors may be acquired in a targeted manner; for example, the data collected by the position sensor can determine the current position of the electronic device, and the light sensor can measure the light intensity of the environment in which the electronic device is currently located.
Referring to fig. 3, fig. 3 is a second flowchart of a photographing processing method according to an embodiment of the present disclosure. In some embodiments, step 101 of collecting panoramic data when the photographing operation is detected and generating a panoramic feature vector according to the panoramic data may include:
step 1011, when the photographing operation is detected, acquiring current terminal state data and sensor state data;
step 1012, generating a terminal state characteristic according to the terminal state data, and generating a terminal scene characteristic according to the sensor state data;
step 1013, fusing the terminal state characteristics and the terminal scene characteristics to generate the panoramic feature vector.
A terminal state feature ys1 is generated from the terminal state data. From the sensor state data, the state data of the magnetometer, accelerometer and gyroscope are acquired and processed with a Kalman filtering algorithm to obtain a four-dimensional terminal attitude feature ys2~ys5. A barometric feature ys6 is obtained from the data collected by the barometer. The WIFI connection state ys7 is determined through the network module. Positioning is performed on the data acquired by the position sensor to obtain the user's current position attribute (such as mall, home, company, park, etc.), generating feature ys8. Furthermore, the 10-axis information of the magnetometer, acceleration sensor, gyroscope and barometer can be combined, and new multi-dimensional data obtained with a filtering algorithm or a principal component analysis algorithm, generating the corresponding feature ys9. For non-numeric features, index numbers can be established to convert them into numeric representations; for example, for the operation-mode feature of the current terminal state, the index number represents the current mode, such as 1 for game mode, 2 for entertainment mode and 3 for video mode, so if the current operation mode is the game mode, the current system state is determined as ys1 = 1. After all the numerically represented features are obtained, the feature data are fused into one long vector, and the long vector is normalized to obtain the panoramic feature vector s1:
s1={ys1,ys2,…,ysn}
The more kinds of panoramic data are collected, the longer the generated panoramic feature vector s1 and the larger the value of n. The above ys1 to ys9 are merely examples; the application is not limited to these features, and features of more dimensions may be obtained according to actual needs.
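A minimal Python sketch of this feature-assembly step follows, using illustrative sensor readings and the index numbering from the example above (1 = game, 2 = entertainment, 3 = video); the dict fields and function names are assumptions for the sketch, not the patent's API:

```python
import numpy as np

MODE_INDEX = {"game": 1, "entertainment": 2, "video": 3}  # index numbers from the example above

def panoramic_feature_vector(terminal_state, sensor_state):
    """Assemble s1 = {ys1, ys2, ..., ysn} and normalize the long vector."""
    ys = [float(MODE_INDEX[terminal_state["mode"]])]      # ys1: terminal state (operation mode)
    ys += list(sensor_state["kalman_pose"])               # ys2~ys5: Kalman-fused attitude (4-D)
    ys.append(sensor_state["pressure"])                   # ys6: barometric feature
    ys.append(1.0 if sensor_state["wifi"] else 0.0)       # ys7: WIFI connection state
    ys.append(float(sensor_state["place_index"]))         # ys8: position attribute index
    ys.append(sensor_state["pca_feature"])                # ys9: PCA-fused 10-axis feature
    s1 = np.asarray(ys)
    return s1 / np.linalg.norm(s1)                        # normalized panoramic feature vector

s1 = panoramic_feature_vector(
    {"mode": "game"},
    {"kalman_pose": (0.1, 0.2, 0.3, 0.4), "pressure": 1013.2,
     "wifi": True, "place_index": 2, "pca_feature": 0.7},
)
```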
step 102, obtaining historical photographing data, and generating a historical feature vector according to the photographing data.
The electronic equipment records and regularly updates the user's historical photographing data, which includes historical photo editing schemes, information on calls to third-party photographing APPs, information on calls to the system's own photographing APP, shared-picture information, and the like. The called third-party photographing APP information helps to mine the retouching styles the user likes, because third-party photographing APPs on the market are highly specialized: for example, a budding camera mainly processes self-portrait avatars, while Prime mainly performs style migration on landscape or figure images. Generally, the installation package information of a third-party photographing APP has multiple layers, including the retouching-software type to which the APP belongs; from the called third-party photographing APP information, the retouching-software type of the called APP can be obtained, and this type reflects the user's retouching preference.
In addition, in some current electronic devices the system's own photographing APP also has a retouching function, so calls to it can be recorded as well. Different photo-sharing habits likewise reflect the photo style the user prefers: for example, sharing photos to Instagram (a social application for picture sharing) suggests a bias toward European-and-American-style photos, while sharing to QQ Space suggests a bias toward a youthful style, so the user's shared-photo information can also serve as historical photographing data.
Furthermore, when the user manually adjusts parameters such as brightness, contrast, sharpness, saturation and color temperature to change a photo's display effect, these parameters are recorded as a historical photo editing scheme. For example, if after taking a food photo the user usually raises contrast (+10) and saturation (+5) manually to make the colors more attractive without oversaturating them, parameters such as contrast (+10) and saturation (+5) are recorded as a historical photo editing scheme. The historical photo editing schemes, third-party photographing APP call information, system photographing APP call information, shared-picture information and so on are all historical data recorded and stored under a preset path when the corresponding user operations are detected; the longer the user has used the electronic device, the more historical data is recorded and the stronger its reference value.
After these data are acquired, a historical feature vector s2 is generated from the historical photographing data:
s2={ya1,ya2,…,yam}
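By way of illustration only, the historical feature vector could be assembled as below; the field names and index encodings are hypothetical stand-ins for the recorded history described above:

```python
import numpy as np

def historical_feature_vector(history):
    """Assemble s2 = {ya1, ya2, ..., yam} from recorded photographing history."""
    ya = [
        history.get("avg_contrast_delta", 0.0),       # e.g. +10, from past manual edits
        history.get("avg_saturation_delta", 0.0),     # e.g. +5
        float(history.get("retouch_app_type", 0)),    # retouching-software type of called APPs
        float(history.get("share_target_index", 0)),  # e.g. 1 = Instagram, 2 = QQ Space
    ]
    return np.asarray(ya, dtype=float)

s2 = historical_feature_vector({"avg_contrast_delta": 10.0, "avg_saturation_delta": 5.0,
                                "retouch_app_type": 2, "share_target_index": 1})
```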
step 103, generating a user feature matrix according to the historical feature vector and the panoramic feature vector.
The panoramic feature vector s1 generated in step 101 and the historical feature vector s2 generated in step 102 are fused to generate the user feature matrix. There are many ways to fuse the feature vectors. In the first mode, s1 and s2 are concatenated into one long vector of length m + n, that is, a matrix with m + n columns and 1 row. The user feature matrix is then as follows:
{ys1,ys2,…,ysn,ya1,ya2,…,yam}。
In a second mode, the step of generating the user feature matrix according to the historical feature vector and the panoramic feature vector comprises: performing matrix superposition on the historical feature vector and the panoramic feature vector to generate the user feature matrix. That is, s1 and s2 are stacked into a matrix with m (or n) columns and 2 rows. If n < m, the panoramic feature vector s1 is extended to length m by zero padding; if n > m, the historical feature vector s2 is extended to length n by zero padding. When n > m, the user feature matrix obtained by stacking is:
{ys1, ys2, …, ysn}
{ya1, ya2, …, yam, 0, …, 0}
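The two fusion modes can be sketched in a few lines of Python; this is a minimal sketch under the definitions above, with numpy standing in for whatever matrix library the device uses:

```python
import numpy as np

def fuse_concat(s1, s2):
    """Mode 1: one long vector of length n + m, i.e. an (n + m)-column, 1-row matrix."""
    return np.concatenate([s1, s2])[np.newaxis, :]

def fuse_stack(s1, s2):
    """Mode 2: zero-pad the shorter vector, then stack into a 2-row matrix."""
    width = max(len(s1), len(s2))
    pad = lambda v: np.pad(v, (0, width - len(v)))  # extend with zeros
    return np.vstack([pad(s1), pad(s2)])

user_matrix = fuse_stack(np.arange(6.0), np.arange(4.0))  # 2 x 6; second row zero-padded
```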
step 104, when a photographing instruction is received, acquiring an original image obtained by the photographing operation, and acquiring image adjustment parameters according to the original image, the user feature matrix and a pre-trained classification model.
After the photographing operation is completed and the original image is obtained, image adjustment parameters suited to the current original image are generated by combining the obtained user feature matrix, the original image and the pre-trained classification model. Referring to fig. 4, fig. 4 is a third schematic flow chart of the photographing processing method according to the embodiment of the present application. In some embodiments, step 104 of acquiring the original image obtained by the photographing operation when a photographing instruction is received, and acquiring image adjustment parameters according to the original image, the user feature matrix and the pre-trained classification model, comprises the following refined steps:
step 1041, compressing the image according to a preset length and a preset width, and generating a pixel matrix according to the compressed image;
step 1042, converting the user feature matrix into a first feature matrix, wherein the number of rows of the first feature matrix matches the preset width;
step 1043, merging the pixel matrix and the first feature matrix to generate a second feature matrix;
step 1044, acquiring image adjustment parameters according to the second feature matrix and the pre-trained classification model.
In an optional embodiment, step 1043 of merging the pixel matrix and the first feature matrix to generate the second feature matrix may comprise: converting the first feature matrix into a third feature matrix according to a Hilbert matrix; and merging the pixel matrix and the third feature matrix to generate the second feature matrix. The Hilbert matrix is a classic mathematical transformation matrix that is positive definite and highly ill-conditioned: when any element changes slightly, the determinant and the inverse of the whole matrix change greatly, and the degree of ill-conditioning grows with the order. Multiplying by the Hilbert matrix therefore helps expose the more distinctive features and regularities in the data, so converting the first feature matrix with the Hilbert matrix before merging helps the classification model find the features in the data.
Because images shot by the camera of an electronic device are generally high-definition, i.e., contain a large number of pixels, directly fusing the image information with the user feature matrix would mean a large data volume and slow computation. The image is therefore compressed first. The preset length and preset width may be set in advance, and the preset length is generally greater than or equal to the number of columns of the user feature matrix.
For example, assume the preset length is 200 pixels and the preset width is 100 pixels. If the size of the image obtained by the camera is 2736 x 3648, the image is compressed to 100 x 200 using image compression techniques. Assuming the user feature matrix has 50 columns and 2 rows, i.e., a size of 50 x 2, it is converted into the first feature matrix by tiling it twice in the transverse direction and fifty times in the longitudinal direction, giving a first feature matrix of size 100 x 100:
{U, U}
{U, U}
…
{U, U}
where U denotes the 50 x 2 user feature matrix, repeated fifty times longitudinally.
Next, the first feature matrix is converted into the third feature matrix according to the Hilbert matrix, whose elements are H(i, j) = 1/(i + j - 1).
The 100 x 200 pixel matrix of the original image and the 100 x 100 third feature matrix are merged to obtain a feature matrix of size 100 x 300 as the second feature matrix. This two-dimensional matrix contains the feature data of the original image together with the features extracted from the panoramic data, which represent the current scene state of the electronic device, and the features extracted from the user's historical photographing data, which represent the user's preferences in photo style.
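The compress-tile-convert-merge pipeline of steps 1041 to 1043 can be reproduced with the dimensions of this example. Treating "converted according to a Hilbert matrix" as a left-multiplication by the 100 x 100 Hilbert matrix is our assumption, since the description does not spell the operation out:

```python
import numpy as np
from scipy.linalg import hilbert  # hilbert(n) builds the n x n matrix H(i, j) = 1/(i + j - 1)

pixels = np.random.rand(100, 200)  # stand-in for the original image compressed to 100 x 200
user = np.random.rand(2, 50)       # user feature matrix: 2 rows, 50 columns

# Tile twice transversely and fifty times longitudinally -> 100 x 100 first feature matrix.
first = np.tile(user, (50, 2))

# Assumed conversion: multiply by the Hilbert matrix to obtain the third feature matrix.
third = hilbert(100) @ first

# Merge the pixel matrix and the third feature matrix -> 100 x 300 second feature matrix.
second = np.hstack([pixels, third])
assert second.shape == (100, 300)
```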
The two-dimensional feature matrix is then used as the input data of the pre-trained classification model to obtain the image adjustment parameters. In some embodiments, step 1044 of obtaining the image adjustment parameters according to the second feature matrix and the pre-trained classification model comprises: obtaining the image adjustment parameters according to the second feature matrix and a preset convolutional neural network model.
In the embodiment of the application, the classification model is built with a convolutional neural network, and a large amount of sample data is collected in advance to train the convolutional neural network model. For example, photos taken by test users in various specific situations are collected, the corresponding panoramic data is recorded, the test users' historical photographing data is obtained, and image adjustment parameters are attached to these data as labels by manual tagging. Feature matrices are then extracted from the sample data in the same manner as in steps 101 to 104 above, each feature matrix carrying its corresponding label. The labeled feature matrices of the sample data are input into a preset convolutional neural network model for training to obtain the weight parameters, which completes the training of the convolutional neural network model. The trained model is the classification model; its last layer may be a fully-connected layer with a plurality of nodes, each node corresponding to one image-adjustment-parameter scheme. Inputting the second feature matrix obtained in step 1044 into the trained convolutional neural network model then yields the corresponding image adjustment parameters.
In another alternative embodiment, the classification model may be an SVM (Support Vector Machine) classification model instead of a convolutional neural network model.
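As a rough sketch of the classification step, a small convolutional network with a fully-connected last layer whose nodes each correspond to one image-adjustment-parameter scheme might look as follows; PyTorch is used for illustration, and the layer sizes and number of schemes are assumptions, not from the patent:

```python
import torch
import torch.nn as nn

class AdjustmentClassifier(nn.Module):
    """Maps a 100 x 300 second feature matrix to one of num_schemes
    image-adjustment-parameter schemes via a fully-connected last layer."""
    def __init__(self, num_schemes=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # 100 x 300 input -> 25 x 75 spatial size after two 2x poolings.
        self.fc = nn.Linear(16 * 25 * 75, num_schemes)

    def forward(self, x):  # x: (batch, 1, 100, 300)
        return self.fc(self.features(x).flatten(1))

model = AdjustmentClassifier()
logits = model(torch.randn(1, 1, 100, 300))
scheme = logits.argmax(dim=1)  # index of the predicted adjustment-parameter scheme
```

Training would pair each sample's second feature matrix with its manually tagged adjustment-parameter label and minimize a standard cross-entropy loss, consistent with the training procedure described above.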
step 105, adjusting the original image according to the image adjustment parameters.
In an optional embodiment, the image adjustment parameters include filter parameters, and step 105 of adjusting the original image according to the image adjustment parameters may include: generating prompt information based on the filter parameters, and displaying the original image and the prompt information; and when a confirmation instruction triggered by the prompt information is received, adjusting the original image according to the filter parameters and displaying the adjusted image.
The filter parameters include a brightness parameter, a saturation parameter, a contrast parameter, a sharpness parameter, a color temperature parameter, and the like. It can be understood that after the electronic device starts taking a picture, the picture captured by the current camera is displayed in the viewfinder, and when the user issues a photographing instruction, the shot picture is generated and displayed. In the embodiment of the application, the process in which the electronic device collects panoramic data and historical photographing data to generate the user feature matrix runs synchronously with the process in which the camera captures the image. Therefore, after the original image obtained by the photographing operation is acquired, it is displayed on the display interface while prompt information is generated from the obtained filter parameters, letting the user choose whether to adjust the original image with the filter parameters generated by the system; if the user is detected to trigger a confirmation instruction based on the prompt information, the original image is adjusted according to the filter parameters and the adjusted image is displayed.
Or, in another alternative embodiment, the image adjustment parameters include filter parameters, and the step of adjusting the original image using the image adjustment parameters may include:
adjusting the original image according to the filter parameters; and displaying the original image and the adjusted image.
In this embodiment, the original image and the adjusted image are displayed on the display interface synchronously for the user to select from. Referring to fig. 5, fig. 5 is a schematic diagram illustrating an image display manner in the photographing processing method according to the embodiment of the present application.
Alternatively, in another alternative embodiment, the original image may be adjusted directly according to the image adjustment parameters and the adjusted image displayed. Because the process of acquiring the user feature matrix runs synchronously with the camera capturing and generating the original image, the image adjustment parameters are already available when photographing finishes; the original image can then be cached, and the image adjustment parameters used directly to render the image on the display interface. Meanwhile, a control for restoring the original image can be shown on the display interface; if the user is detected to trigger the corresponding instruction through this control, the original image is restored and displayed.
Optionally, in an embodiment, the image adjustment parameters further include 3A parameters, namely AF (Auto Focus), AE (Auto Exposure) and AWB (Auto White Balance) parameters. After the 3A parameters are obtained, they are not used to adjust the display effect of the image directly; instead, they are applied to the camera pipeline (image channel) parameters at the bottom layer of the electronic device system, which improves photographing and imaging quality. The user thus obtains better imaging quality the next time the camera is used.
As can be seen from the above, in the photographing processing method provided by the embodiment of the present application, when a photographing operation is detected, panoramic data is collected and a panoramic feature vector is generated from it; the user's historical photographing data is then obtained and a historical feature vector is generated from it; a user feature matrix is generated from the historical feature vector and the panoramic feature vector; the image produced by the photographing operation is obtained, image adjustment parameters are obtained from the image, the user feature matrix and a pre-trained classification model, and the original image is adjusted according to those parameters. Because the panoramic data collected when the user takes a picture, the user's historical photographing habits and the shot picture itself are combined into the user feature matrix in real time, the features reflect both the user's current situation and the user's photographing habits and preferences; a fine-grained photo optimization scheme is thus provided for the user, personalized image adjustment parameters are generated, and a targeted photo processing scheme is realized.
In one embodiment, a photographing processing apparatus is also provided. Referring to fig. 6, fig. 6 is a schematic structural diagram of a photographing processing apparatus 400 according to an embodiment of the present disclosure. The photographing processing apparatus 400 is applied to an electronic device, and the photographing processing apparatus 400 includes a first feature extraction module 401, a second feature extraction module 402, a feature fusion module 403, a parameter acquisition module 404, and an image adjustment module 405, as follows:
The first feature extraction module 401 is configured to, when a photographing operation is detected, acquire panoramic data, and generate a panoramic feature vector according to the panoramic data.
In this embodiment of the application, the electronic device may monitor for the photographing operation in real time, for example by detecting whether the camera module of the electronic device has been started. For example, it may be detected that the user takes a picture through a camera APP (Application) of the electronic device, or that the user triggers a photographing operation through a third-party APP. Alternatively, user instructions may be monitored in real time; when an image shooting request is received, the photographing operation is considered detected, and the first feature extraction module 401 starts to collect panoramic data.
When the electronic equipment detects the photographing operation, it starts to collect panoramic data. The panoramic data includes, but is not limited to, terminal state data and sensor state data. The terminal state data includes the operation mode of the electronic device in each time interval, such as a game mode, an entertainment mode or a video mode; the operation mode can be determined from the type of the currently running application program, which can be read directly from the classification information of the application installation package. The terminal state data may further include the remaining power, display mode, network state, and screen-off/lock state of the electronic device.
The sensor state data includes data collected by each sensor on the electronic device, for example a distance sensor, magnetic field sensor, light sensor, acceleration sensor, fingerprint sensor, Hall sensor, position sensor, gyroscope, inertial sensor, attitude sensor, barometer, and heart rate sensor. The sensor state data may be acquired at the moment a user instruction is received, or over a period of time before it is received. In some embodiments, the state data of particular sensors may be acquired in a targeted manner; for example, the data collected by the position sensor can determine the current position of the electronic device, and the light sensor can measure the light intensity of the environment in which the electronic device is currently located.
In some embodiments, the first feature extraction module 401 is further configured to: acquire current terminal state data and sensor state data; generate terminal state features according to the terminal state data, and generate terminal scene features according to the sensor state data; and fuse the terminal state features and the terminal scene features to generate the panoramic feature vector.
A terminal state feature ys1 is generated from the terminal state data. From the sensor state data, the state data of the magnetometer, accelerometer and gyroscope are acquired and processed with a Kalman filtering algorithm to obtain a four-dimensional terminal attitude feature ys2~ys5. A barometric feature ys6 is obtained from the data collected by the barometer. The WIFI connection state ys7 is determined through the network module. Positioning is performed on the data acquired by the position sensor to obtain the user's current position attribute (such as mall, home, company, park, etc.), generating feature ys8. Furthermore, the 10-axis information of the magnetometer, acceleration sensor, gyroscope and barometer can be combined, and new multi-dimensional data obtained with a filtering algorithm or a principal component analysis algorithm, generating the corresponding feature ys9. For non-numeric features, index numbers can be established to convert them into numeric representations; for example, for the operation-mode feature of the current terminal state, the index number represents the current mode, such as 1 for game mode, 2 for entertainment mode and 3 for video mode, so if the current operation mode is the game mode, the current system state is determined as ys1 = 1. After all the numerically represented features are obtained, the feature data are fused into one long vector, and the long vector is normalized to obtain the panoramic feature vector s1:
s1={ys1,ys2,…,ysn}
The more kinds of panoramic data the first feature extraction module 401 collects, the longer the generated panoramic feature vector s1 and the larger the value of n. The above ys1 to ys9 are merely examples; the application is not limited to these features, and features of more dimensions may be obtained according to actual needs.
The second feature extraction module 402 is configured to obtain historical photographing data, and generate a historical feature vector according to the photographing data.
The electronic equipment records and regularly updates the user's historical photographing data, which includes historical photo editing schemes, information on calls to third-party photographing APPs, information on calls to the system's own photographing APP, shared-picture information, and the like. The called third-party photographing APP information helps to mine the retouching styles the user likes, because third-party photographing APPs on the market are highly specialized: for example, a budding camera mainly processes self-portrait avatars, while Prime mainly performs style migration on landscape or figure images. Generally, the installation package information of a third-party photographing APP has multiple layers, including the retouching-software type to which the APP belongs; from the called third-party photographing APP information, the retouching-software type of the called APP can be obtained, and this type reflects the user's retouching preference.
In addition, in some current electronic devices the system's own photographing APP also has a retouching function, so calls to it can be recorded as well. Different photo-sharing habits likewise reflect the photo style the user prefers: for example, sharing photos to Instagram (a social application for picture sharing) suggests a bias toward European-and-American-style photos, while sharing to QQ Space suggests a bias toward a youthful style, so the user's shared-photo information can also serve as historical photographing data.
Furthermore, when the user manually adjusts parameters such as brightness, contrast, sharpness, saturation and color temperature to change a photo's display effect, these parameters are recorded as a historical photo editing scheme. For example, if after taking a food photo the user usually raises contrast (+10) and saturation (+5) manually to make the colors more attractive without oversaturating them, parameters such as contrast (+10) and saturation (+5) are recorded as a historical photo editing scheme. The historical photo editing schemes, third-party photographing APP call information, system photographing APP call information, shared-picture information and so on are all historical data recorded and stored under a preset path when the corresponding user operations are detected; the longer the user has used the electronic device, the more historical data is recorded and the stronger its reference value.
After these data are acquired, the second feature extraction module 402 generates a historical feature vector s2 from the historical photographing data:
s2={ya1,ya2,…,yam}
A feature fusion module 403, configured to generate a user feature matrix according to the historical feature vector and the panoramic feature vector.
Based on the panoramic feature vector s1 generated by the first feature extraction module 401 and the historical feature vector s2 generated by the second feature extraction module 402, the feature fusion module 403 fuses the two feature vectors to generate the user feature matrix. There are many ways to fuse the feature vectors. In the first mode, s1 and s2 are concatenated into one long vector of length m + n, that is, a matrix with m + n columns and 1 row. The user feature matrix is then as follows:
{ys1,ys2,…,ysn,ya1,ya2,…,yam}。
In a second mode, the feature fusion module 403 is further configured to: perform matrix superposition on the historical feature vector and the panoramic feature vector to generate the user feature matrix. That is, s1 and s2 are stacked into a matrix with m (or n) columns and 2 rows. If n < m, the panoramic feature vector s1 is extended to length m by zero padding; if n > m, the historical feature vector s2 is extended to length n by zero padding. When n > m, the user feature matrix obtained by stacking is:
{ys1, ys2, …, ysn}
{ya1, ya2, …, yam, 0, …, 0}
A parameter acquisition module 404, configured to, when a photographing instruction is received, obtain the original image produced by the photographing operation, and obtain image adjustment parameters according to the original image, the user feature matrix, and a pre-trained classification model.
After the photographing operation is completed and the original image is obtained, image adjustment parameters suited to the current original image are generated by combining the obtained user feature matrix, the original image and the pre-trained classification model. In some embodiments, the parameter acquisition module 404 is further configured to: compress the image according to a preset length and a preset width, and generate a pixel matrix from the compressed image; convert the user feature matrix into a first feature matrix whose number of rows matches the preset width; merge the pixel matrix and the first feature matrix to generate a second feature matrix; and obtain image adjustment parameters according to the second feature matrix and the pre-trained classification model.
In an optional implementation, the parameter acquisition module 404 is further configured to: convert the first feature matrix into a third feature matrix according to a Hilbert matrix; and merge the pixel matrix and the third feature matrix to generate the second feature matrix. The Hilbert matrix is a classic mathematical transformation matrix that is positive definite and highly ill-conditioned: when any element changes slightly, the determinant and the inverse of the whole matrix change greatly, and the degree of ill-conditioning grows with the order. Multiplying by the Hilbert matrix therefore helps expose the more distinctive features and regularities in the data, so converting the first feature matrix with the Hilbert matrix before merging helps the classification model find the features in the data.
Because images shot by the camera of an electronic device are generally high-definition, i.e., contain a large number of pixels, directly fusing the image information with the user feature matrix would mean a large data volume and slow computation. The image is therefore compressed first. The preset length and preset width may be set in advance, and the preset length is generally greater than or equal to the number of columns of the user feature matrix.
For example, assume the preset length is 200 pixels and the preset width is 100 pixels. If the size of the image obtained by the camera is 2736 x 3648, the image is compressed to 100 x 200 using image compression techniques. Assuming the user feature matrix has 50 columns and 2 rows, i.e., a size of 50 x 2, it is converted into the first feature matrix by tiling it twice in the transverse direction and fifty times in the longitudinal direction, giving a first feature matrix of size 100 x 100:
{U, U}
{U, U}
…
{U, U}
where U denotes the 50 x 2 user feature matrix, repeated fifty times longitudinally.
Next, the first feature matrix is converted into the third feature matrix according to the Hilbert matrix, whose elements are H(i, j) = 1/(i + j - 1).
The 100 x 200 pixel matrix of the original image and the 100 x 100 third feature matrix are merged to obtain a feature matrix of size 100 x 300 as the second feature matrix. This two-dimensional matrix contains the feature data of the original image together with the features extracted from the panoramic data, which represent the current scene state of the electronic device, and the features extracted from the user's historical photographing data, which represent the user's preferences in photo style.
The two-dimensional feature matrix is then used as the input data of the pre-trained classification model to obtain the image adjustment parameters. In some embodiments, the parameter acquisition module 404 is further configured to: obtain the image adjustment parameters according to the second feature matrix and a preset convolutional neural network model.
In the embodiment of the application, the classification model is built on a convolutional neural network, and a large amount of sample data is collected in advance to train the convolutional neural network model. For example, photos taken by test users in various specific situations are collected, the corresponding panoramic data are recorded, the historical photographing data of the test users are obtained, and image adjustment parameters are attached to these data as labels by manual tagging. Feature matrices are then extracted from the sample data, each with its corresponding label. The labelled feature matrices are input into a preset convolutional neural network model for training to obtain the weight parameters, which completes the training; the trained model is the classification model. Its last layer may be a fully connected layer with multiple nodes, each node corresponding to one image-adjustment-parameter scheme. Inputting the second feature matrix into the trained convolutional neural network model then yields the corresponding image adjustment parameters.
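A minimal sketch of such a classification model, assuming PyTorch; the layer sizes and the number of adjustment-parameter schemes are illustrative assumptions, not taken from the embodiment:

```python
# Sketch of a CNN classifier over the 100 x 300 second feature matrix;
# NUM_SCHEMES (the node count of the final fully connected layer) is assumed.
import torch
import torch.nn as nn

NUM_SCHEMES = 16  # one output node per image-adjustment-parameter scheme

class AdjustmentClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # -> 16 x 50 x 150
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # -> 32 x 25 x 75
        )
        # Last layer is fully connected, one node per adjustment scheme.
        self.classifier = nn.Linear(32 * 25 * 75, NUM_SCHEMES)

    def forward(self, x):      # x: (batch, 1, 100, 300)
        return self.classifier(self.features(x).flatten(1))

model = AdjustmentClassifier()
logits = model(torch.randn(1, 1, 100, 300))
scheme_index = logits.argmax(dim=1).item()  # index of the predicted scheme
```

Training would follow the usual supervised recipe over the manually labelled sample matrices with a cross-entropy loss; the predicted node index would then be mapped back to a stored image-adjustment-parameter scheme.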
In another alternative embodiment, the classification model may adopt an SVM (Support Vector Machine) classification model instead of the convolutional neural network model.
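A correspondingly minimal sketch with an SVM, assuming scikit-learn; each second feature matrix is flattened into one sample vector, and the random arrays below merely stand in for the labelled training set:

```python
# SVM alternative to the CNN classifier; the training data here is
# synthetic and only illustrates the expected shapes.
import numpy as np
from sklearn.svm import SVC

sample_matrices = np.random.rand(40, 100, 300)     # 40 labelled sample matrices
sample_labels = np.random.randint(0, 16, size=40)  # assumed scheme indices

svm = SVC(kernel="rbf")
svm.fit(sample_matrices.reshape(40, -1), sample_labels)

second_feature_matrix = np.random.rand(100, 300)
scheme_index = svm.predict(second_feature_matrix.reshape(1, -1))[0]
```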
An image adjustment module 405, configured to adjust the original image according to the image adjustment parameters.
In an alternative embodiment, the image adjustment parameters include filtering parameters, and the image adjustment module 405 is further configured to: generate prompt information based on the filtering parameters, and display the original image and the prompt information; and when a confirmation instruction triggered based on the prompt information is received, adjust the original image according to the filtering parameters and display the adjusted image.
The filtering parameters include a brightness parameter, a saturation parameter, a contrast parameter, a sharpness parameter, a color temperature parameter, and the like. It can be understood that after the electronic device starts the camera for photographing, the picture captured by the current camera is displayed in the viewfinder, and when the user triggers a photographing instruction, the shot picture is generated and displayed. In the embodiment of the application, the process in which the electronic device collects the panoramic data and the historical photographing data to generate the user feature matrix therefore runs synchronously with the process in which the camera captures the image. After the original image obtained by the photographing operation is acquired, the original image is displayed on the display interface; meanwhile, the image adjustment module 405 generates prompt information based on the acquired filtering parameters, and the user chooses whether to adjust the original image with the filtering parameters generated by the system. If the user is detected to trigger a confirmation instruction based on the prompt information, the original image is adjusted according to the filtering parameters and the adjusted image is displayed.
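A minimal sketch of applying confirmed filtering parameters, assuming Pillow; the parameter dictionary, factor values, and function name are illustrative, and a color-temperature adjustment (for which Pillow offers no single enhancer) is omitted:

```python
# Applies the system-generated filtering parameters after the user confirms.
from PIL import Image, ImageEnhance

def apply_filtering_parameters(original_path, params):
    img = Image.open(original_path)
    img = ImageEnhance.Brightness(img).enhance(params["brightness"])
    img = ImageEnhance.Color(img).enhance(params["saturation"])  # saturation
    img = ImageEnhance.Contrast(img).enhance(params["contrast"])
    img = ImageEnhance.Sharpness(img).enhance(params["sharpness"])
    return img

adjusted = apply_filtering_parameters("original.jpg", {
    "brightness": 1.10, "saturation": 1.20, "contrast": 1.05, "sharpness": 1.30,
})
adjusted.save("adjusted.jpg")
```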
Alternatively, in another embodiment, the image adjustment parameters include filtering parameters, and the image adjustment module 405 is further configured to: adjust the original image according to the filtering parameters; and display the original image and the adjusted image.
In this embodiment, the original image and the adjusted image are displayed on the display interface synchronously for the user to select from.
Alternatively, in another embodiment, the image adjustment module 405 may adjust the original image directly according to the image adjustment parameters and display the adjusted image. Because the process of acquiring the user feature matrix runs synchronously with the camera capturing and generating the original image, the image adjustment parameters are already available when photographing finishes; at this point the original image can be cached, and the image adjustment parameters can be used directly to render and display it on the display interface. Meanwhile, a control for restoring the original image may be displayed on the display interface; if the user is detected to trigger the corresponding instruction based on this control, the original image is restored and displayed.
Optionally, in an embodiment, the image adjustment parameters further include 3A parameters, namely AF (Auto Focus), AE (Auto Exposure), and AWB (Auto White Balance) parameters. After the 3A parameters are obtained, they are not used to adjust the display effect of the current image directly; instead, the photographing and imaging quality is improved by setting the parameters of the camera pipeline (image channel) at the bottom layer of the electronic device system. The imaging quality is thus improved the next time the user uses the camera.
As can be seen from the above, in the photographing processing apparatus provided in this embodiment of the application, when the first feature extraction module 401 detects a photographing operation, it acquires panoramic data and generates a panoramic feature vector from the panoramic data; the second feature extraction module 402 then acquires the user's historical photographing data and generates a historical feature vector from it; the feature fusion module 403 generates a user feature matrix from the historical feature vector and the panoramic feature vector; the parameter acquisition module 404 acquires the image obtained by the photographing operation and obtains image adjustment parameters according to the image, the user feature matrix, and the pre-trained classification model; and the image adjustment module 405 adjusts the original image according to the image adjustment parameters. This scheme combines the panoramic data collected when the user takes a picture, the user's historical photographing habits, and the shot picture to obtain the user feature matrix in real time. The resulting features reflect not only the user's current situation but also the user's photographing habits and preferences, providing a fine-grained photo optimization scheme, generating personalized image adjustment parameters, and realizing a targeted photo processing scheme for the user.
The embodiment of the application also provides an electronic device. The electronic device can be a smartphone, a tablet computer, and the like. As shown in fig. 7, fig. 7 is a schematic view of a first structure of an electronic device according to an embodiment of the present application. The electronic device 300 comprises a processor 301 and a memory 302. The processor 301 is electrically connected to the memory 302.
The processor 301 is the control center of the electronic device 300. It connects the various parts of the entire electronic device using various interfaces and lines, and performs the various functions of the electronic device and processes data by running or invoking the computer program stored in the memory 302 and invoking the data stored in the memory 302, thereby monitoring the electronic device as a whole.
In this embodiment, the processor 301 in the electronic device 300 loads instructions corresponding to one or more processes of the computer program into the memory 302, and the processor 301 runs the computer program stored in the memory 302 to implement the following functions:
when the photographing operation is detected, collecting panoramic data, and generating a panoramic feature vector according to the panoramic data;
acquiring historical photographing data, and generating a historical feature vector according to the photographing data;
generating a user feature matrix according to the historical feature vector and the panoramic feature vector;
when a photographing instruction is received, acquiring an original image obtained by the photographing operation, and acquiring image adjustment parameters according to the original image, the user feature matrix and a pre-trained classification model;
and adjusting the original image according to the image adjusting parameter.
In some embodiments, the panoramic data includes terminal state data and sensor state data; when collecting panoramic data and generating a panoramic feature vector from the panoramic data, the processor 301 performs the following steps:
acquiring current terminal state data and sensor state data;
generating terminal state characteristics according to the terminal state data, and generating terminal scene characteristics according to the sensor state data;
and fusing the terminal state characteristic and the terminal scene characteristic to generate the panoramic feature vector.
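A minimal sketch of the fusion performed in these steps, assuming the state data and sensor readings are already encoded as numeric features and that the fusion is a concatenation (the embodiment does not fix a fusion operator); the feature choices are illustrative:

```python
# Illustrative panoramic feature vector: terminal state features fused
# with terminal scene features derived from sensor readings.
import numpy as np

terminal_state_features = np.array([1.0, 0.0, 0.85])  # e.g. camera app in foreground,
                                                      # network off, battery level
terminal_scene_features = np.array([0.6, 0.1, 0.0])   # e.g. ambient light, acceleration,
                                                      # rotation rate

panoramic_feature_vector = np.concatenate(
    [terminal_state_features, terminal_scene_features]
)
```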
In some embodiments, when generating the user feature matrix from the historical feature vector and the panoramic feature vector, the processor 301 performs the following steps:
and performing matrix superposition processing on the historical characteristic vector and the panoramic characteristic vector to generate a user characteristic matrix.
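A minimal sketch of this superposition, assuming NumPy; the transverse and longitudinal merges correspond to the two variants recited later in claim 2, and the zero-padding in the longitudinal case is an assumption:

```python
# Two ways of superposing the historical and panoramic feature vectors
# into a user feature matrix.
import numpy as np

def merge_transverse(hist, pano):
    # 1 row; number of columns = len(hist) + len(pano)
    return np.concatenate([hist, pano])[np.newaxis, :]

def merge_longitudinal(hist, pano):
    # 2 rows; number of columns = max(len(hist), len(pano));
    # zero-padding the shorter vector is an assumption.
    n = max(len(hist), len(pano))
    out = np.zeros((2, n))
    out[0, :len(hist)] = hist
    out[1, :len(pano)] = pano
    return out
```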
In some embodiments, when obtaining the image adjustment parameter according to the image, the user feature matrix, and the pre-trained classification model, the processor 301 performs the following steps:
compressing the image according to a preset length and a preset width, and generating a pixel matrix according to the compressed image;
converting the user characteristic matrix into a first characteristic matrix, wherein the row number of the first characteristic matrix is matched with the preset width;
converting the first feature matrix into a third feature matrix according to a Hilbert matrix;
combining the pixel matrix and the third feature matrix to generate a second feature matrix;
and acquiring image adjustment parameters according to the second feature matrix and a pre-trained classification model.
In some embodiments, when obtaining the image adjustment parameter according to the second feature matrix and the pre-trained classification model, the processor 301 performs the following steps:
and acquiring image adjustment parameters according to the second characteristic matrix and a preset convolutional neural network model.
In some embodiments, the image adjustment parameters include filtering parameters, and when the original image is adjusted using the image adjustment parameters, the processor 301 performs the following steps:
generating prompt information based on the filtering parameters, and displaying the original image and the prompt information;
and when a confirmation instruction triggered based on the prompt information is received, adjusting the original image according to the filtering parameters, and displaying the adjusted image.
In some embodiments, after the step of adjusting the original image according to the filtering parameters when a confirmation instruction triggered based on the prompt information is received, the processor 301 performs the following steps:
and updating the historical photographing data according to the filtering parameters.
In some embodiments, the image adjustment parameters further include a 3A parameter, and after the step of adjusting the original image using the image adjustment parameters, the processor 301 performs the following steps:
and resetting image channel parameters according to the 3A parameters.
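A hypothetical sketch of this step; the embodiment does not name a concrete camera API, so the settings store and key names below are invented for illustration only:

```python
# Persists the 3A parameters into an (assumed) camera-pipeline settings
# store so they take effect the next time the camera is used.
camera_pipeline_settings = {}

def reset_image_channel(params_3a):
    camera_pipeline_settings.update({
        "auto_focus": params_3a["af"],
        "auto_exposure": params_3a["ae"],
        "auto_white_balance": params_3a["awb"],
    })
```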
The memory 302 may be used to store computer programs and data. The memory 302 stores a computer program containing instructions executable by the processor. The computer program may constitute various functional modules. The processor 301 executes various functional applications and data processing by invoking the computer program stored in the memory 302.
In some embodiments, as shown in fig. 8, fig. 8 is a second schematic structural diagram of an electronic device provided in the embodiments of the present application. The electronic device 300 further includes: a radio frequency circuit 303, a display screen 304, a control circuit 305, an input unit 306, an audio circuit 307, a sensor 308, and a power supply 309. The processor 301 is electrically connected to the radio frequency circuit 303, the display screen 304, the control circuit 305, the input unit 306, the audio circuit 307, the sensor 308, and the power supply 309, respectively.
The radio frequency circuit 303 is used for transceiving radio frequency signals to communicate with a network device or other electronic devices through wireless communication.
The display screen 304 may be used to display information entered by or provided to the user as well as various graphical user interfaces of the electronic device, which may be composed of images, text, icons, video, and any combination thereof.
The control circuit 305 is electrically connected to the display screen 304 and is used for controlling the display screen 304 to display information.
The input unit 306 may be used to receive input numbers, character information, or user characteristic information (e.g., a fingerprint), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. The input unit 306 may include a fingerprint recognition module.
The audio circuit 307 may provide an audio interface between the user and the electronic device through a speaker and a microphone. The audio circuit 307 includes a microphone, which is electrically connected to the processor 301 and is used for receiving voice information input by the user.
The sensor 308 is used to collect external environmental information. The sensor 308 may include one or more of an ambient light sensor, an acceleration sensor, a gyroscope, and the like.
The power supply 309 is used to power the various components of the electronic device 300. In some embodiments, the power supply 309 may be logically coupled to the processor 301 through a power management system, so that charging, discharging, and power consumption management are handled through the power management system.
Although not shown in fig. 8, the electronic device 300 may further include a camera, a Bluetooth module, and the like, which are not described in detail herein.
As can be seen from the above, an embodiment of the present application provides an electronic device that, when it detects a photographing operation, collects panoramic data and generates a panoramic feature vector according to the panoramic data; acquires historical photographing data and generates a historical feature vector according to the photographing data; generates a user feature matrix according to the historical feature vector and the panoramic feature vector; when a photographing instruction is received, acquires the original image obtained by the photographing operation and acquires image adjustment parameters according to the original image, the user feature matrix, and the pre-trained classification model; and adjusts the original image according to the image adjustment parameters.
An embodiment of the present application further provides a storage medium, where a computer program is stored in the storage medium, and when the computer program runs on a computer, the computer executes the photographing processing method according to any one of the above embodiments.
It should be noted that all or part of the steps in the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware. The computer program may be stored in a computer-readable storage medium, which may include, but is not limited to: Read-Only Memory (ROM), Random Access Memory (RAM), magnetic disks, optical disks, and the like.
The photographing processing method, the photographing processing apparatus, the storage medium, and the electronic device provided in the embodiments of the present application are described in detail above. The principle and the implementation of the present application are explained herein by applying specific examples, and the above description of the embodiments is only used to help understand the method and the core idea of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A photographing processing method, comprising:
when a photographing operation is detected, acquiring current terminal state data and sensor state data; generating terminal state features according to the terminal state data, and generating terminal scene features according to the sensor state data; and fusing the terminal state features and the terminal scene features to obtain a panoramic feature vector;
acquiring historical photographing data, and generating a historical feature vector according to the photographing data, wherein the historical photographing data comprises historical photo-editing schemes, information on invoking third-party photographing apps, information on invoking the system's built-in photographing app, and shared picture information;
combining the historical feature vector and the panoramic feature vector to obtain a user feature matrix;
when a photographing instruction is received, acquiring the original image obtained by the photographing operation, and acquiring image adjustment parameters according to the original image, the user feature matrix, and a pre-trained classification model, comprising: compressing the image according to a preset length and a preset width, generating a pixel matrix according to the compressed image, superimposing the user feature matrix on itself in the transverse direction to obtain a first feature matrix, the number of rows of the first feature matrix matching the preset width, combining the pixel matrix and the first feature matrix to generate a second feature matrix, and acquiring the image adjustment parameters according to the second feature matrix and the pre-trained classification model;
and adjusting the original image according to the image adjustment parameters.
2. The photographing processing method of claim 1, wherein the step of combining the historical feature vector and the panoramic feature vector to obtain a user feature matrix comprises:
combining the historical feature vector and the panoramic feature vector in the transverse direction to obtain a user feature matrix, wherein the number of columns of the user feature matrix equals the sum of the length of the historical feature vector and the length of the panoramic feature vector, and the number of rows of the user feature matrix equals 1;
or combining the historical feature vector and the panoramic feature vector in the longitudinal direction to obtain a user feature matrix, wherein the number of columns of the user feature matrix equals the maximum of the lengths of the historical feature vector and the panoramic feature vector, and the number of rows of the user feature matrix equals 2.
3. The photographing processing method of claim 1, wherein the step of combining the pixel matrix and the first feature matrix to generate the second feature matrix comprises:
multiplying a Hilbert matrix by the first feature matrix to obtain a third feature matrix;
and splicing the pixel matrix and the third feature matrix in the longitudinal direction to obtain the second feature matrix.
4. The photographing processing method of claim 1, wherein the step of inputting the second feature matrix into the pre-trained classification model and outputting the image adjustment parameters comprises:
inputting the second feature matrix into a preset convolutional neural network model and outputting the image adjustment parameters.
5. The photographing processing method of any one of claims 1 to 4, wherein the image adjustment parameters comprise filtering parameters, and the step of adjusting the original image using the image adjustment parameters comprises:
generating prompt information based on the filtering parameters, and displaying the original image and the prompt information;
and when a confirmation instruction triggered based on the prompt information is received, adjusting the original image according to the filtering parameters and displaying the adjusted image.
6. The photographing processing method of claim 5, wherein, after the step of adjusting the original image according to the filtering parameters when a confirmation instruction triggered based on the prompt information is received, the method further comprises:
updating the historical photographing data according to the filtering parameters.
7. The photographing processing method of claim 5, wherein the image adjustment parameters further comprise 3A parameters, and after the step of adjusting the original image using the image adjustment parameters, the method further comprises:
resetting image channel parameters according to the 3A parameters.
8. A photographing processing apparatus, comprising:
a first feature extraction module, configured to, when a photographing operation is detected, acquire current terminal state data and sensor state data, generate terminal state features according to the terminal state data, generate terminal scene features according to the sensor state data, and fuse the terminal state features and the terminal scene features to obtain a panoramic feature vector;
a second feature extraction module, configured to acquire historical photographing data and generate a historical feature vector according to the photographing data, wherein the historical photographing data comprises historical photo-editing schemes, information on invoking third-party photographing apps, information on invoking the system's built-in photographing app, and shared picture information;
a feature fusion module, configured to combine the historical feature vector and the panoramic feature vector to obtain a user feature matrix;
a parameter acquisition module, configured to, when a photographing instruction is received, acquire the original image obtained by the photographing operation and acquire image adjustment parameters according to the original image, the user feature matrix, and a pre-trained classification model, including: compressing the image according to a preset length and a preset width, generating a pixel matrix according to the compressed image, superimposing the user feature matrix on itself in the transverse direction to obtain a first feature matrix whose number of rows matches the preset width, combining the pixel matrix and the first feature matrix to generate a second feature matrix, and acquiring the image adjustment parameters according to the second feature matrix and the pre-trained classification model;
and an image adjustment module, configured to adjust the original image according to the image adjustment parameters.
9. A storage medium having a computer program stored thereon, wherein, when the computer program runs on a computer, the computer is caused to execute the photographing processing method of any one of claims 1 to 7.
10. An electronic device, comprising a processor and a memory, the memory storing a computer program, wherein the processor is configured to execute the photographing processing method of any one of claims 1 to 7 by invoking the computer program.
CN201910282456.4A · 2019-04-09 · Photographing processing method and device, storage medium and electronic equipment · Expired - Fee Related · CN111800569B (en)

Priority Applications (1)

Application Number: CN201910282456.4A · CN111800569B (en)
Priority Date: 2019-04-09 · Filing Date: 2019-04-09
Title: Photographing processing method and device, storage medium and electronic equipment


Publications (2)

Publication Number: CN111800569A (en) · Publication Date: 2020-10-20
Publication Number: CN111800569B (en, granted) · Publication Date: 2022-02-22

Family

ID=72805685

Family Applications (1)

Application Number: CN201910282456.4A · Expired - Fee Related · CN111800569B (en)

Country Status (1)

Country: CN (1) · Link: CN111800569B (en)





Legal Events

Code: PB01 · Title: Publication
Code: SE01 · Title: Entry into force of request for substantive examination
Code: GR01 · Title: Patent grant
Code: CF01 · Title: Termination of patent right due to non-payment of annual fee (granted publication date: 2022-02-22)
