CN110808069A - Evaluation system and method for singing songs - Google Patents

Evaluation system and method for singing songs

Info

Publication number
CN110808069A
CN110808069A (application CN201911096229.9A)
Authority
CN
China
Prior art keywords
unit
singing
sound
characteristic parameter
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201911096229.9A
Other languages
Chinese (zh)
Inventor
Wang Bin (王斌)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Ruimei Jinxin Health Management Co Ltd
Original Assignee
Shanghai Ruimei Jinxin Health Management Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Ruimei Jinxin Health Management Co Ltd
Priority to CN201911096229.9A
Publication of CN110808069A
Legal status: Withdrawn

Abstract

The invention relates to the technical field of song singing evaluation, in particular to a system and a method for evaluating a singing song. The system comprises: a processing module for processing the type of an original song to extract a first characteristic parameter of the original song; a training module, connected with the processing module, for respectively training on the sound speed and the sound intensity change of the segmented audio of the original song so as to respectively extract second characteristic parameters of the sound speed and the sound intensity change of the segmented audio; an algorithm module, connected with the processing module and the training module, for calculating a final score according to the first characteristic parameter, the second characteristic parameter and a third characteristic parameter of the corresponding time period of the singing audio; and an evaluation module, connected with the algorithm module, for providing a corresponding evaluation according to the final score. The beneficial effect of the technical scheme of the invention is that, after the final score is calculated, songs sung by students in practice or in examinations are evaluated and customized suggestions are provided accordingly, helping the students improve their singing level.

Description

Evaluation system and method for singing songs
Technical Field
The invention relates to the technical field of song singing evaluation, in particular to a system and a method for evaluating singing songs.
Background
Lyrics are a text with a special mode of expression: combined with their content, different rhythms and melodies give them different emotions and moods. When lyrics are set to music, a matching melody is usually composed from the content of the lyrics to form a song; that is, the lyrics are written first, and the corresponding melody is then composed for them. In the finished song, whether the lyrics are expressed well and whether they match the melody are particularly important musically.
In the prior art, however, sung songs are scored against the same unified standard for all songs. Such a single standard cannot handle the complexity that each song has its own singing technique and emotional requirements, and cannot accommodate artistic diversity. Moreover, only a score is produced; no detailed suggestions are given after evaluation along dimensions such as the pitch, sound speed and intensity of each beat of each song, so primary- and middle-school students cannot identify their errors and weak points, and cannot improve in a targeted or directed way. The above problems are therefore difficulties to be solved by those skilled in the art.
Disclosure of Invention
In view of the above problems in the prior art, a system and a method for evaluating a singing song are provided to facilitate evaluation of the singing song.
The specific technical scheme is as follows:
the invention provides an evaluation system for singing songs, characterized in that it comprises:
the processing module is used for processing the type of each original song to extract a first characteristic parameter of the original song;
the training module is connected with the processing module and is used for respectively training the sound speed and the sound intensity change of each segmented audio frequency of each original song so as to respectively extract second characteristic parameters of the sound speed and the sound intensity change of each segmented audio frequency;
the algorithm module is respectively connected with the processing module and the training module and is used for calculating a final score according to the first characteristic parameter, the second characteristic parameter and a third characteristic parameter of a corresponding time period of a singing audio;
and the evaluation module is connected with the algorithm module and provides corresponding evaluation according to the final score.
Preferably, the processing module comprises:
a collecting unit for collecting each original song;
the classification unit is connected with the collection unit and is used for classifying the type of each collected original song;
the first training unit is connected with the classification unit and used for training the classified original song;
the first extraction unit is connected with the first training unit and is used for extracting a first characteristic parameter according to a training result of the first training unit;
and the first computing unit is connected with the first extracting unit and is used for computing the first characteristic parameter and the third characteristic parameter of the singing audio in a corresponding time period so as to obtain a singing type matching rate.
Preferably, the training module comprises:
a segmentation unit, configured to segment each original song according to a beat of the music score to form the segmented audio;
and the second training unit is connected to the segmenting unit and is used for respectively training the sound speed and the sound intensity change in the segmented audio so as to respectively extract the second characteristic parameters of the sound speed and the sound intensity change of each segmented audio.
Preferably, the algorithm module comprises:
a first comparing unit, configured to compare the second characteristic parameter of the change of the speed of sound and the sound intensity with the third characteristic parameter of the corresponding time interval of the singing audio, so as to obtain a speed of sound matching rate and a sound intensity change matching rate;
the second calculation unit is used for calculating the pitch value of each beat of the singing audio;
the second comparison unit is connected with the second calculation unit and is used for comparing the pitch value of each beat of the singing audio with the pitch value of each beat of the original song to obtain a correct pitch score;
and the third calculating unit is connected with the second comparing unit and is used for calculating the final score according to the singing type matching rate, the sound speed matching rate, the sound intensity change matching rate and the correct pitch score.
Preferably, the system further comprises a suggestion module, wherein the suggestion module is connected with the evaluation module and is used for providing corresponding suggestions for singers after the evaluations are provided according to the singing type matching rate, the sound speed matching rate, the sound intensity change matching rate and the pitch correct score.
The invention also provides an evaluation method for singing songs, applied to the above evaluation system, the evaluation method comprising the following steps:
step S1, processing the type of each original song by adopting a processing module to extract a first characteristic parameter of the original song;
step S2, a training module is adopted for respectively training the sound speed and the sound intensity change of each segmented audio of each original song so as to respectively extract second characteristic parameters of the sound speed and the sound intensity change of each segmented audio;
step S3, an algorithm module is adopted for calculating a final score according to the first characteristic parameter, the second characteristic parameter and a third characteristic parameter of a corresponding time interval of a singing audio;
and step S4, adopting an evaluation module for providing corresponding evaluation according to the final score.
Preferably, the step S1 includes:
step S10, a collecting unit is adopted for collecting each original song;
step S11, adopting a classification unit for classifying the type of each collected original song;
step S12, a first training unit is adopted for training the classified original song;
step S13, a first extraction unit is adopted for extracting a first characteristic parameter according to the training result of the first training unit;
step S14, a first calculating unit is adopted to calculate the first characteristic parameter and the third characteristic parameter of the singing audio corresponding time interval, so as to obtain a singing type matching rate.
Preferably, the step S2 includes:
step S20, a segmentation unit is adopted for segmenting each original song according to the beat of the music score to form the segmented audio;
step S21, a second training unit is adopted to train the speed of sound and the change in sound intensity in the segmented audio respectively, so as to extract the second characteristic parameters of the speed of sound and the change in sound intensity of each segmented audio respectively.
Preferably, the step S3 includes:
step S30, a first comparing unit is adopted to compare the second characteristic parameter of the change of the speed of sound and the sound intensity with the third characteristic parameter of the corresponding time interval of the singing audio, so as to obtain a matching rate of speed of sound and a matching rate of change of sound intensity;
step S31, a second calculating unit is adopted for calculating the pitch value of each beat of the singing audio;
step S32, a second comparison unit is adopted for comparing the pitch value of each beat of the singing audio with the pitch value of each beat of the original song to obtain a correct pitch score;
step S33, a third calculating unit is adopted to calculate the final score according to the singing style matching rate, the sound velocity matching rate, the sound intensity variation matching rate, and the pitch correct score.
Preferably, the method further includes step S5, which is to adopt a suggestion module, and after providing evaluations according to the singing genre matching rate, the sound velocity matching rate, the sound intensity variation matching rate, and the pitch correct score, provide corresponding suggestions to the singer.
The technical scheme of the invention has the following beneficial effects. In the evaluation system and method for singing songs, the algorithm module calculates a final score from the first characteristic parameter obtained by the processing module, the second characteristic parameters obtained after the training module separately trains on the sound speed and the sound intensity change of the segmented audio of each original song, and the third characteristic parameter of the corresponding time period of the singing audio; it then evaluates the songs sung by students in practice or examinations and correspondingly provides customized suggestions, helping the students improve their singing level.
Drawings
Embodiments of the present invention will now be described more fully hereinafter with reference to the accompanying drawings. The drawings are, however, to be regarded as illustrative and explanatory only and are not restrictive of the scope of the invention.
FIG. 1 is an overall functional block diagram of an evaluation system of an embodiment of the present invention;
FIG. 2 is a functional block diagram of a processing module in an embodiment of the invention;
FIG. 3 is a functional block diagram of a training module in an embodiment of the present invention;
FIG. 4 is a functional block diagram of an algorithm module in an embodiment of the present invention;
FIG. 5 is a flow diagram of the overall steps of a rating method in an embodiment of the present invention;
FIG. 6 is a flowchart of the steps of step S1 in an embodiment of the present invention;
FIG. 7 is a flowchart of the steps of step S2 in an embodiment of the present invention;
FIG. 8 is a flowchart of the steps of step S3 in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
The invention is further described with reference to the following drawings and specific examples, which are not intended to be limiting.
The invention provides an evaluation system for singing songs, which comprises:
the processing module 1 is used for processing the type of each original song to extract a first characteristic parameter of the original song;
the training module 2 is connected with the processing module 1 and is used for respectively training on the sound speed and the sound intensity change of each segmented audio of each original song so as to respectively extract second characteristic parameters of the sound speed and the sound intensity change of each segmented audio;
the algorithm module 3 is respectively connected with the processing module 1 and the training module 2 and is used for calculating a final score according to the first characteristic parameter, the second characteristic parameter and a third characteristic parameter of a corresponding time period of a singing audio;
and the evaluation module 4 is connected with the algorithm module 3 and provides a corresponding evaluation according to the final score.
With the evaluation system provided above, as shown in fig. 1, the processing module 1 first classifies all the original songs. The time signature of each original song can be obtained, for example 2/4, 4/4, 3/4, 3/8, 6/8, 9/8, 2/2 or 1/4, and the tempo of each original song can also be obtained, for example 50-140 beats per minute (with every 10 beats forming one class), so that the type or style of each original song can be determined as cheerful, fluent, moderate, relaxed, high-spirited, intense, hot and so on. For example, the time signature of the original song 'Love Fills the World' is 4/4 and its tempo is 76 (in the 70-79 class) beats per minute, so its type or style is determined to be relaxed. The original song is then further processed to extract its first characteristic parameter for subsequent calculation and scoring.
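The tempo-to-style mapping described above can be sketched as follows. The patent only pins down one concrete pairing (70-79 beats per minute is "relaxed", as in 'Love Fills the World'); the remaining bucket labels below, and the function name, are illustrative assumptions.

```python
def classify_style(bpm: int) -> str:
    """Map a song's tempo (beats per minute) to a coarse style label.

    Buckets of 10 BPM follow the patent's classification scheme; every
    label except "relaxed" (70-79 BPM, the patent's own example) is an
    assumption made for illustration.
    """
    buckets = [
        (50, "slow"), (60, "moderate"), (70, "relaxed"),
        (80, "fluent"), (90, "cheerful"), (100, "lively"),
        (110, "high-spirited"), (120, "intense"), (130, "hot"),
    ]
    for lower, label in reversed(buckets):
        if bpm >= lower:  # first bucket whose lower bound is reached
            return label
    return "very slow"  # below the 50-140 BPM range the patent covers

print(classify_style(76))  # the patent's example: 76 BPM is "relaxed"
```

The style label then feeds the type classification from which the first characteristic parameter is extracted.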
Further, the training module 2 separately trains on the sound speed and the sound intensity change of each segmented audio of each original song. In this embodiment, the training methods adopted are CNN (Convolutional Neural Networks) and ResNet (Residual Neural Networks), so that a large number of second characteristic parameters of the sound speed and the sound intensity change of each segmented audio are extracted respectively.
Further, the first characteristic parameter obtained by the processing module 1, the second characteristic parameters obtained by the training module 2 and the third characteristic parameter of the corresponding time period of the singing audio are processed by the algorithm module 3, so that the final score of the song as sung is obtained.
Furthermore, the evaluation module 4 gives the singer a corresponding evaluation according to the total score obtained by the algorithm module 3, so as to help the student improve the singing level.
In addition, the first characteristic parameter, the second characteristic parameter and the third characteristic parameter in the present invention are all high-dimensional characteristic parameters.
In a preferred embodiment, the processing module 1 comprises:
a collecting unit 10 for collecting each original song;
a classifying unit 11, connected with the collecting unit 10, for classifying the type of each collected original song;
a first training unit 12, connected with the classifying unit 11, for training on the classified original songs;
a first extraction unit 13, connected with the first training unit 12, for extracting a first characteristic parameter according to the training result of the first training unit 12;
and a first calculating unit 14, connected with the first extraction unit 13, configured to calculate on the first characteristic parameter and a third characteristic parameter of a corresponding time period of the singing audio, so as to obtain a singing type matching rate.
Specifically, as shown in fig. 2, the processing module 1 first collects all the original songs through the collecting unit 10, then classifies the type of each collected original song, obtaining the beat and tempo of each song so as to determine the type or style of each original song.
Further, the first training unit 12 trains on the classified original songs using the CNN (Convolutional Neural Networks) and ResNet (Residual Neural Networks) algorithms, and the first extraction unit 13 then extracts a large number of first characteristic parameters.
Further, the first calculating unit 14 computes the cosine distance between the first characteristic parameter and the third characteristic parameter of the corresponding time period of the singing audio, so as to obtain the singing type matching rate and, from it, a style suggestion.
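The cosine comparison that yields the singing type matching rate can be sketched as below; the toy feature vectors and the function name are illustrative assumptions, since the patent does not disclose the dimensionality or contents of the characteristic parameters.

```python
import numpy as np

def singing_type_match_rate(first_param, third_param):
    """Cosine similarity between the original song's first characteristic
    parameter and the performance's third characteristic parameter for the
    same time period; 1.0 means the vectors point in the same direction."""
    a = np.asarray(first_param, dtype=float)
    b = np.asarray(third_param, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-dimensional vectors; the real parameters are high-dimensional.
rate = singing_type_match_rate([0.2, 0.8, 0.1], [0.25, 0.75, 0.15])
```

For non-negative feature vectors the rate falls in [0, 1], which makes it directly usable as a matching percentage in the final score.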
In a preferred embodiment, the training module 2 comprises:
a segmentation unit 20, configured to segment each original song according to the beat of the music score to form segmented audio;
and a second training unit 21, connected to the segmentation unit 20 and configured to train respectively on the sound speed and the sound intensity change in the segmented audio, so as to respectively extract second characteristic parameters of the sound speed and the sound intensity change of each segmented audio.
Specifically, as shown in fig. 3, the training module 2 first segments each original song according to the beat of the music score by means of the segmentation unit 20 to form small segmented audios, and then separately trains on the sound speed and the sound intensity change of each small segmented audio by means of the second training unit 21 to extract a large number of second characteristic parameters of the sound speed and the sound intensity change of each segmented audio.
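Beat-wise segmentation of this kind can be sketched as follows, assuming a constant tempo taken from the music score; the helper name and the constant-tempo simplification are ours, not the patent's.

```python
import numpy as np

def segment_by_beats(samples: np.ndarray, sample_rate: int, bpm: float):
    """Split a mono waveform into one chunk per beat of the score.

    Assumes the tempo is constant; a score whose tempo changes mid-song
    would need per-section BPM values.
    """
    samples_per_beat = int(round(sample_rate * 60.0 / bpm))
    return [samples[i:i + samples_per_beat]
            for i in range(0, len(samples), samples_per_beat)]

audio = np.zeros(44100 * 4)                    # 4 s of silence at 44.1 kHz
segments = segment_by_beats(audio, 44100, 60)  # 60 BPM: one beat per second
```

Each resulting chunk is what the second training unit would train on for sound speed and sound intensity change.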
In a preferred embodiment, the algorithm module 3 comprises:
a first comparison unit 30, configured to compare the second characteristic parameters of the sound speed and the sound intensity change with the third characteristic parameter of the corresponding time period of the singing audio, so as to obtain a sound speed matching rate and a sound intensity change matching rate;
a second calculating unit 31 for calculating the pitch value of each beat of the singing audio;
a second comparison unit 32, connected to the second calculating unit 31, for comparing the pitch value of each beat of the singing audio with the pitch value of each beat of the original song to obtain a pitch correct score;
and a third calculating unit 33, connected to the second comparison unit 32, for calculating the final score according to the singing type matching rate, the sound speed matching rate, the sound intensity change matching rate and the pitch correct score.
Specifically, as shown in fig. 4, the algorithm module 3 first compares, through the first comparison unit 30, the second characteristic parameters of the sound speed and the sound intensity change with the third characteristic parameter of the corresponding time period of the singing audio, so as to obtain the sound speed matching rate and the sound intensity change matching rate.
Further, the second calculating unit 31 calculates the pitch value of each beat of the singing audio using the FFT (Fast Fourier Transform); the second comparison unit 32 compares the pitch value of each beat of the singing audio with the pitch value of each beat of the original song, scoring 1 for a correct beat and 0 for an incorrect beat, from which the pitch correct score is calculated; and the third calculating unit 33 obtains the final score according to the following formula:
(The scoring formula appears only as an image in the original publication and is not recoverable from the text; it combines the quantities defined below.)
wherein S represents the final score;
MaxS represents the total score;
o represents a sound velocity matching rate;
p represents a sound intensity change matching rate;
n represents the pitch correct score;
x represents the number of bars of the song;
y represents the number of beats of the song;
i represents the score weight of the sound speed and the sound intensity change;
j represents the score weight for the pitch correct score.
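The per-beat pitch comparison that produces n above can be sketched with an FFT as below. The peak-picking pitch estimator and the 3% frequency tolerance are simplifying assumptions; the patent states only that the FFT is used and that each beat scores 1 when correct and 0 otherwise.

```python
import numpy as np

def pitch_of_beat(samples: np.ndarray, sample_rate: int) -> float:
    """Estimate the dominant pitch (Hz) of one beat as the FFT peak.
    A production system would use a dedicated pitch tracker."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return float(freqs[np.argmax(spectrum[1:]) + 1])  # skip the DC bin

def pitch_correct_score(sung_beats, original_beats, sample_rate, tol=0.03):
    """Score 1 per beat whose sung pitch is within `tol` (relative) of the
    original's pitch, else 0, and sum -- the quantity n above."""
    n = 0
    for sung, orig in zip(sung_beats, original_beats):
        f_sung = pitch_of_beat(sung, sample_rate)
        f_orig = pitch_of_beat(orig, sample_rate)
        if abs(f_sung - f_orig) <= tol * f_orig:
            n += 1
    return n

sr = 8000
t = np.arange(sr) / sr
a440 = np.sin(2 * np.pi * 440 * t)  # one "beat" sung at A4
a220 = np.sin(2 * np.pi * 220 * t)  # one "beat" sung an octave low
n = pitch_correct_score([a440, a220], [a440, a440], sr)  # one beat correct
```

With x bars of y beats each, n can range from 0 to x times y, matching the bar and beat counts in the variable list above.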
In a preferred embodiment, the system further comprises a suggestion module 5, connected to the evaluation module 4 and configured to provide corresponding suggestions to the singer after evaluations have been made according to the singing type matching rate, the sound speed matching rate, the sound intensity change matching rate and the pitch correct score.
Specifically, as shown in fig. 1, after the evaluation has been made according to the singing type matching rate, the sound speed matching rate, the sound intensity change matching rate and the pitch correct score, corresponding suggestions are given to the singer. Taking the 2nd beat of the 18th bar of the song 'Beijing Welcomes You' as an example: the audio of the original song should rise from low to high, but the singer may mistakenly sing from high to low; the singer is therefore advised to pay attention to mastering the sound intensity and to practise the 2nd beat of the 18th bar repeatedly, so that the singer's singing level is improved.
The invention also provides an evaluation method for singing songs, applied to the above evaluation system, the evaluation method comprising the following steps:
step S1, a processing module 1 is adopted to process the type of each original song so as to extract a first characteristic parameter of the original song;
step S2, a training module 2 is adopted to respectively train on the sound speed and the sound intensity change of each segmented audio of each original song so as to respectively extract second characteristic parameters of the sound speed and the sound intensity change of each segmented audio;
step S3, an algorithm module 3 is adopted to calculate a final score according to the first characteristic parameter, the second characteristic parameter and a third characteristic parameter of a corresponding time period of a singing audio;
and step S4, an evaluation module 4 is adopted to provide a corresponding evaluation according to the final score.
Through the evaluation method provided above, as shown in fig. 5, the processing module 1 is first adopted to classify all the original songs. The time signature of each original song can be obtained, for example 2/4, 4/4, 3/4, 3/8, 6/8, 9/8, 2/2 or 1/4, and the tempo of each original song can also be obtained, for example 50-140 beats per minute (with every 10 beats forming one class), so that the type or style of each original song can be determined as cheerful, fluent, moderate, relaxed, high-spirited, intense, hot and so on. For example, the time signature of the original song 'Love Fills the World' is 4/4 and its tempo is 76 (in the 70-79 class) beats per minute, so its type or style is determined to be relaxed. The original song is then further processed to extract its first characteristic parameter for subsequent calculation and scoring.
Further, the training module 2 is adopted to separately train on the sound speed and the sound intensity change of each segmented audio of each original song; in this embodiment, the training methods adopted are CNN (Convolutional Neural Networks) and ResNet (Residual Neural Networks), so that a large number of second characteristic parameters of the sound speed and the sound intensity change of each segmented audio are extracted respectively.
Further, the algorithm module 3 is adopted to process the first characteristic parameter obtained by the processing module 1, the second characteristic parameters obtained by the training module 2 and the third characteristic parameter of the corresponding time period of the singing audio, so as to obtain the final score of the song as sung.
Furthermore, an evaluation module 4 is adopted to give the singer a corresponding evaluation according to the total score obtained by the algorithm module 3, so as to help the student improve the singing level.
In addition, the first characteristic parameter, the second characteristic parameter and the third characteristic parameter in the present invention are all high-dimensional characteristic parameters.
In a preferred embodiment, step S1 includes:
step S10, a collecting unit 10 is adopted for collecting each original song;
step S11, a classifying unit 11 is adopted for classifying the type of each collected original song;
step S12, a first training unit 12 is adopted for training on the classified original songs;
step S13, a first extraction unit 13 is adopted to extract a first characteristic parameter according to the training result of the first training unit 12;
step S14, a first calculating unit 14 is adopted to calculate on the first characteristic parameter and the third characteristic parameter of the corresponding time period of the singing audio, so as to obtain a singing type matching rate.
Specifically, as shown in fig. 6, in step S1 the collecting unit 10 is first used to collect all the original songs, the type of each collected original song is then classified, and the beat and tempo of each song are obtained so as to determine the type or style of each original song.
Further, the first training unit 12 is adopted to train on the classified original songs using the CNN (Convolutional Neural Networks) and ResNet (Residual Neural Networks) algorithms, and the first extraction unit 13 is then adopted to extract a large number of first characteristic parameters.
Further, the first calculating unit 14 is adopted to compute the cosine distance between the first characteristic parameter and the third characteristic parameter of the corresponding time period of the singing audio, so as to obtain the singing type matching rate and, from it, a style suggestion.
In a preferred embodiment, step S2 includes:
step S20, a segmentation unit 20 is adopted for segmenting each original song according to the beat of the music score to form segmented audio;
step S21, a second training unit 21 is adopted to train respectively on the sound speed and the sound intensity change in the segmented audio, so as to respectively extract the second characteristic parameters of the sound speed and the sound intensity change of each segmented audio.
Specifically, as shown in fig. 7, in step S2 the segmentation unit 20 is first used to segment each original song according to the beat of the music score to form small segmented audios, and the second training unit 21 is then used to train separately on the sound speed and the sound intensity change of each small segmented audio, so as to extract a large number of second characteristic parameters of the sound speed and the sound intensity change of each segmented audio.
In a preferred embodiment, step S3 includes:
step S30, a first comparison unit 30 is adopted to compare the second characteristic parameters of the sound speed and the sound intensity change with the third characteristic parameter of the corresponding time period of the singing audio, so as to obtain a sound speed matching rate and a sound intensity change matching rate;
step S31, a second calculating unit 31 is adopted for calculating the pitch value of each beat of the singing audio;
step S32, a second comparison unit 32 is adopted to compare the pitch value of each beat of the singing audio with the pitch value of each beat of the original song to obtain a pitch correct score;
step S33, a third calculating unit 33 is adopted to calculate the final score according to the singing type matching rate, the sound speed matching rate, the sound intensity change matching rate and the pitch correct score.
Specifically, as shown in fig. 8, in step S3 the first comparison unit 30 is first adopted to compare the second characteristic parameters of the sound speed and the sound intensity change with the third characteristic parameter of the corresponding time period of the singing audio, so as to obtain the sound speed matching rate and the sound intensity change matching rate.
Further, the second calculating unit 31 calculates the pitch value of each beat of the singing audio using the FFT (Fast Fourier Transform); the second comparison unit 32 compares the pitch value of each beat of the singing audio with the pitch value of each beat of the original song, scoring 1 for a correct beat and 0 for an incorrect beat, from which the pitch correct score is calculated; and the third calculating unit 33 obtains the final score according to the formula given earlier:
wherein S represents the final score;
MaxS represents the total score;
o represents a sound velocity matching rate;
p represents a sound intensity change matching rate;
n represents the pitch correct score;
x represents the number of bars of the song;
y represents the number of beats of the song;
i represents the score weight of the sound speed and the sound intensity change;
j represents the score weight for the pitch correct score.
In a preferred embodiment, the method further comprises step S5, in which a suggestion module 5 is adopted to provide a corresponding suggestion to the singer after the evaluation is made according to the singing type matching rate, the sound velocity matching rate, the sound intensity variation matching rate and the correct pitch score.
Specifically, as shown in fig. 5, after the evaluation is made according to the singing type matching rate, the sound velocity matching rate, the sound intensity variation matching rate and the correct pitch score, the suggestion module 5 is adopted to give a corresponding suggestion to the singer. Taking the 2nd beat of the 18th bar of the song "Beijing Welcomes You" as an example, the audio of the original song should go from low to high, but the singer may sing it from high to low; the singer can then be advised to pay attention to the control of sound intensity and to repeatedly practice that beat, so that the singer's level is improved.
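The per-beat pitch check described above can be sketched as follows. The FFT-peak pitch estimate and the half-semitone tolerance are assumptions for illustration (the text only states that a correct beat scores 1 and an incorrect one scores 0); NumPy is used here purely for demonstration:

```python
import numpy as np

# Sketch of the per-beat pitch scoring: estimate each beat's
# fundamental with an FFT peak and score 1 if it matches the
# reference pitch within half a semitone (50 cents), else 0.
# The tolerance and the simple peak-picking are assumptions.

SR = 22050  # sample rate (Hz)

def fft_pitch(beat_samples, sr=SR):
    """Frequency (Hz) of the strongest FFT bin in one beat of audio."""
    spectrum = np.abs(np.fft.rfft(beat_samples))
    freqs = np.fft.rfftfreq(len(beat_samples), d=1.0 / sr)
    return freqs[np.argmax(spectrum)]

def pitch_score(sung_beats, reference_hz):
    """Sum of per-beat scores: 1 if within 50 cents of the reference, else 0."""
    score = 0
    for samples, ref in zip(sung_beats, reference_hz):
        cents = 1200 * np.log2(fft_pitch(samples) / ref)
        score += 1 if abs(cents) <= 50 else 0
    return score

# Two synthetic half-second beats: an A4 (440 Hz) sung against an A4
# reference (correct), and the same A4 sung against a B4 reference
# (a whole tone flat, so incorrect).
t = np.arange(int(0.5 * SR)) / SR
beats = [np.sin(2 * np.pi * 440.0 * t), np.sin(2 * np.pi * 440.0 * t)]
print(pitch_score(beats, [440.0, 493.88]))  # 1
```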
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention.

Claims (10)

1. A system for evaluating a sung song, comprising:
the processing module is used for processing the type of each original song to extract a first characteristic parameter of the original song;
the training module is connected with the processing module and is used for respectively training the sound velocity and sound intensity variation of each segmented audio of each original song, so as to respectively extract second characteristic parameters of the sound velocity and sound intensity variation of each segmented audio;
the algorithm module is respectively connected with the processing module and the training module and is used for calculating a final score according to the first characteristic parameter, the second characteristic parameter and a third characteristic parameter of a corresponding time period of a singing audio;
and the evaluation module is connected with the algorithm module and provides corresponding evaluation according to the final score.
2. The system of claim 1, wherein the processing module comprises:
a collecting unit for collecting each original song;
the classification unit is connected with the collection unit and is used for classifying the type of each collected original song;
the first training unit is connected with the classification unit and used for training the classified original song;
the first extraction unit is connected with the first training unit and is used for extracting a first characteristic parameter according to a training result of the first training unit;
and the first computing unit is connected with the first extracting unit and is used for computing the first characteristic parameter and the third characteristic parameter of the singing audio in a corresponding time period so as to obtain a singing type matching rate.
3. The system of claim 1, wherein the training module comprises:
a segmentation unit, configured to segment each original song according to a beat of the music score to form the segmented audio;
and the second training unit is connected to the segmentation unit and is used for respectively training the sound velocity and sound intensity variation in the segmented audio, so as to respectively extract the second characteristic parameters of the sound velocity and sound intensity variation of each segmented audio.
4. The system of claim 2, wherein the algorithm module comprises:
a first comparing unit, configured to compare the second characteristic parameter of the sound velocity and sound intensity variation with the third characteristic parameter of the corresponding time period of the singing audio, so as to obtain a sound velocity matching rate and a sound intensity variation matching rate;
the second calculating unit is used for calculating the pitch value of each beat of the singing audio;
the second comparing unit is connected with the second calculating unit and is used for comparing the pitch value of each beat of the singing audio with the pitch value of each beat of the original song to obtain a correct pitch score;
and the third calculating unit is connected with the second comparing unit and is used for calculating the final score according to the singing type matching rate, the sound velocity matching rate, the sound intensity variation matching rate and the correct pitch score.
5. The system for evaluating a sung song according to claim 2 or 4, further comprising a suggestion module, wherein the suggestion module is connected to the evaluation module and is configured to provide a corresponding suggestion to a singer after the evaluation is made according to the singing type matching rate, the sound velocity matching rate, the sound intensity variation matching rate and the correct pitch score.
6. A method for evaluating a sung song, applied to the system for evaluating a sung song according to any one of claims 1 to 5, the method comprising:
step S1, processing the type of each original song by adopting a processing module to extract a first characteristic parameter of the original song;
step S2, a training module is adopted to respectively train the sound velocity and sound intensity variation of each segmented audio of each original song, so as to respectively extract second characteristic parameters of the sound velocity and sound intensity variation of each segmented audio;
step S3, an algorithm module is adopted to calculate a final score according to the first characteristic parameter, the second characteristic parameter and a third characteristic parameter of a corresponding time period of a singing audio;
and step S4, adopting an evaluation module for providing corresponding evaluation according to the final score.
7. The method of claim 6, wherein the step S1 includes:
step S10, a collecting unit is adopted for collecting each original song;
step S11, adopting a classification unit for classifying the type of each collected original song;
step S12, a first training unit is adopted for training the classified original song;
step S13, a first extraction unit is adopted for extracting a first characteristic parameter according to the training result of the first training unit;
step S14, a first calculating unit is adopted to calculate the first characteristic parameter and the third characteristic parameter of the singing audio corresponding time interval, so as to obtain a singing type matching rate.
8. The method of claim 6, wherein the step S2 includes:
step S20, a segmentation unit is adopted to segment each original song according to the beats of the music score to form the segmented audio;
step S21, a second training unit is adopted to respectively train the sound velocity and sound intensity variation in the segmented audio, so as to respectively extract the second characteristic parameters of the sound velocity and sound intensity variation of each segmented audio.
9. The method of claim 7, wherein the step S3 includes:
step S30, a first comparing unit is adopted to compare the second characteristic parameter of the sound velocity and sound intensity variation with the third characteristic parameter of the corresponding time period of the singing audio, so as to obtain a sound velocity matching rate and a sound intensity variation matching rate;
step S31, a second calculating unit is adopted for calculating the pitch value of each beat of the singing audio;
step S32, a second comparison unit is adopted for comparing the pitch value of each beat of the singing audio with the pitch value of each beat of the original song to obtain a correct pitch score;
step S33, a third calculating unit is adopted to calculate the final score according to the singing type matching rate, the sound velocity matching rate, the sound intensity variation matching rate and the correct pitch score.
10. The method according to claim 7 or 9, further comprising step S5, in which a suggestion module is adopted to provide a corresponding suggestion to the singer after the evaluation is made according to the singing type matching rate, the sound velocity matching rate, the sound intensity variation matching rate and the correct pitch score.
CN201911096229.9A | Filed 2019-11-11 | Evaluation system and method for singing songs | Withdrawn | CN110808069A

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201911096229.9A | 2019-11-11 | 2019-11-11 | Evaluation system and method for singing songs


Publications (1)

Publication Number | Publication Date
CN110808069A (en) | 2020-02-18

Family

ID=69501940

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201911096229.9A (Withdrawn, published as CN110808069A) | Evaluation system and method for singing songs | 2019-11-11 | 2019-11-11

Country Status (1)

Country | Link
CN | CN110808069A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN112424802A (en)* | 2020-10-01 | 2021-02-26 | Cao Qingheng | Musical instrument teaching system, use method thereof and computer readable storage medium
CN112534425A (en)* | 2020-10-15 | 2021-03-19 | Cao Qingheng | Singing teaching system, use method thereof and computer readable storage medium
CN112634939A (en)* | 2020-12-11 | 2021-04-09 | Tencent Music Entertainment Technology (Shenzhen) Co., Ltd. | Audio identification method, device, equipment and medium
CN113140228A (en)* | 2021-04-14 | 2021-07-20 | Guangdong University of Technology | Vocal music scoring method based on graph neural network

Citations (17)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
JPH11184478A (en)* | 1997-12-18 | 1999-07-09 | Ricoh Co Ltd | Music performance equipment
JP2002175086A (en)* | 2001-10-15 | 2002-06-21 | Yamaha Corp | Karaoke equipment
JP2002351477A (en)* | 2001-05-29 | 2002-12-06 | Daiichikosho Co Ltd | Karaoke device with singing special training function
JP2006031041A (en)* | 2005-08-29 | 2006-02-02 | Yamaha Corp | Karaoke device that sequentially changes the scoring image based on the scoring data output for each phrase
JP2008225111A (en)* | 2007-03-13 | 2008-09-25 | Yamaha Corp | Karaoke device and program
JP2008250182A (en)* | 2007-03-30 | 2008-10-16 | Daiichikosho Co Ltd | Karaoke equipment
CN101441865A (en)* | 2007-11-19 | 2009-05-27 | Shengqu Information Technology (Shanghai) Co., Ltd. | Method and system for grading sing genus game
JP2013213907A (en)* | 2012-04-02 | 2013-10-17 | Yamaha Corp | Evaluation apparatus
CN104064180A (en)* | 2014-06-06 | 2014-09-24 | Baidu Online Network Technology (Beijing) Co., Ltd. | Singing scoring method and device
CN105244041A (en)* | 2015-09-22 | 2016-01-13 | Baidu Online Network Technology (Beijing) Co., Ltd. | Song audition evaluation method and device
CN107103912A (en)* | 2017-04-24 | 2017-08-29 | Xingzhi Technology Co., Ltd. | Student singing performance scoring system for teaching and evaluation
CN107578775A (en)* | 2017-09-07 | 2018-01-12 | Sichuan University | A multi-task speech classification method based on deep neural network
CN108320730A (en)* | 2018-01-09 | 2018-07-24 | Guangzhou Baiguoyuan Information Technology Co., Ltd. | Music classification method and beat point detection method, storage device and computer equipment
CN108520735A (en)* | 2018-02-06 | 2018-09-11 | Nanjing Gezhemeng Network Technology Co., Ltd. | Singing scoring method
CN108877835A (en)* | 2018-05-31 | 2018-11-23 | Shenzhen Lutong Network Technology Co., Ltd. | Method and system for evaluating voice signal
CN109243416A (en)* | 2017-07-10 | 2019-01-18 | Harman International Industries | Apparatus arrangement and method for generating drum patterns
CN109308912A (en)* | 2018-08-02 | 2019-02-05 | Ping An Technology (Shenzhen) Co., Ltd. | Music style recognition method, device, computer equipment and storage medium


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN112424802A (en)* | 2020-10-01 | 2021-02-26 | Cao Qingheng | Musical instrument teaching system, use method thereof and computer readable storage medium
WO2022067832A1 (en)* | 2020-10-01 | 2022-04-07 | Cao Qingheng | Musical instrument teaching system, method for using same, and computer-readable storage medium
CN112534425A (en)* | 2020-10-15 | 2021-03-19 | Cao Qingheng | Singing teaching system, use method thereof and computer readable storage medium
WO2022077405A1 (en)* | 2020-10-15 | 2022-04-21 | Cao Qingheng | Singing instruction system and method for use thereof, and computer-readable storage medium
CN112634939A (en)* | 2020-12-11 | 2021-04-09 | Tencent Music Entertainment Technology (Shenzhen) Co., Ltd. | Audio identification method, device, equipment and medium
CN113140228A (en)* | 2021-04-14 | 2021-07-20 | Guangdong University of Technology | Vocal music scoring method based on graph neural network

Similar Documents

Publication | Title
CN110808069A (en) | Evaluation system and method for singing songs
CN109448754B (en) | Multidimensional singing scoring system
Dighe et al. | Scale independent raga identification using chromagram patterns and swara based features
CN106485984B (en) | An intelligent teaching method and device for piano
Mion et al. | Score-independent audio features for description of music expression
CN108549675B (en) | A piano teaching method based on big data and neural network
Barrington et al. | Modeling music as a dynamic texture
CN108257614A (en) | The method and its system of audio data mark
CN102880693A (en) | Music recommendation method based on individual vocality
Mokhsin et al. | Automatic music emotion classification using artificial neural network based on vocal and instrumental sound timbres
Clayton et al. | Raga Classification From Vocal Performances Using Multimodal Analysis
Zhou et al. | Optimization of Multimedia Computer-aided Interaction System of Vocal Music Teaching Based on Voice Recognition
CN112201100A (en) | Music singing scoring system and method for evaluating artistic quality of primary and secondary schools
Han et al. | Finding tori: Self-supervised learning for analyzing korean folk song
Kosta et al. | A deep learning method for melody extraction from a polyphonic symbolic music representation
Wang et al. | Research on intelligent recognition and classification algorithm of music emotion in complex system of music performance
Koops et al. | Chord label personalization through deep learning of integrated harmonic interval-based representations
CN111159465A (en) | Song classification method and device
Gong et al. | Pitch contour segmentation for computer-aided jinju singing training
CN110910714A (en) | Piano learning system
Prasad Reddy et al. | Automatic Raaga identification system for Carnatic music using hidden Markov model
Yang et al. | A multi-stage automatic evaluation system for sight-singing
Nichols et al. | Automatically discovering talented musicians with acoustic analysis of youtube videos
Ramirez et al. | Automatic performer identification in commercial monophonic jazz performances
Tsunoo et al. | Music mood classification by rhythm and bass-line unit pattern analysis

Legal Events

Code | Title | Details
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
WW01 | Invention patent application withdrawn after publication | Application publication date: 2020-02-18
CI02 | Correction of invention patent application | Correction item: withdrawal of application for invention after its publication; Correct: revoke the announcement; False: withdrawal of an application for publication of a patent for invention; Number: 48-02; Volume: 37
RJ01 | Rejection of invention patent application after publication | Application publication date: 2020-02-18
