Detailed Description
For the purpose of promoting an understanding of the principles and advantages of the disclosure, reference will now be made in detail to the drawings. It is apparent that the embodiments described are only some, not all, of the embodiments of the disclosure. All other embodiments that a person of ordinary skill in the art would obtain, based on the embodiments in this disclosure and without making any inventive effort, fall within the scope of protection of this disclosure.
The terminology used in the embodiments of the disclosure is for the purpose of describing particular embodiments only and is not intended to limit the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise; "a plurality" generally means at least two.
It should be understood that the term "and/or" as used herein merely describes an association relationship between the associated objects, meaning that three relationships may exist; e.g., A and/or B may represent: A exists alone, A and B exist together, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
It should be understood that although the terms first, second, third, etc. may be used in embodiments of the present disclosure, these descriptions should not be limited to these terms. These terms are only used to distinguish one from another. For example, a first may also be referred to as a second, and similarly, a second may also be referred to as a first, without departing from the scope of embodiments of the present disclosure.
The word "if", as used herein, may be interpreted as "when" or "upon" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (the stated condition or event) is detected", or "in response to detecting (the stated condition or event)", depending on the context.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such product or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a product or apparatus comprising that element.
In particular, symbols and/or numerals appearing in the description that are not marked in the description of the figures are not numbered.
Alternative embodiments of the present disclosure are described in detail below with reference to the drawings.
Example 1
Provided in the present disclosure is an embodiment of a method for generating a communication group.
The embodiment of the disclosure is applied to a client terminal. A reviewer can enter a review mode for a presentation by using the client terminal; for example, the reviewer can enter the main interface of the review mode by logging in to a review platform. In the main interface of the review mode, the reviewer can select and open, from the presentations used by a plurality of teachers in teaching, a presentation for review, and thereby enter a secondary interface of the review mode. As shown in fig. 2, a presentation image of the presentation is displayed at the upper left of the secondary interface, and note information automatically generated during teaching is displayed at the lower left, where a text window 22 in the left half is used to display the recommended text in the note information, and an audio window 23 in the right half is used to display the audio clips in the loaded note information. In the note mode, each piece of note information corresponds to a sentence text in the presentation image, and each sentence text carries one complete semantic unit; the audio clip comes from a teaching video of the teacher and carries one complete explanation of the sentence text in the synchronously played presentation image.
A presentation refers to a series of static presentation images made into dynamic slides for playback, which makes an explanation more fluent and vivid and improves its efficiency. An example is a PowerPoint presentation, i.e., a PPT.
In some embodiments, the method comprises the steps of:
Step S100-1, in the review mode, in response to the reviewer determining a presentation, acquiring third identification information of the presentation.
In the embodiment of the disclosure, each presentation has unique third identification information for distinguishing it from other presentations.
The reviewer determining the presentation can be understood as follows: a plurality of presentation files for different courses are displayed on the main interface of the review platform, and the reviewer selects any one of them as the presentation file to review this time.
Step S100-2, acquiring historical communication information related to the presentation from the communication data set based on the third identification information.
The communication data set stores the communication information historically published in communication groups by all reviewers for presentation files. The communication data set comprises the third identification information of each presentation and the associated historical communication information, wherein the historical communication information includes: a historical communication time point, historical communication member information, and historical communication text.
The historical communication text refers to communication content historically published in the communication window of a communication group.
A communication member is a reviewer who has published communication text. The communication member information may be the reviewer's name, the reviewer's screen name, or other identity information representing the reviewer, and can be obtained when the reviewer logs in to the online review platform.
Step S100-3, generating a second communication window 25 in the user interface.
The second communication window 25 is used for displaying a communication group for exchanging learning insights on the presentation. For example, as shown in fig. 2, the second communication window 25 is displayed at the lower right of the secondary interface of the review platform.
Step S100-4, generating a group member list in the second communication window 25 based on the history communication member information.
Step S100-5, displaying the historical communication time points and the historical communication text in the second communication window 25 in chronological order.
The embodiments of the present disclosure display, in the communication group of the second communication window 25, the communication content of all group members for the same presentation. Each reviewer can timely obtain the communication information of the current presentation through the communication group in the second communication window 25 and keep track of the overall communication dynamics.
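The member-list and chronological-display behavior of steps S100-4 and S100-5 can be sketched as follows. This is a minimal illustration in Python; the record field names (`time_point`, `member`, `text`) are assumed for illustration and are not taken from the disclosure.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class HistoryRecord:
    time_point: datetime  # historical communication time point
    member: str           # historical communication member information
    text: str             # historical communication text

def group_member_list(records):
    """Deduplicated member list, in order of first appearance (cf. step S100-4)."""
    return list(dict.fromkeys(r.member for r in records))

def render_chronologically(records):
    """Display lines sorted by time point, oldest first (cf. step S100-5)."""
    ordered = sorted(records, key=lambda r: r.time_point)
    return [f"[{r.time_point:%H:%M}] {r.member}: {r.text}" for r in ordered]
```

A reviewer who publishes several messages appears in the member list only once, while each message still occupies its own chronological slot in the window.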
The following describes in detail a method for generating a communication group according to an embodiment of the present disclosure with reference to fig. 1. The method comprises the following steps:
in step S101, in a review mode of a presentation, first identification information of a presentation image determined by a reviewer is acquired in response to the presentation image being displayed in a presentation window 21 of a user interface.
The user interface described below in the embodiments of the present disclosure refers to a secondary interface in a review platform.
The presentation image determined by the reviewer being displayed in the presentation window 21 of the user interface can be understood as follows: the reviewer plays the presentation, and each playback of a presentation image in the presentation window 21 corresponds to determining one presentation image displayed in the presentation window 21 of the user interface.
The first identification information is information unique to a presentation image within the presentation; each presentation image in the presentation has different first identification information identifying it.
Step S102, acquiring all note information of each sentence text in the presentation image based on the first identification information.
At least one sentence text is included in the presentation image, and each sentence text carries one complete semantic unit. A sentence text can be understood as text that expresses a complete meaning; for example, the text in the presentation image is divided at each period, semicolon, exclamation mark, or question mark, yielding the individual sentence texts.
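The punctuation-based division described above can be sketched as follows. This is a minimal illustration in Python; the function name is ours, and it assumes both ASCII and full-width punctuation marks delimit sentence texts.

```python
import re

def split_into_sentences(text):
    """Split recognized presentation text into sentence texts.

    A sentence text ends at a period, semicolon, exclamation mark,
    or question mark (ASCII or full-width), per the scheme above.
    """
    parts = re.split(r'(?<=[.;!?。；！？])\s*', text)
    return [p.strip() for p in parts if p.strip()]
```

Each returned element is one sentence text carrying a complete meaning, with its terminating punctuation preserved.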
Only one sentence text may be included in the presentation image, or a plurality of sentence texts may be included. Each sentence text corresponds to at least one note information.
Wherein the note information includes at least second identification information for identifying the sentence text and at least one recommended text of the sentence text.
The recommended text may be historical note text recorded by others for the audio clip, or machine-generated note text. It may be a single high-quality historical note text or machine note text, or a plurality of historical note texts and/or machine note texts with different recording styles.
Step S103, the recommended text of each piece of note information is respectively displayed in a triggerable manner in a text window 22 of the user interface.
It will be understood that the note information of each sentence text is displayed in the text window 22 of the user interface, and clicking any piece of note information generates a trigger event for that note information. For example, in the text window 22, the note information of each sentence text is displayed in a tree structure: a plurality of subtrees are included under the root node, the parent node of each subtree characterizes the sentence text, and the leaf nodes under the parent node are the note information related to that sentence text.
Step S104, in response to any one of the recommended texts being triggered, second identification information of the sentence text corresponding to the triggered recommended text is determined, historical communication information related to the sentence text is obtained from a communication data set based on the second identification information, a communication group for exchanging learning insights on the sentence text is generated based on the historical communication information, and the communication group is displayed in the first communication window 24 of the user interface.
It will be understood that the communication group for the current sentence text is displayed in the first communication window 24; when any recommended text of another sentence text is triggered, the communication group for that other sentence text is displayed in the first communication window 24 instead.
Since the note information includes at least the second identification information of the sentence text and at least one recommended text of the sentence text, the second identification information has a correspondence with the recommended text. Once the triggered recommended text is determined, the second identification information corresponding to that recommended text is obtained.
The communication data set stores historical communication information published by all reviewers for the sentence texts they are each interested in. The embodiment of the present disclosure obtains from the communication data set only the historical communication information related to the sentence text of interest to the reviewer, and generates from it the communication group displayed in the first communication window 24.
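As a sketch, retrieving only the records for one sentence text might look like the following in Python. The data set is modeled here as a plain list of dicts, and the key names are assumptions made for illustration.

```python
def query_history(dataset, sentence_id):
    """Return only the historical communication records whose second
    identification information matches the triggered sentence text."""
    return [rec for rec in dataset if rec["sentence_id"] == sentence_id]
```

All other records remain in the data set untouched; the filter only controls what is shown in the generated communication group.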
This specific embodiment of the disclosure provides a method for generating a communication group that, with the storage of the original communication data unchanged and the way the original communication groups are used unchanged, automatically generates a corresponding communication group for the note information the reviewer is interested in, and displays in that group only the communication information related to the sentence text corresponding to the note information. This avoids the cluttered, disordered communication information of the original communication group: the relevant communication information the reviewer cares about is displayed together in the communication group of the first communication window 24, so that the reviewer can browse and learn from it in one place, reducing repeated communication information and improving review efficiency.
In some embodiments, the historical communications information includes: history communication time point, history communication member information and history communication text.
Accordingly, generating the communication group for exchanging learning insights on the sentence text based on the historical communication information includes the following steps:
step S104-1, generating a group member list in the first communication window 24 based on the history communication member information.
For example, as shown in FIG. 3, a group member list of the communication group is shown in a member area 241 at the left portion of the first communication window 24.
Step S104-2, displaying the historical communication time points and the historical communication text in the first communication window 24 in chronological order.
For example, as shown in FIG. 3, an entry field 242 at the lower right of the first communication window 24 is used for the reviewer to edit his or her own communication text, and a text field 243 at the upper right of the window is used to show the communication text the reviewer sends to the communication group as well as the communication text other group members publish to it. This matches reviewers' habits with instant communication tools.
In some specific embodiments, after the obtaining all the note information of each sentence text in the presentation image based on the first identification information, the method further includes the following steps:
step S102a-1, obtaining the region position information of the corresponding sentence text based on any one of the note information of each sentence text.
The note information not only comprises the second identification information of the sentence text and at least one recommended text of the sentence text, but also comprises at least one region position information of the sentence text.
The region position information is acquired from a presentation image in a note mode. For example, the region location information is characterized by pixel location information in the presentation image.
Since at least one sentence text is included in the presentation image, and one sentence text may be displayed across a plurality of lines of the text image in the presentation image, the embodiment of the present disclosure sets one piece of region position information for each line. For example, if the sentence text is displayed in a single line of the text image, the sentence text has only one piece of region position information, such as the pixel positions of the 4 vertices of the rectangular region in which that line is located; if the sentence text is displayed across a plurality of lines, the sentence text has a plurality of pieces of region position information, each delimiting the region of one line of the current sentence text in the presentation image.
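The per-line region scheme above can be sketched as data: each displayed line contributes one region, characterized by the pixel positions of the 4 vertices of its bounding rectangle. The field names below are ours, chosen for illustration.

```python
# Each region is the bounding rectangle of one displayed line of the
# sentence text, given as the pixel positions of its 4 vertices
# (image coordinates: origin at top-left, X right, Y down).
def make_region(top_left, bottom_right):
    (x0, y0), (x1, y1) = top_left, bottom_right
    return {"top_left": (x0, y0), "top_right": (x1, y0),
            "bottom_left": (x0, y1), "bottom_right": (x1, y1)}

# A sentence displayed on two lines has two region entries:
two_line_sentence_regions = [
    make_region((50, 20), (120, 50)),
    make_region((50, 55), (300, 85)),
]
```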
Step S102a-2, extracting sentence texts from the presentation image based on the region position information of each sentence text.
Sentence text images can be extracted from the presentation image using the region position information of each sentence text, and the sentence text in each sentence text image can then be extracted by image recognition. The process of extracting sentence text by image recognition is not described in detail in this embodiment and may be implemented with reference to various implementations in the related art.
Step S102a-3, each sentence text is displayed in a triggerable manner in the text window 22 of the user interface.
It will be appreciated that each sentence text is displayed in the text window 22 of the user interface and clicking on any one sentence text can generate a trigger event for that sentence text.
Only the sentence texts in the presentation image may be listed in the text window 22, or the sentence texts may be listed together with the note information associated with them. For example, in the text window 22, each sentence text and its associated note information are displayed in a tree structure: a plurality of subtrees are included under the root node, the parent node of each subtree displays the sentence text, and the leaf nodes under the parent node display the note information associated with that sentence text.
Step S102a-4, in response to any sentence text being triggered, second identification information of the triggered sentence text is determined, historical communication information related to the sentence text is obtained from a communication data set based on the second identification information, a communication group for exchanging learning insights on the sentence text is generated based on the historical communication information, and the communication group is displayed in a first communication window 24 of the user interface.
It will be understood that the communication group for the current sentence text is displayed in the first communication window 24; when another sentence text is triggered, the communication group for that other sentence text is displayed in the first communication window 24 instead.
The embodiment of the disclosure provides a method for generating a communication group that avoids the cluttered, disordered communication information of the original communication group: the relevant communication information the reviewer cares about is displayed together in the communication group of the first communication window 24, so that the reviewer can browse and learn from it in one place, reducing repeated communication information and improving review efficiency.
In other specific embodiments, after the obtaining all the note information of each sentence text in the presentation image based on the first identification information, the method further includes the following steps:
step S102b-1, obtaining the region position information of the corresponding sentence text based on any one of the note information of each sentence text.
The note information not only comprises the second identification information of the sentence text and at least one recommended text of the sentence text, but also comprises at least one region position information of the sentence text.
Step S102b-2, the image size of the presentation image and the window size of the presentation window 21 are obtained.
Since the presentation image is adaptively enlarged or reduced based on the window size of the presentation window 21, this step adaptively repositions the region position information.
The window size includes a length value and a height value of the presentation window 21; the image size includes a length value and a height value of the presentation image.
Step S102b-3, converting the region position information of each sentence text into the corresponding position information in the presentation window 21 based on the ratio between the image size and the window size.
The conversion uses a length ratio and a height ratio. The length ratio is the ratio of the length value of the image to the length value of the window; the height ratio is the ratio of the height value of the image to the height value of the window. For example, suppose the presentation window 21 is 800 window units long and 600 window units high, and the presentation image is 1024 pixels long and 768 pixels high (both coordinate systems take the top-left vertex as the origin of coordinates, with the X axis as the horizontal axis pointing right and the Y axis as the vertical axis pointing down). The length ratio is then 1024/800 = 1.28 and the height ratio is 768/600 = 1.28. If one piece of region position information in the presentation image is characterized by 4 pixel positions — first pixel position information (50, 20), second pixel position information (120, 20), third pixel position information (50, 50), and fourth pixel position information (120, 50) — then dividing each X value by the length ratio 1.28 and each Y value by the height ratio 1.28 gives, after conversion, the first corresponding position information 211 (39, 16), the second corresponding position information 212 (94, 16), the third corresponding position information 213 (39, 39), and the fourth corresponding position information 214 (94, 39).
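The worked example above can be reproduced with a short sketch in Python. Rounding to the nearest window unit is assumed; the function name is ours.

```python
def to_window_coords(pixel_xy, image_size, window_size):
    """Map a pixel position in the presentation image to the
    corresponding position in the presentation window.

    Both coordinate systems use a top-left origin with X pointing
    right and Y pointing down, as described above.
    """
    ix, iy = image_size    # e.g. (1024, 768) pixels
    wx, wy = window_size   # e.g. (800, 600) window units
    length_ratio = ix / wx  # 1024 / 800 = 1.28
    height_ratio = iy / wy  # 768  / 600 = 1.28
    x, y = pixel_xy
    return (round(x / length_ratio), round(y / height_ratio))
```

Applying it to the four pixel positions of the example region reproduces the corresponding positions 211 through 214.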
Step S102b-4, in the presentation window 21, hidden controls of each sentence text are generated based on the corresponding position information of each sentence text, respectively.
It is understood that one hidden control is built based on each corresponding location information of the sentence text.
A control is a basic building block of the user interface and an encapsulation of properties and methods. The properties include the control's size, position, and visibility; the methods include the function executed when the control is triggered.
A hidden control is an invisible control that retains its control function, i.e., its visibility property is set to invisible. The rectangular area formed by the corresponding position information is the display area of the sentence text in the presentation window 21. In the embodiment of the disclosure, a hidden control is placed over this display area; after it is set, the reviewer can still see the content of the sentence text, but the sentence text gains the function of the hidden control, for example that of a button with a trigger function.
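As an illustration only (not tied to any particular UI toolkit), a hidden control can be modeled as an invisible rectangle that still responds to clicks. The class and field names below are assumptions for the sketch.

```python
class HiddenControl:
    """An invisible, clickable overlay placed over a sentence text's
    display rectangle in the presentation window."""

    def __init__(self, rect, on_trigger):
        self.rect = rect            # (x, y, width, height) in window units
        self.visible = False        # rendered invisible, but still active
        self.on_trigger = on_trigger

    def contains(self, x, y):
        rx, ry, rw, rh = self.rect
        return rx <= x < rx + rw and ry <= y < ry + rh

    def handle_click(self, x, y):
        """Invoke the trigger function if the click falls in the rectangle."""
        if self.contains(x, y):
            self.on_trigger()
            return True
        return False
```

A click inside the rectangle fires the control's trigger function while the sentence text beneath it remains fully visible.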
Step S102b-5, in response to any hidden control being triggered, second identification information of the sentence text corresponding to the hidden control is determined, historical communication information related to the sentence text is obtained from a communication data set based on the second identification information, a communication group for exchanging learning insights on the sentence text is generated based on the historical communication information, and the communication group is displayed in a first communication window 24 of the user interface.
It will be appreciated that the communication group for the current sentence text is displayed in the first communication window 24, and that when a hidden control associated with another sentence text other than the current sentence text is triggered, the communication group for the other sentence text is displayed in the first communication window 24.
The embodiment of the disclosure provides a method for generating a communication group that adds a touch-trigger effect to the sentence text regions of the presentation image, making the operation more direct and providing a good user experience. At the same time, it avoids the cluttered, disordered communication information of the original communication group: the relevant communication information the reviewer cares about is displayed together in the communication group of the first communication window 24, so that the reviewer can browse and learn from it in one place, reducing repeated communication information and improving review efficiency.
In order to enable the reviewer to participate in the communication for the sentence text, the method further comprises the steps of:
Step S105-1, in response to determining input communication text from the second communication window 25, acquiring second identification information and historical communication information related to the communication group of the second communication window 25.
Wherein the historical communication information includes: the communication text, the time point of determination, and the information of the determining reviewer.
It will be appreciated that the reviewer inputs, into the entry field 242 of the first communication window 24, the communication text gained from learning the sentence text in the communication group; when the reviewer determines to send the communication text to the text field 243, the second identification information and the historical communication information are acquired.
Step S105-2, storing the second identification information and the historical communication information into the communication data set.
The communication information published in a communication group is stored in the communication data set, so that each reviewer can timely update, from the communication data set, the communication information of the communication group in the first communication window 24, ensuring the real-time quality, consistency, and integrity of the data.
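Steps S105-2 through S105-4 can be sketched as follows. The data set is modeled here as a shared in-memory list, and the field names are assumptions for illustration.

```python
def publish(dataset, sentence_id, member, time_point, text):
    """Store newly sent communication text, keyed by the second
    identification information, into the communication data set
    (cf. step S105-2)."""
    dataset.append({"sentence_id": sentence_id, "member": member,
                    "time": time_point, "text": text})

def refresh_group(dataset):
    """Re-read all historical communication information and return it
    in chronological order for redisplay (cf. steps S105-3 and S105-4)."""
    return sorted(dataset, key=lambda rec: rec["time"])
```

Because every published message goes through the shared data set, a refresh in any reviewer's window reconstructs the same chronological view.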
Further, after the second identification information and the historical communication information are stored in the communication data set, the method further comprises the following steps:
step S105-3, acquiring all historical communication information from the communication data set.
Step S105-4, refreshing the communication group in the second communication window 25 based on all the historical communication information.
Each reviewer can timely acquire the communication information of the current presentation through the communication group in the second communication window 25, grasp global communication dynamics and timely participate in the discussion of the sentence text of interest.
Example 2
The disclosure further provides an embodiment of a device adapted to the above embodiment, configured to implement the method steps described in the above embodiment. Explanations of terms with the same names have the same meanings as in the above embodiment, and the device has the same technical effects as the above embodiment; these are not repeated herein.
As shown in fig. 4, the present disclosure provides an apparatus 400 for generating a communication group, including:
an identifier obtaining unit 401, configured to obtain, in a review mode of a presentation, first identifier information of a presentation image determined by a reviewer in response to the presentation image being displayed in a presentation window of a user interface;
a note acquiring unit 402, configured to acquire all note information of each sentence text in the presentation image based on the first identification information, where the note information includes at least second identification information for identifying the sentence text and at least one recommended text of the sentence text;
a display unit 403, configured to display the recommended text of each note information in a text window of the user interface in a triggerable manner;
A group generating unit 404, configured to determine, in response to a trigger of any one of the recommended texts, second identification information of the sentence text corresponding to the triggered recommended text, obtain, from a communication data set, historical communication information related to the sentence text based on the second identification information, generate, based on the historical communication information, a communication group for exchanging learning insights on the sentence text, and display the communication group in a first communication window of the user interface.
Optionally, the apparatus further includes:
a first region obtaining unit, configured to obtain, after all the note information of each sentence text in the presentation image is acquired based on the first identification information, the region position information of a corresponding sentence text based on any one piece of the note information of each sentence text, where the note information further includes at least one piece of region position information;
a text extraction unit for extracting sentence texts from the presentation image based on the region position information of each sentence text;
the text display unit is used for displaying each sentence text in a text window of the user interface in a triggerable manner;
the first triggering unit is used for responding to triggering of another sentence text, determining second identification information of the triggered sentence text, obtaining historical communication information related to the sentence text from a communication data set based on the second identification information, generating a communication group for communicating learning hearts of the sentence text based on the historical communication information, and displaying the communication group in a first communication window of the user interface.
Optionally, the apparatus further includes:
a second region obtaining unit, configured to obtain, after all the note information of each sentence text in the presentation image is acquired based on the first identification information, the region position information of a corresponding sentence text based on any one piece of the note information of each sentence text, where the note information further includes at least one piece of region position information;
the size acquisition unit is used for acquiring the image size of the presentation image and the window size of the presentation window;
a position conversion unit, configured to convert the region position information of each sentence text into the corresponding position information in the presentation window based on the ratio between the image size and the window size;
the control generation unit is used for respectively generating hidden controls of each sentence text based on the corresponding position information of each sentence text in the demonstration window;
the second triggering unit is used for responding to triggering of another hidden control, determining second identification information of the sentence text corresponding to the hidden control, obtaining historical communication information related to the sentence text from a communication data set based on the second identification information, generating a communication group for communicating learning hearts of the sentence text based on the historical communication information, and displaying the communication group in a first communication window of the user interface.
Optionally, the historical communication information includes: a history communication time point, history communication member information and history communication text;
the group generating unit 404 includes:
a first generation subunit, configured to generate a group member list in the first communication window based on the historical communication member information;
and a first display subunit, configured to display the historical communication time points and the historical communication text in the first communication window in chronological order.
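The first generation subunit and first display subunit above amount to building a deduplicated member roster and a chronological message timeline from the historical communication information. A minimal sketch, assuming a simple tuple-based record whose field names are hypothetical:

```python
from collections import namedtuple

# Hypothetical record mirroring the historical communication information
# named in the text: a time point, member information, and message text.
Message = namedtuple("Message", ["time_point", "member", "text"])

def build_group_view(history):
    """Return the communication window's contents: a deduplicated,
    sorted member list and the messages ordered by time point."""
    members = sorted({m.member for m in history})
    timeline = sorted(history, key=lambda m: m.time_point)
    return members, timeline
```

Sorting by the stored time point rather than by insertion order means the display remains correct even when records arrive out of sequence.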
Optionally, the apparatus further includes a communication unit;
the communication unit includes:
an identification obtaining subunit, configured to obtain, in the review mode, third identification information of the presentation in response to the reviewer selecting the presentation;
an information obtaining subunit, configured to obtain, from the communication data set, historical communication information related to the presentation based on the third identification information, where the historical communication information includes: a historical communication time point, historical communication member information, and historical communication text;
a window generation subunit, configured to generate a second communication window in the user interface, where the second communication window is used to display a communication group for exchanging learning insights about the presentation;
a second generation subunit, configured to generate a group member list in the second communication window based on the historical communication member information;
and a second display subunit, configured to display the historical communication time points and the historical communication text in the second communication window in chronological order.
Optionally, the apparatus further includes a communication unit;
the communication unit includes:
a first obtaining subunit, configured to obtain, in response to determining that communication text has been input in the first communication window, second identification information and historical communication information related to the communication group of the first communication window, where the historical communication information includes: the communication text, the time point of the determination, and reviewer information;
and a storage subunit, configured to store the second identification information and the historical communication information into the communication data set.
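The first obtaining subunit and storage subunit above amount to appending one new communication record to the communication data set under the sentence's second identification information. A minimal sketch, assuming (purely for illustration) that the data set is an in-memory mapping from identification information to lists of records:

```python
def store_communication(dataset, sentence_id, text, time_point, reviewer):
    """Append one communication record under the given identification
    information, creating the record list on first use and leaving any
    existing records untouched."""
    record = {"text": text, "time_point": time_point, "reviewer": reviewer}
    dataset.setdefault(sentence_id, []).append(record)
    return record
```

Because records are only appended, the storage mode of the original communication data is unchanged, consistent with the behavior the disclosure describes.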
Optionally, the apparatus further includes a group refresh unit;
the group refresh unit includes:
a second obtaining subunit, configured to obtain all the historical communication information from the communication data set after the second identification information and the historical communication information are stored in the communication data set;
and a refresh subunit, configured to refresh the communication group in the second communication window based on all the historical communication information.
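The group refresh unit above re-reads the stored records and rebuilds the second communication window's timeline. A minimal sketch, again assuming a hypothetical dictionary-based data set keyed by identification information:

```python
def refresh_group(dataset, sentence_ids):
    """Collect every stored record for the presentation's sentences and
    return them in chronological order for redisplay in the window."""
    records = [r for sid in sentence_ids for r in dataset.get(sid, [])]
    return sorted(records, key=lambda r: r["time_point"])
```

Rebuilding the view from the full data set after each store keeps the refreshed group consistent with newly added communication text without tracking incremental updates.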
The specific embodiments of the disclosure provide an apparatus for generating a communication group that, while leaving the storage mode of the original communication data unchanged and keeping the usage of the original communication group unchanged, automatically generates a corresponding communication group for the note information a reviewer is interested in, and displays in that communication group only the communication information related to the sentence text corresponding to the note information. This resolves the cluttered and disordered state of the communication information in the original communication group: the related communication information the reviewer cares about is displayed together in the communication group of the first communication window, so that the reviewer can browse and study it in a focused manner, thereby reducing repeated communication information and improving review efficiency.
Example 3
As shown in fig. 5, the present embodiment provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method steps described in the embodiments above.
Example 4
The disclosed embodiments provide a non-transitory computer storage medium storing computer executable instructions that perform the method steps described in the embodiments above.
Example 5
Referring now to fig. 5, a schematic diagram of an electronic device suitable for use in implementing embodiments of the present disclosure is shown. The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 5 is merely an example and should not be construed as limiting the functionality and scope of use of the disclosed embodiments.
As shown in fig. 5, the electronic device may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 501, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the electronic device are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
In general, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 508 including, for example, magnetic tape, hard disk, etc.; and communication means 509. The communication means 509 may allow the electronic device to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 shows an electronic device having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided; more or fewer means may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or from the storage means 508, or from the ROM 502. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 501.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The names of the units do not, in some cases, constitute a limitation of the units themselves.