Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The video shooting method in the driving process provided by the embodiments of the application can be applied to the application environment shown in fig. 1. The cabin software system 101 is in communication connection, through CAN (Controller Area Network) buses, with the occupant monitoring system OMS (Occupant Monitoring System) 102, the driving video recorder DVR (Driving Video Recorder) 103 and the vehicle-mounted interconnection terminal T-BOX (Telematics BOX) 104, respectively. The cabin software system 101 may acquire the set shooting plan information, determine the shooting device corresponding to each shooting instruction in the shooting instruction set (there may be one or more such shooting devices), and send the plurality of shooting instructions included in the shooting instruction set to at least one shooting device, which may be the occupant monitoring system 102, the driving video recorder 103 and/or the T-BOX 104, respectively. The T-BOX 104 may forward the received shooting instruction to the unmanned aerial vehicle 105. The unmanned aerial vehicle 105, the occupant monitoring system 102 and the driving video recorder 103 each shoot video based on the shooting instructions, and return the shot video to the cabin software system 101. The cabin software system 101 may include a data storage system for storing the captured video data. The cabin software system may comprise hardware of the vehicle such as a processor and a controller, and may also comprise a plurality of application software programs installed on the vehicle; through this hardware and software, the cabin software system can transmit data to and control each part of the vehicle.
In an exemplary embodiment, as shown in fig. 2, a video shooting method in the driving process is provided. The method is described, by way of illustration, as applied to the cabin software system 101 in fig. 1, and includes the following steps S202 to S204. Wherein:
Step S202, the set shooting plan information is acquired.
The shooting plan information comprises a shooting instruction set and a shooting start condition, wherein the shooting instruction set comprises shooting instructions corresponding to at least one of a plurality of different shooting devices, and the shooting devices comprise an unmanned aerial vehicle and vehicle-mounted video recording devices.
Specifically, the cabin software system may acquire the set shooting plan information from pre-stored shooting plan information, or may receive shooting plan information configured in advance by the user. The cabin software system selects the desired shooting plan information from a plurality of pieces of shooting plan information and takes it as the set shooting plan information. The shooting instruction set and the shooting start condition in the shooting plan information can both be modified through the cabin software system.
In one example, when each shooting device in the vehicle matches the set shooting plan information, the set plan information is determined to be valid shooting plan information. If any shooting device in the vehicle cannot match the shooting plan information to be set, that is, the shooting devices in the vehicle cannot meet the conditions for completing video shooting according to the shooting plan information, the shooting plan information to be set cannot be regarded as the set shooting plan information.
In step S204, if it is detected that the shooting start condition is met, a shooting instruction included in the shooting instruction set is sent to the corresponding shooting device, so as to control the unmanned aerial vehicle to fly and shoot the first video based on the shooting instruction, and/or control the vehicle-mounted video recording device to shoot the second video based on the shooting instruction.
The shooting start condition is used for determining whether the cabin software system sends each shooting instruction in the shooting instruction set to the corresponding shooting device. The shooting start condition may be a start condition set in advance by the user.
In one example, the shooting start condition may be that the vehicle traveling speed is less than a preset speed, each shooting device of the vehicle has no malfunction, and the current time is a shooting time preset by the user. The shooting start condition may also be that the cabin software system receives an instruction to start shooting issued by the user.
Specifically, the cabin software system detects the vehicle state, and when the vehicle state meets the shooting start condition, the cabin software system can identify the shooting instructions corresponding to the at least one shooting device in the shooting instruction set and send the shooting instructions to each shooting device. If the shooting instruction set only comprises shooting instructions of one shooting device, all shooting instructions are sent to that shooting device. If the shooting instruction set contains shooting instructions respectively corresponding to more than one shooting device, the shooting instructions are sent to the respective shooting devices. The cabin software system can send the shooting instruction corresponding to the unmanned aerial vehicle through the T-BOX, and CAN send the shooting instructions corresponding to the vehicle-mounted video recording devices through the CAN bus.
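The dispatch step described above can be sketched as follows. This is a minimal illustration only; the function names, device identifiers and channel interfaces are assumptions for the sketch, not part of the embodiment:

```python
# Hypothetical sketch: group the instructions in a shooting instruction set
# by target device, then route the drone's instructions through the T-BOX
# and the on-board devices' instructions over the CAN bus.
from collections import defaultdict

def dispatch(instruction_set, send_via_tbox, send_via_can):
    """Send each instruction to its corresponding shooting device."""
    by_device = defaultdict(list)
    for instr in instruction_set:
        by_device[instr["device"]].append(instr)
    for device, instrs in by_device.items():
        for instr in instrs:
            if device == "drone":
                send_via_tbox(instr)          # cabin system -> T-BOX -> drone
            else:
                send_via_can(device, instr)   # e.g. "dvr", "oms"
    return dict(by_device)

# Usage: record what each channel would carry.
tbox_log, can_log = [], []
grouped = dispatch(
    [{"device": "drone", "action": "take_off"},
     {"device": "dvr", "action": "record"},
     {"device": "drone", "action": "record"}],
    tbox_log.append,
    lambda dev, i: can_log.append((dev, i)),
)
```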
After receiving a shooting instruction, the unmanned aerial vehicle can parse the instruction, perform a take-off operation according to the instruction, and shoot video in the air according to the instruction. After receiving a shooting instruction, the vehicle-mounted video recording devices can parse the instruction and determine, from the plurality of vehicle-mounted video recording devices, the target shooting device that performs video shooting according to the instruction.
In one example, the shooting instruction of the unmanned aerial vehicle may control the unmanned aerial vehicle to perform flight operations such as take-off and landing; it may control shooting parameters such as the shooting angle, focal length and aperture of each lens of the unmanned aerial vehicle; it may control the start time and end time of video shooting by the unmanned aerial vehicle; and it may determine which of the plurality of lenses of the unmanned aerial vehicle is used for shooting.
In one example, the shooting instruction of the vehicle-mounted video recording device may control shooting parameters such as the shooting angle, focal length and aperture of each lens of the vehicle-mounted video recording device, and may control the start time and end time of video shooting by each vehicle-mounted video recording device.
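The fields a shooting instruction carries, as enumerated above (timing, lens selection, angle, focal length, aperture, and, for the drone, flight actions), can be collected into an illustrative data structure. All field names and default values here are assumptions for the sketch:

```python
# Hypothetical representation of one shooting instruction.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ShootingInstruction:
    device: str                    # "drone", "dvr", "oms", ...
    start_time: float              # shooting start, seconds from trigger
    end_time: float                # shooting end
    lens_id: int = 0               # which of several lenses to use
    angle_deg: float = 0.0         # camera orientation
    focal_length_mm: float = 24.0  # shooting focal length
    aperture_f: float = 2.8        # shooting aperture
    flight_actions: List[str] = field(default_factory=list)  # drone only

instr = ShootingInstruction(
    device="drone", start_time=0.0, end_time=30.0,
    lens_id=1, angle_deg=45.0, flight_actions=["take_off", "hover"],
)
```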
Optionally, when the unmanned aerial vehicle and the vehicle-mounted video recording devices shoot video, the shot pictures can be transmitted back to the cabin software system in real time through the communication connection, so that the cabin software system can display the video pictures in real time through a display terminal.
Optionally, the user can send shooting instructions to each shooting device through a control terminal, so that the cabin software system forwards the shooting instructions to the corresponding shooting devices for real-time adjustment. The control terminal may be a mobile terminal bound to the user, or a control terminal built into the vehicle.
In the above video shooting method in the driving process, the shooting instruction set and the shooting start condition are determined by acquiring the preset shooting plan information. The shooting instruction set may include shooting instructions respectively corresponding to at least one shooting device, and each shooting device having a corresponding shooting instruction may correspond to at least one shooting instruction. The shooting instructions are used for controlling the shooting devices to shoot. On this basis, when the cabin software system detects, during the running of the vehicle, that the current state of the vehicle meets the shooting start condition, the plurality of shooting instructions contained in the shooting instruction set are sent to the corresponding shooting devices, so that each shooting device shoots video according to its shooting instruction. The cabin software system can send the shooting instruction corresponding to the unmanned aerial vehicle in the shooting instruction set to the unmanned aerial vehicle, and send the shooting instruction corresponding to the vehicle-mounted video recording device in the shooting instruction set to the vehicle-mounted video recording device. The unmanned aerial vehicle can fly according to the shooting instruction and shoot the first video, and the vehicle-mounted video recording device can shoot the second video according to the shooting instruction.
When the vehicle meets the shooting start condition during running, the at least one shooting device contained in the shooting plan information can shoot automatically according to the shooting instruction set: the unmanned aerial vehicle can fly and shoot at the same time automatically, and the plurality of vehicle-mounted video recording devices can shoot at different angles inside and outside the vehicle. One or more of the plurality of shooting devices can thus be flexibly scheduled to automatically shoot videos from different positions and viewing angles, without the user manually controlling each shooting device, which improves the efficiency with which the user obtains videos that meet expectations.
In an exemplary embodiment, as shown in fig. 3, the specific implementation of the step of acquiring the set shooting plan information includes steps S302 to S306. Wherein:
Step S302, a plurality of recommended places in the travel path are determined based on the travel path between the travel start point and the travel end point of the vehicle.
The travel start point and the travel end point may each be determined by the user through a map system. The travel path may be a path that the map system determines to be available for the vehicle to travel from the travel start point to the travel end point. Each travel path in the map system can include a plurality of built-in recommended places; the recommended places can be downloaded to the local map system from a cloud information center, or set in the map system by the user. A recommended place may be a place where the plurality of shooting devices of the vehicle perform video shooting, for example a scenic spot, a popular check-in spot, or the like.
Specifically, the cabin software system can determine the current position of the vehicle as the travel start point through the positioning system, or can receive first address information input by the user and determine it as the travel start point. The cabin software system can receive second address information input by the user and determine it as the travel end point. The cabin software system may determine, based on the plurality of recommended places contained in the map system, whether a recommended place exists in the travel path of the vehicle or within a preset distance around the travel path. If a recommended place exists, it is displayed or broadcast through the client or the vehicle end, and at least one recommended place is determined as a target place based on a screening instruction received from the user.
In one example, a recommended place may be a place that the user presets in the map system through the cabin software system; it is stored in the map system so that, when the vehicle subsequently passes the place, the map system can present it as a recommended place that can be screened by the user.
Step S304, environment information corresponding to each recommended place is acquired, the recommendation degree of each recommended place is calculated based on the environment information, the framing preference of the current user of the vehicle and their respective influence coefficients, and at least one target place is determined from the plurality of recommended places based on the recommendation degrees.
The environment information is used for indicating the environmental conditions around a recommended place. If the environmental conditions are poor, for example lightning or storm, they are not conducive to video shooting; if the environmental conditions are good, video shooting can be performed. A reference video is usually captured under good environmental conditions, so whether the environmental conditions are good can be determined according to the reference video corresponding to the recommended place. The framing preference is used to determine the types of scenery that the current user of the vehicle is interested in, such as mountain, river, sky, road or grassland, so as to determine the recommended places that meet the framing preference.
Specifically, for each recommended place, the cabin software system may determine the weather information and temperature information of the current recommended place, and determine whether an ideal video can be captured at the current recommended place under the current weather and temperature conditions. The cabin software system can determine the scoring standard for the environment information according to the preset optimal environmental conditions of the current recommended place, and thereby determine the recommendation degree of the current recommended place in the current environment based on the scoring standard. For example, if the weather condition corresponding to place A is rain and the time condition is night, place A cannot be used to shoot a sunrise, since the sky is cloudy and there is no sun at night; the total score of place A is therefore low, and the recommendation degree corresponding to place A is determined to be "not recommended".
The cabin software system can also acquire the framing preference of the current user of the vehicle, that is, the scenery types preferred by the current user, score each recommended place based on the user's degree of preference for each scenery type, and obtain the recommendation degree corresponding to each recommended place. For example, suppose the current user of the vehicle prefers three scenery types, namely river, highway and forest. The cabin software system can determine the recommendation degree of each recommended place according to the scenery types of the place and the framing preference: if the reference video corresponding to a recommended place contains the three scenery types river, highway and forest, the recommendation degree of the place can be determined to be "highly recommended"; if the reference video corresponding to a recommended place shows desert and gobi, the recommendation degree of the place can be determined to be "not recommended".
On this basis, the cabin software system can combine, through the influence coefficients respectively corresponding to the environment information and the framing preference, the recommendation score determined based on the environment information and the recommendation score determined based on the framing preference, to obtain the comprehensive recommendation degree corresponding to each recommended place, and screen out at least one target place from the plurality of recommended places based on the comprehensive recommendation degrees.
In one example, the cabin software system may take a recommended place whose comprehensive recommendation degree is greater than a preset threshold as a target place.
Step S306, for each target place, corresponding shooting plan information is acquired.
The shooting instruction set in the shooting plan information corresponding to each target place is determined based on the reference video corresponding to the target place.
Specifically, for each target place, the cabin software system may acquire at least one reference video corresponding to the current target place through the cloud information center, and acquire the shooting plan information corresponding to each reference video. The cabin software system can receive a video screening instruction fed back by the user and, based on the video screening instruction, acquire the shooting plan information of the reference video corresponding to that instruction. The shooting plan information of each reference video may be plan information preconfigured by a user and uploaded and disclosed to the cloud information center for all users to use.
In one example, the cabin software system may create a recommended place in the map system, create preset shooting plan information corresponding to the recommended place, and upload the created recommended place and the preset shooting plan information corresponding to the recommended place to the cloud information center.
In this embodiment, the plurality of recommended places included in the travel path are determined from the travel path, set by the user, that the vehicle is to travel, and at least one target place is determined with the user's involvement. For each target place, the cabin software system can display a plurality of reference videos of the current target place and determine the target reference video based on the video screening instruction returned by the user. Finally, the shooting plan information corresponding to the target reference video is acquired through the cloud information center, so that shooting plan information meeting the user's requirements is obtained and the efficiency of acquiring shooting plan information is improved.
In an exemplary embodiment, the step of determining a plurality of recommended places in the travel path based on the travel path between the travel start point and the travel end point of the vehicle includes:
If the shooting place of a reference video and/or a shooting place preset by the current user of the vehicle matches the geographic area corresponding to the travel path, the shooting place is determined as a recommended place.
A cloud video library can store reference videos of a plurality of places, and each reference video carries the shooting place at which the video was shot.
Specifically, the cabin software system may determine the geographic area corresponding to the travel path. The cabin software system can take the travel path as a reference line and expand the whole reference line outward by a preset maximum distance, to obtain a geographic area surrounding the travel path. On this basis, the cabin software system can traverse the reference videos in the cloud video library, acquire the shooting place of each reference video, and determine the shooting places located in the geographic area as recommended places. In addition, the cabin software system can traverse the shooting places preset by the current user of the vehicle and take those located in the geographic area as recommended places. When determining the recommended places, the cabin software system may judge only the shooting places in the cloud video library, only the shooting places preset by the user, or both groups of shooting places together.
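The region test described above can be sketched as follows. This is a simplified, planar illustration under stated assumptions: the travel path is reduced to a sequence of coordinates and a shooting place is accepted when its distance to the nearest path point is within the preset maximum distance; a production system would buffer the full polyline and use geodesic distances:

```python
# Simplified sketch of matching shooting places against the geographic
# area surrounding the travel path.
import math

def within_region(place, path_points, max_dist):
    # Accept the place if any path point is within the preset distance.
    return any(math.dist(place, p) <= max_dist for p in path_points)

def recommended_places(cloud_places, user_places, path_points, max_dist):
    # Judge cloud-library places and user-preset places together.
    candidates = list(cloud_places) + list(user_places)
    return [p for p in candidates if within_region(p, path_points, max_dist)]

path = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
places = recommended_places(
    cloud_places=[(1.0, 0.3), (5.0, 5.0)],   # from the cloud video library
    user_places=[(2.0, -0.2)],               # preset by the current user
    path_points=path,
    max_dist=0.5,
)
```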
In this embodiment, places whose shooting locations fall within the preset geographic area are selected from the cloud video library as recommended places, and places whose shooting locations fall within the preset geographic area are likewise selected from the shooting places preset by the current user of the vehicle, so that recommended places meeting the requirements of the general public can be provided while recommended places meeting the requirements of the current user of the vehicle are added, improving the accuracy of determining recommended places.
In an exemplary embodiment, the environment information includes time information, weather information and/or traffic flow information. As shown in fig. 4, the step of calculating the recommendation degree of each recommended place based on the environment information, the framing preference of the current user of the vehicle and the respective corresponding influence coefficients, and determining at least one target place from the plurality of recommended places based on the recommendation degrees, includes steps S402 to S406. Wherein:
Step S402, a first recommendation score corresponding to each recommended place is determined based on the time information, weather information and traffic flow information corresponding to each recommended place.
Wherein the environment information includes time information, weather information and/or traffic flow information. The time information is used to represent the estimated time at which the vehicle arrives at the recommended place. The weather information is used to indicate the weather conditions when the vehicle arrives at the recommended place. The traffic flow information is used to indicate the current traffic flow of the recommended place. For example, the weather information of a recommended place may be information such as rain, wind, snow or sunny weather and the corresponding temperature and humidity; the time information may be the time in the local time zone, for example 8 p.m. Beijing time; and the traffic flow information may be the number of vehicles passing through the recommended place every 10 minutes.
Specifically, for each recommended place, a scoring standard corresponding to the current recommended place is acquired; scores respectively corresponding to the time information, weather information and traffic flow information of the current recommended place can be determined according to the scoring standard, and the first recommendation score of the current recommended place is determined based on the sum of these scores.
In one example, place A is a place for shooting the sunrise, and the preset optimal environmental conditions of place A are: weather information: sunny; time information: 6 to 8 a.m.; traffic flow information: small (4 vehicles/min). Suppose the actual weather condition of place A is rain, the time condition is night, and the traffic flow is medium (15 vehicles/min), and each item is scored out of 5 points. Based on the similarity between rain and sunny weather, the score corresponding to the weather information is determined to be 1 point; based on the similarity of the time information, its score is determined to be 0 points; and based on the similarity of the traffic flow information, its score is determined to be 3 points. The total score is thus 4 points, and the recommendation degree may be "not recommended".
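The per-item scoring in the example above can be sketched as follows. The similarity values in the table are assumptions taken from the example, not a defined standard:

```python
# Illustrative first-recommendation-score calculation: each of the three
# items (weather, time, traffic flow) is worth up to 5 points, scored by
# a similarity table against the place's preset optimal conditions.
SIMILARITY = {
    ("sunny", "rain"): 1,    # assumed similarity of rain to sunny
    ("6-8am", "night"): 0,   # night has no overlap with 6-8 a.m.
    ("small", "medium"): 3,  # medium traffic is fairly close to small
}

def item_score(optimal, actual, max_points=5):
    if optimal == actual:
        return max_points
    return SIMILARITY.get((optimal, actual), 0)

def first_recommendation_score(optimal, actual):
    return sum(item_score(optimal[k], actual[k])
               for k in ("weather", "time", "traffic"))

score = first_recommendation_score(
    optimal={"weather": "sunny", "time": "6-8am", "traffic": "small"},
    actual={"weather": "rain", "time": "night", "traffic": "medium"},
)
# score == 1 + 0 + 3 == 4, matching the example's total of 4 points
```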
Step S404, second recommendation scores respectively corresponding to the recommended places are determined based on the framing preference of the current user of the vehicle.
The framing preference is determined based on the reference videos of interest to the current user of the vehicle and the videos stored by the current user of the vehicle in the smart album and the cloud.
Specifically, the cabin software system may determine the scenery types of pictures and videos by parsing the pictures and videos stored by the current user of the vehicle at the vehicle end, the cloud and/or the user client, and determine the number of videos of each scenery type. The cabin software system can also acquire the reference videos that the current user of the vehicle has liked, collected or forwarded, together with their scenery types. The cabin software system then determines the framing preference of the current user of the vehicle for videos of different scenery types based on the numbers of videos and their scenery types. For example, the current user of the vehicle may prefer three scenery types, river, highway and forest, with preference values of 70, 20 and 10 points respectively.
On this basis, the cabin software system can determine the scenery types preferred by the current user of the vehicle according to the framing preference, score each recommended place based on the user's degree of preference for each scenery type, and obtain the second recommendation score corresponding to each recommended place. For example, if the reference video corresponding to a recommended place contains river, highway and forest at the same time, the second recommendation score of the place may be determined to be 100 points, i.e., 70+20+10=100. If the reference video corresponding to a recommended place shows desert and gobi, the second recommendation score of the place may be determined to be a low score, for example 20 points.
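Under the simplest reading of the example above, the second recommendation score is the sum of the user's preference values for each scenery type appearing in the place's reference video. The preference table below is an assumption drawn from the example; scenery types outside the table score 0 in this sketch (the example's low score of 20 for desert and gobi would imply small nonzero preferences for those types as well):

```python
# Illustrative second-recommendation-score calculation from the user's
# framing preference values for each scenery type.
PREFERENCES = {"river": 70, "highway": 20, "forest": 10}  # from the example

def second_recommendation_score(video_scenery_types, prefs=PREFERENCES):
    # Sum the preference value of every scenery type in the reference video.
    return sum(prefs.get(t, 0) for t in video_scenery_types)

full = second_recommendation_score(["river", "highway", "forest"])  # 100
low = second_recommendation_score(["desert", "gobi"])   # 0 under this rule
```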
Step S406, the recommendation degree of each recommended place is determined according to the first recommendation score, the second recommendation score and the corresponding influence coefficients, and a recommended place whose recommendation degree meets a preset screening condition is determined as a target place.
Specifically, for each recommended place, the cabin software system may perform a weighted summation of the first recommendation score and the second recommendation score of the current recommended place, using the influence coefficient corresponding to the environment information and the influence coefficient corresponding to the framing preference, to obtain the comprehensive score of the current recommended place, and determine the recommendation degree of the current recommended place according to the score interval in which the comprehensive score falls. Finally, according to the recommendation degrees of the recommended places, a recommended place with a high recommendation degree is selected from the plurality of recommended places as the target place. In one example, if the first recommendation score of the current recommended place is 10 points, the second recommendation score is 80 points, and the influence coefficients are 0.8 and 0.2 respectively, the comprehensive score of the current recommended place is determined to be 0.8×10+0.2×80=24 points. The score interval corresponding to 24 points is 20 to 30 points, i.e., the recommendation degree of the current recommended place is "generally recommended".
In one example, the cabin software system may perform a weighted summation of the first recommendation score and the second recommendation score of each recommended place to obtain its comprehensive score, rank the recommended places by comprehensive score to obtain a ranked sequence, and determine a preset number of recommended places at the head of the ranked sequence as target places.
In this embodiment, the first recommendation score corresponding to a recommended place can be determined according to the time information, weather information and traffic flow information of the recommended place, and the second recommendation score corresponding to the recommended place can be determined according to the framing preference of the current user of the vehicle. The first recommendation score and the second recommendation score are weighted and summed according to the influence coefficients to obtain the comprehensive score of the recommended place, the recommendation degree to which the comprehensive score belongs is determined, and the target places are determined from the plurality of recommended places based on the recommendation degrees, so that the target places preferred by the user can be determined more accurately.
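The weighted summation and ranking described above can be sketched as follows. The influence coefficients (0.8 for environment, 0.2 for framing preference) follow the example; the ranking function and its inputs are assumptions:

```python
# Illustrative comprehensive scoring and target-place selection.
def composite_score(first, second, w_env=0.8, w_pref=0.2):
    # Weighted sum of the two recommendation scores.
    return w_env * first + w_pref * second

def top_places(places, n):
    """places: list of (name, first_score, second_score) tuples."""
    ranked = sorted(places,
                    key=lambda p: composite_score(p[1], p[2]),
                    reverse=True)
    return [name for name, _, _ in ranked[:n]]

score = composite_score(10, 80)   # 0.8*10 + 0.2*80 = 24 points
targets = top_places(
    [("A", 10, 80), ("B", 12, 90), ("C", 3, 10)], n=2)
```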
In an exemplary embodiment, the shooting start condition includes a distance start condition. As shown in fig. 5, the step of sending the shooting instructions included in the shooting instruction set to the corresponding shooting devices in the case that the shooting start condition is detected to be met includes steps S502 to S504. Wherein:
Step S502, for each target place, if the distance between the vehicle and the target place meets a preset distance value, shooting prompt information is output through an intelligent voice assistant and/or a central control screen.
The shooting start condition includes a distance start condition; that is, when the distance between the current position of the vehicle and the target place meets a certain preset distance, the vehicle is determined to meet the distance start condition. The shooting prompt information is used for asking the user whether shooting is to be performed at the target place according to the shooting instruction set corresponding to the reference video.
Specifically, when the distance between the current position of the vehicle and the target place is lower than a first preset distance value, the cabin software system can play the shooting prompt information through the intelligent voice assistant, and can display the shooting prompt information through the central control screen or another screen capable of displaying pictures. When the cabin software system outputs the shooting prompt information, the vehicle has not yet arrived at the target place.
In step S504, after feedback information confirming shooting is received for the shooting prompt information, if the distance between the vehicle and the target place is further reduced and meets the distance start condition, each shooting instruction in the shooting instruction set corresponding to the target place is sent to the corresponding shooting device.
The feedback information may include feedback information for rejecting photographing and feedback information for confirming photographing.
Specifically, the cabin software system may receive the feedback information for the shooting prompt information through interaction means such as the intelligent voice assistant and the central control screen. If the feedback information is determined to be feedback information confirming shooting, and the distance between the vehicle and the target place is further reduced below a second preset distance value, it is determined that the distance between the vehicle and the target place meets the distance start condition. The cabin software system then sends each shooting instruction in the shooting instruction set corresponding to the target place to the corresponding shooting device, and each shooting device starts shooting after receiving its instruction.
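The two-stage distance trigger of steps S502 and S504 can be sketched as follows: below a first threshold the system prompts the user; below a smaller second threshold, and only after the user has confirmed, it sends the instruction set. The threshold values and action names are assumptions for the sketch:

```python
# Illustrative two-threshold distance trigger.
def distance_trigger(distance, confirmed, d_prompt=2000.0, d_start=500.0):
    """Return the action the cabin software system should take.

    distance  -- metres between the vehicle and the target place
    confirmed -- whether the user has confirmed shooting
    """
    if distance <= d_start and confirmed:
        return "send_instructions"   # distance start condition met
    if distance <= d_prompt and not confirmed:
        return "prompt_user"         # output shooting prompt information
    return "wait"

events = [distance_trigger(3000.0, False),  # still far away: wait
          distance_trigger(1500.0, False),  # within first threshold: prompt
          distance_trigger(400.0, True)]    # confirmed and close: shoot
```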
In this embodiment, whether the distance starting condition is satisfied is determined by the distance between the current position of the vehicle and the target location, and each shooting instruction in the shooting instruction set corresponding to the target location is sent to a plurality of shooting devices when the distance starting condition is satisfied, so that video shooting is accurately performed on the target location, efficiency of acquiring effective video is improved, shooting difficulty of a user is reduced, and experience of shooting video by the user is improved.
In an exemplary embodiment, as shown in fig. 6, the video capturing method during driving further includes steps S602 to S604. Wherein:
step S602, analyzing the imported reference video to obtain shooting mode information of the reference video.
The shooting mode information at least comprises the corresponding shooting device, camera position and shooting angle. The shooting device may be an unmanned aerial vehicle or a vehicle-mounted video device; the unmanned aerial vehicle may carry a plurality of cameras, and the vehicle-mounted video device may include a plurality of camera devices such as the automobile data recorder DVR, the member monitoring system OMS, the reversing camera and the exterior cameras of the vehicle. The camera position determines the positional relation between the unmanned aerial vehicle and the vehicle, or identifies a specific camera of a given vehicle-mounted video device. For example, the camera position may direct the unmanned aerial vehicle to fly to a preset position for shooting, or may select the camera of the automobile data recorder DVR facing the vehicle head as the shooting device. The shooting angle determines the orientation of the camera of the shooting device. After the unmanned aerial vehicle or another shooting device determines its shooting position, it can adjust the orientation of the camera so as to adjust the shooting angle: the unmanned aerial vehicle can change the shooting angle by rotating itself, while other shooting devices can adjust the shooting angle by rotating the camera.
Specifically, the cockpit software system may receive a reference video uploaded by the user, analyze the reference video, determine the shooting device, camera position and shooting angle corresponding to each video frame in the reference video, and generate an analysis sequence containing the shooting device, camera position and shooting angle. The analysis sequence may include analysis information for a plurality of video frames, and the analysis information for each frame may include the shooting device, camera position and shooting angle corresponding to that video frame.
In one example, the cockpit software system parses the reference video and may determine the shooting device, camera position and shooting angle frame by frame. For example, if the reference video has 100 frames, the shooting device, camera position and shooting angle in each of the 1st to 100th frames are obtained by parsing. For each frame, the cockpit software system may determine the shooting device of the picture of the current video frame through a first image recognition algorithm and store the predicted shooting device in the analysis sequence. After the shooting device is determined, the cabin software system may determine the camera position corresponding to the shooting device under the current video frame through a second image recognition algorithm and store it in the analysis sequence, and may determine the shooting angle corresponding to the current video frame through a third image recognition algorithm and store it in the analysis sequence. At this point, the analysis sequence includes the shooting device, camera position and shooting angle corresponding to the current video frame. Each video frame is traversed in turn to obtain an analysis sequence covering all video frames, which is taken as the shooting mode information.
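The per-frame traversal described above can be sketched as follows. The three recognition steps stand in for the first, second and third image recognition algorithms; their implementations here are stubs, and all names are illustrative rather than from the application.

```python
# Illustrative sketch of building the per-frame analysis sequence.
# The recognition functions are stubs for the three image recognition
# algorithms mentioned in the text.

def recognize_device(frame):            # first image recognition algorithm (stub)
    return "drone"

def recognize_position(frame, device):  # second image recognition algorithm (stub)
    return "front-high"

def recognize_angle(frame, device):     # third image recognition algorithm (stub)
    return 35.0

def parse_reference_video(frames):
    """Traverse every video frame and record device, camera position and angle."""
    sequence = []
    for frame in frames:
        device = recognize_device(frame)
        sequence.append({
            "device": device,
            "position": recognize_position(frame, device),
            "angle": recognize_angle(frame, device),
        })
    return sequence  # the shooting mode information
```

Each entry in the returned sequence corresponds to one video frame, matching the structure of the analysis sequence described in the text.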
In one example, the cockpit software system may receive the reference video from the user's client through a wireless communication connection; alternatively, an external storage medium may be connected through a communication interface, and the reference video on the storage medium transmitted to the cockpit software system.
In one example, the shooting mode information may also include camera parameters of the shooting device, which may include lens focal length, lens focus, and the like.
In step S604, corresponding shooting plan information is generated based on the shooting mode information of the reference video.
Specifically, the cabin software system may segment the shooting mode information of the reference video by time points and determine the shooting device, camera position and shooting angle corresponding to each time point. For example, for shooting mode information of 100 frames, every 20 frames may be taken as one interval, yielding six time points at the 1st, 20th, 40th, 60th, 80th and 100th frames, and the shooting device, camera position and shooting angle corresponding to each of the six time points are determined.
On this basis, the cabin software system can create an initial shooting instruction from the shooting device, camera position and shooting angle corresponding to the 1st frame, and for each time point determine whether, and how, the shooting device, camera position and shooting angle change between the current time point and the next time point. If the shooting device changes, a device switching instruction corresponding to the shooting device at the next time point is generated; if the camera position changes, a camera position switching instruction from the current time point to the next time point is generated; if the shooting angle changes, a device rotation instruction from the current time point to the next time point is generated. These instructions are combined flexibly according to the changes to obtain the shooting instruction set of the reference video. The cabin software system can select a required shooting start condition from preset shooting start conditions, or configure the shooting start condition itself, and determines the shooting instruction set and the shooting start condition as the shooting plan information corresponding to the reference video.
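The change-detection step above, which diffs consecutive time points to emit switching and rotation instructions, can be sketched as follows. The instruction names are assumptions chosen for illustration.

```python
# Illustrative sketch of generating a shooting instruction set by comparing
# consecutive sampled time points. Instruction names are hypothetical.

def build_instruction_set(keyframes):
    """keyframes: list of dicts with 'device', 'position' and 'angle',
    sampled at fixed time points (e.g. every 20 frames)."""
    first = keyframes[0]
    instructions = [("init_shoot", first["device"], first["position"], first["angle"])]
    for prev, cur in zip(keyframes, keyframes[1:]):
        if cur["device"] != prev["device"]:
            instructions.append(("switch_device", cur["device"]))
        if cur["position"] != prev["position"]:
            instructions.append(("switch_position", cur["position"]))
        if cur["angle"] != prev["angle"]:
            instructions.append(("rotate_device", cur["angle"]))
    return instructions
```

The initial instruction covers the 1st frame; later instructions are emitted only when something changes, mirroring the flexible combination described in the text.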
In this embodiment, the shooting mode information of each frame in the reference video is obtained by analyzing the imported reference video, and the change process of the shooting mode information corresponding to the plurality of video frames is judged to obtain the shooting instruction set corresponding to the reference video, so as to obtain the shooting plan information corresponding to the reference video. The shooting plan information of the video can be automatically acquired through analyzing the reference video, so that the shooting plan information which meets the requirements of users can be obtained, and the accuracy of generating the shooting plan information is improved.
In an exemplary embodiment, the vehicle-mounted video recording device comprises a vehicle recorder DVR and a member monitoring system OMS, and the specific implementation process of the step of sending the shooting instruction contained in the shooting instruction set to the corresponding shooting device when the shooting start condition is detected to be met under the condition that the shooting instruction set comprises a first shooting instruction of the unmanned aerial vehicle, a second shooting instruction of the DVR and a third shooting instruction of the OMS comprises the following steps:
Under the condition that the shooting starting condition is detected to be met, a first shooting instruction is sent to the unmanned aerial vehicle, a second shooting instruction is sent to the DVR, and a third shooting instruction is sent to the OMS.
Specifically, when the cabin software system detects that the vehicle state meets the set shooting start condition, it can determine, through the shooting device identifiers, the first shooting instruction corresponding to the unmanned aerial vehicle, the second shooting instruction corresponding to the DVR, and the third shooting instruction corresponding to the OMS in the shooting instruction set. The cockpit software system sends the first shooting instruction to the T-BOX, and the T-BOX forwards it to the unmanned aerial vehicle. The cabin software system sends the second shooting instruction to the DVR through the CAN bus, and sends the third shooting instruction to the OMS through the CAN bus.
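The routing described above (T-BOX for the unmanned aerial vehicle, CAN bus for the DVR and OMS) can be sketched as a simple lookup by shooting device identifier. The channel and device names are illustrative labels, not actual bus addresses.

```python
# Illustrative sketch of routing each instruction in the set to its device.
# Channel names mirror the text but are hypothetical labels.

ROUTES = {"drone": "t-box", "dvr": "can-bus", "oms": "can-bus"}

def dispatch(instruction_set):
    """instruction_set: list of (device_id, instruction) pairs.
    Returns the messages that would be sent on each channel."""
    sent = []
    for device_id, instruction in instruction_set:
        channel = ROUTES[device_id]
        sent.append((channel, device_id, instruction))
    return sent
```

A set containing all three instructions would produce one T-BOX message for the drone and two CAN bus messages for the DVR and OMS.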
Optionally, the vehicle-mounted video recording device further comprises a safety monitoring system, and the specific implementation process of the step of sending the shooting instruction contained in the shooting instruction set to the corresponding shooting device when the shooting start condition is detected to be met in the case that the shooting instruction set further comprises a fourth shooting instruction of the safety monitoring system comprises the following steps:
Under the condition that shooting starting conditions are met, a first shooting instruction is sent to the unmanned aerial vehicle, a second shooting instruction is sent to the DVR, a third shooting instruction is sent to the OMS, and a fourth shooting instruction is sent to the safety monitoring system, wherein the safety monitoring system comprises a front camera, a rear camera and a side view camera of the vehicle.
In this embodiment, by splitting the shooting instructions of each shooting device in the shooting instruction set to obtain multiple groups of shooting instructions, and sending each group of shooting instructions to the corresponding shooting device, the multiple shooting devices are scheduled to perform video shooting at the same time, so that the multiple shooting devices can be linked and shot to obtain the required video, the efficiency of acquiring the video required by the user is improved, and the diversity of the video obtained by shooting is increased.
It will be appreciated that in other embodiments, the shooting instructions included in the shooting instruction set of one target location may cover other situations: for example, there may be only one of the first shooting instruction, the second shooting instruction and the third shooting instruction, or any two of them. That is, the shooting instruction set of one target location may correspond to multiple modes such as "all shooting together", "unmanned aerial vehicle shooting", "DVR shooting", "OMS shooting" and "DVR and OMS shooting together", and when the shooting start condition of the target location is satisfied, each group of shooting instructions in the shooting instruction set is sent to the corresponding shooting device.
In an exemplary embodiment, as shown in fig. 7, the video capturing method during driving further includes steps S702 to S708. Wherein:
Step S702, receiving the first video and/or the second video, and storing the first video and/or the second video as videos to be edited in the intelligent album.
The intelligent album can be used for storing the videos to be edited acquired by each shooting device in the vehicle. The intelligent album may include a plurality of preset clip templates and sample videos corresponding to the preset clip templates.
Specifically, the cockpit software system may receive the first video and the second video that are shot by the unmanned aerial vehicle, the DVR and the OMS, and take the first video and the second video as the video to be edited.
Step S704, determining a set number of target clipping templates corresponding to the video preference from preset clipping templates of the intelligent photo album based on the video preference of the current user of the vehicle.
The video preference can be determined from the user's interaction behavior with each preset clip template; the interaction behavior may be opening, collecting or downloading a preset clip template, and the like.
Specifically, the cockpit software system may determine the video preference of the user based on the user's historical interaction behavior, and screen out, from the plurality of preset clip templates contained in the intelligent album, the set number of target clip templates that the user is most likely to prefer, based on the video preference and a recommendation algorithm.
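One simple way to realize this screening is to score templates from the historical interaction behaviors (open, collect, download) and keep the top-scoring ones. The weights and function names below are assumptions for illustration; the application does not specify the recommendation algorithm.

```python
# Illustrative preference scoring over preset clip templates.
# Behavior weights are hypothetical.

WEIGHTS = {"open": 1, "collect": 3, "download": 5}

def select_target_templates(interactions, set_number):
    """interactions: list of (template_id, behavior) records.
    Returns the set_number highest-scoring template ids."""
    scores = {}
    for template_id, behavior in interactions:
        scores[template_id] = scores.get(template_id, 0) + WEIGHTS[behavior]
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:set_number]
```

Heavier behaviors (downloading) count more than lighter ones (opening), so frequently downloaded templates rise to the top of the recommendation.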
Step S706, video editing is carried out on the video to be edited based on each target clip template, and a set number of candidate videos are obtained.
Specifically, the cockpit software system can respectively edit the video to be edited through each target clip template in the intelligent album, and each target clip template can generate at least one candidate video so as to obtain a plurality of candidate videos.
Step S708, the set number of candidate videos are displayed through the user client and/or the central control screen so as to be selected by the current user of the vehicle.
Specifically, the cockpit software system may present a plurality of candidate videos through a user client or a central control screen in the vehicle, and the user may select at least one candidate video from the plurality of candidate videos as a desired video. After the cabin software system receives the candidate video screened by the user, the screened candidate video is saved and sent to the user client.
Optionally, the user's video preference may be updated as the user screens the candidate videos, so that subsequent recommendations are closer to the preset clip templates matching the user's video preference.
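This feedback step can be sketched as folding the user's selections back into a preference score table, so that templates whose candidate videos the user keeps are boosted for later recommendations. The weight value and names are assumptions.

```python
# Illustrative sketch of updating preference scores when the user keeps a
# candidate video produced by a template. The "select" weight is hypothetical.

def update_preferences(scores, selected_template_ids, select_weight=4):
    """Return a new score table with the chosen templates boosted."""
    updated = dict(scores)
    for template_id in selected_template_ids:
        updated[template_id] = updated.get(template_id, 0) + select_weight
    return updated
```

Keeping the update pure (returning a new table) makes it easy to log or roll back preference changes.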
In the embodiment, the shot video is automatically clipped through the preset clipping templates in the intelligent photo album to obtain the candidate video corresponding to the sample of the preset clipping templates, so that the candidate videos generated by a plurality of preset clipping templates are automatically obtained, the efficiency of generating the candidate video can be improved, the time of the user for clipping the video by himself/herself is reduced, the video can be shot and clipped in the driving process, and the shooting experience of the user in the driving process is improved.
In an exemplary embodiment, the video shooting method in the driving process further comprises the step of obtaining shooting instructions set by a user. The cockpit software system may receive the voice control instructions generated by the intelligent voice assistant and generate shooting instructions based on the voice control instructions. The cabin software system can also receive a key response instruction generated by the steering wheel and generate a shooting instruction based on the key response instruction and the preconfigured key function. The cabin software system can receive the click response instruction of the central control screen and determine the shooting instruction corresponding to the click response instruction based on the click response instruction and the preconfigured multiple shooting instructions.
In one exemplary embodiment, as shown in FIG. 8, a video capture system is provided, the system comprising a cockpit software system 801, a T-BOX 804 communicatively coupled to the cockpit software system 801, and a plurality of different capture devices, the plurality of different capture devices comprising an unmanned aerial vehicle 805 and an on-board video device, the on-board video device comprising an automobile data recorder DVR 803 and a member monitoring system OMS 802, the unmanned aerial vehicle 805 communicatively coupled to the cockpit software system 801 via the T-BOX 804, wherein:
The cabin software system is used for acquiring set shooting plan information, wherein the shooting plan information comprises a shooting instruction set and shooting starting conditions, the shooting instruction set comprises shooting instructions corresponding to at least one of a plurality of different shooting devices, and the shooting instructions contained in the shooting instruction set are sent to the corresponding shooting devices when the shooting starting conditions are detected to be met, so that the unmanned aerial vehicle is controlled to fly and shoot a first video based on the shooting instructions, and/or the vehicle-mounted video recording device is controlled to shoot a second video based on the shooting instructions.
In one example, the cockpit software system may obtain the set shooting plan information from pre-stored shooting plan information, or may receive shooting plan information configured in advance by the user. The cabin software system selects the desired shooting plan information from a plurality of shooting plan information and takes it as the set shooting plan information. The shooting instruction set and the shooting start condition in the shooting plan information can be modified through the cabin software system.
The cabin software system detects the vehicle state, and when the vehicle state meets the shooting start condition, it can determine the shooting instruction corresponding to at least one shooting device in the shooting instruction set and send each shooting instruction to the corresponding shooting device; the shooting instruction set may correspond to one or more shooting devices. The cabin software system can send the shooting instruction corresponding to the unmanned aerial vehicle through the T-BOX, and can send the shooting instructions corresponding to the DVR and the OMS to the DVR and the OMS respectively through the CAN bus.
After receiving the shooting instruction, the unmanned aerial vehicle can read the shooting instruction, take-off operation is carried out according to the shooting instruction, and video shooting is carried out in the air according to the shooting instruction. After receiving the shooting instruction, the DVR and OMS can read the shooting instruction and determine the target shooting equipment to shoot the video according to the shooting instruction.
In this embodiment, through a cabin software system included in a video shooting system during driving, preset shooting plan information can be obtained, and a shooting instruction set and shooting starting conditions are determined. In the running process of the vehicle, when the cabin software system detects that the current state of the vehicle meets the shooting starting condition, at least one shooting device corresponding to a plurality of shooting instructions contained in the shooting instruction set is determined, and the plurality of shooting instructions contained in the shooting instruction set are sent to the corresponding shooting device, for example, to an unmanned aerial vehicle, a DVR and/or an OMS, so that the unmanned aerial vehicle, the DVR and/or the OMS can respectively shoot videos according to the shooting instructions. When the cabin software system sends a shooting instruction to the unmanned aerial vehicle, the shooting instruction is sent to the T-BOX, and then the shooting instruction is sent to the unmanned aerial vehicle. The unmanned aerial vehicle can fly according to the shooting instruction and shoot a first video. The vehicle-mounted video recording device can shoot the second video according to the shooting instruction. When the vehicle meets shooting starting conditions in the running process, each shooting device can automatically shoot according to a shooting instruction set, the unmanned aerial vehicle can automatically fly and shoot simultaneously, and a plurality of vehicle-mounted video recording devices can shoot different shooting angles inside and outside the vehicle, so that videos with various positions and various visual angles can be automatically shot through a plurality of shooting devices, shooting by each shooting device is not required to be controlled manually, and therefore efficiency of a user for acquiring expected videos is improved.
In one exemplary embodiment, as shown in FIG. 9, the video capture system further includes a map system 903 and a positioning system 902 communicatively coupled to the cockpit software system 901.
The map system is used for determining at least one target place contained in the driving path based on the driving path between the driving starting point and the driving end point of the vehicle and sending the at least one target place to the cabin software system;
the positioning system is used for determining the distance between the vehicle and the target place and sending the distance to the cabin software system;
The cabin software system is further used for determining shooting instruction sets in shooting plan information corresponding to each target place based on the reference video corresponding to the target place, outputting shooting prompt information under the condition that the distance meets a preset distance value, and sending each shooting instruction in the shooting instruction set corresponding to the target place to corresponding shooting equipment if the distance between the vehicle and the target place is further reduced and the distance starting condition is met after receiving feedback information for confirming shooting of the shooting prompt information.
In one example, the map system and the positioning system may each be communicatively connected to the cabin software system via the CAN bus. The map system may determine the travel path to be traveled by the vehicle from the travel start point and travel end point of the vehicle. The map system may check whether recommended places exist along the travel path, and if so, send all recommended places to the cabin software system. The cabin software system can screen at least one target location from the recommended places, acquire the reference video corresponding to the target location, and acquire the shooting plan information corresponding to the reference video. The positioning system can determine the distance between the current position of the vehicle and the target location; if the distance is smaller than a preset distance value, the cabin software system outputs the shooting prompt information. After receiving feedback information confirming shooting of the shooting prompt information, if the distance between the current position of the vehicle and the target location further decreases and meets the distance start condition, each shooting instruction in the shooting instruction set corresponding to the target location is sent to the corresponding shooting device.
In this embodiment, the current position of the vehicle and the distance between the current position of the vehicle and the target location may be obtained through the positioning system, so as to assist the cabin software system in determining whether the vehicle meets the distance starting condition. The map system can acquire the driving path of the vehicle to be driven and the recommended place through which the driving path possibly passes, the cabin software system screens the target place and acquires the shooting plan information corresponding to the target place, so that the shooting plan information meeting the requirements of the user is obtained, and the efficiency of acquiring the shooting plan information is improved.
In an exemplary embodiment, the video shooting system in the driving process further comprises an intelligent photo album, wherein the intelligent photo album is used for storing the first video and/or the second video as videos to be edited, determining a set number of target clip templates corresponding to video preferences from preset clip templates based on video preferences of users, and performing video editing on the videos to be edited based on each target clip template to obtain a set number of candidate videos.
In one example, the smart album may store the videos to be edited acquired by each shooting device in the vehicle; for example, the cockpit software system may receive the videos shot by the unmanned aerial vehicle, the DVR and the OMS and store them into the smart album as videos to be edited. The smart album can determine the video preference of the user according to the user's historical interaction behavior, and screen out, from the plurality of preset clip templates it contains, the set number of target clip templates that the user is most likely to prefer, based on the video preference and a recommendation algorithm. The smart album may then call the target clip templates to edit the videos to be edited, and each target clip template may generate at least one candidate video.
In this embodiment, a plurality of preset editing templates conforming to video preferences may be provided through the intelligent album, and a target editing template may be determined from the plurality of preset editing templates, so that the intelligent album may edit the video to be edited through the target editing template, thereby automatically obtaining a plurality of candidate videos. Therefore, the intelligent photo album can improve the efficiency of generating candidate videos, reduce the time for the user to clip videos by himself, shoot videos and clip videos in the driving process, and improve the shooting experience of the user in the driving process.
The following describes in detail a specific implementation procedure of the video shooting method in the driving process with reference to a specific embodiment, including the following contents:
First, the connection relationship and the related roles between the components are determined as follows:
(1) The interaction modes for shooting with the unmanned aerial vehicle, the DVR and the OMS can be configured through the cabin software system: for example, the unmanned aerial vehicle can be interconnected with the cabin software system through the T-BOX, or interconnected with the user's client through the cloud information center platform corresponding to the cabin software system, and the DVR and the OMS can be interconnected through the CAN bus. The interaction modes for shooting with the unmanned aerial vehicle, the DVR and the OMS can be determined based on these connection relations.
(2) The cameras of the unmanned aerial vehicle, the DVR, the OMS and other devices can be controlled to shoot by means of the intelligent voice assistant, the multifunction steering wheel, IVI (In-Vehicle Infotainment) screen touch control, and the like.
(3) The pictures returned in real time by each shooting device can be watched in real time through the IVI screen and the owner APP in the user client.
(4) The starting and pausing of each shooting device can be controlled through the IVI screen and the owner APP in the user client, and the camera position and composition can be custom-controlled;
(5) Near-field communication between the cabin software system and the owner APP can be realized through the Bluetooth system, enabling the video picture transmission described in item (3).
(6) Remote communication between the cabin software system and the owner APP can be realized through the TSP (Telematics Service Provider), the T-BOX and the cloud information center, likewise enabling the video picture transmission described in item (3). The TSP service is a business model providing connection, management and application services for vehicle-related data.
(7) The intelligent album can be used for storing video materials shot from different camera positions by the unmanned aerial vehicle, the DVR, the OMS and other devices.
(8) The intelligent album can automatically analyze the shooting mode (including camera position, shooting angle, camera movement and the like) of a template video imported by the user, and convert it into shooting plan information.
(9) The video materials can be automatically clipped into a travel VLOG through the intelligent album (if a reference video exists in the intelligent album, clipping follows the preset clip template corresponding to the reference video in the same style; if not, clipping follows the initial clip template built into the intelligent album).
(10) Scenic check-in spots (recommended places) on the vehicle travel route can be identified by the map navigation system and recommended to the user for viewing and shooting.
(11) The vehicle position can be located in real time through the positioning system and transmitted to the map system, and the position information of the recommended places is shared with the intelligent album;
(12) The intelligent photo album in the cabin software system can be used for controlling shooting of the unmanned aerial vehicle, the DVR and the OMS, namely, the intelligent photo album can be used for generating a plurality of shooting instructions for controlling shooting of the unmanned aerial vehicle, the DVR and the OMS.
The specific implementation means of the specific embodiment are as follows:
1. The unmanned aerial vehicle establishes interconnection with the vehicle and the DVR system and OMS system of the vehicle.
The unmanned aerial vehicle establishes a signal transmission channel with the cabin software system of the vehicle through the T-BOX, maintaining the cabin software system's flight control, shooting picture control and picture transmission for the unmanned aerial vehicle. Through the intelligent cabin software system, the vehicle can interconnect the unmanned aerial vehicle with the DVR system and the OMS system of the vehicle on the basis of the CAN bus. When the user issues a shooting request, or the vehicle issues one automatically, the shooting request is first transmitted to the cabin software system, which generates the shooting instruction corresponding to the request and sends it onto the CAN bus. The CAN bus transmits the shooting instruction to the DVR system, the OMS system and the T-BOX respectively, and the T-BOX forwards it to the unmanned aerial vehicle. The DVR and OMS systems shoot according to the shooting instruction, and the unmanned aerial vehicle takes off and shoots video according to the shooting instruction while maintaining real-time communication with the vehicle through the T-BOX. During shooting, all shooting instructions issued by the user are transferred from the cabin software system through the CAN bus and the T-BOX.
2. The ways in which the user controls shooting by the unmanned aerial vehicle, the DVR system and the OMS system.
Specifically, as shown in fig. 10, the user can control the start and end of shooting by means of the intelligent voice assistant 1003, the multifunction steering wheel 1002, the IVI screen 1004, and the like. The user may set in the cabin software system which of the three above modes to use for control.
When the user uses the intelligent voice assistant, the assistant can provide fixed phrases that tell the user how to control shooting, such as the voice control instructions "all shoot together", "DVR shooting", "OMS shooting", "unmanned aerial vehicle and DVR shoot together", "unmanned aerial vehicle and OMS shoot together", "DVR and OMS shoot together", and the like. In addition, the user can customize voice control instructions for interactive control in the intelligent voice assistant, and the assistant can continuously learn the user's voice control instructions to continuously improve recognition accuracy.
In one example, during the travel of the vehicle, the intelligent voice assistant may acquire the voice content in the cabin in real time, and upon recognizing exclamations such as "wow", "nice", "let's capture this" or similar expressions, may send a shooting request to the cabin software system, which will then prompt the user via the intelligent voice assistant whether to shoot and in which way.
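In one illustrative example, the keyword spotting behind this voice trigger may be sketched as below; the trigger phrases are assumed placeholders rather than the actual recognition lexicon, which the embodiment leaves to the voice assistant.

```python
# Hypothetical trigger lexicon; in practice the assistant would use a
# learned model, and the phrases would be customizable by the user.
TRIGGER_PHRASES = {"wow", "nice", "let's capture this"}

def should_request_shot(utterance: str) -> bool:
    """Return True when the utterance contains a trigger phrase."""
    text = utterance.lower()
    return any(phrase in text for phrase in TRIGGER_PHRASES)
```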
When the multifunctional steering wheel is used for control, the user can quickly control shooting through a custom button on the multifunctional steering wheel.
In one example, the user may personalize the custom button to trigger a certain shooting mode, such as "all shoot together", "DVR shooting", "OMS shooting", "unmanned aerial vehicle and DVR together", "unmanned aerial vehicle and OMS together", "DVR and OMS together", and so on. When the user presses the custom button, the cabin software system sends shooting instructions according to the set shooting mode.
When IVI touch screen control is used, the user can directly control shooting and its modes on the IVI screen; four buttons ("vehicle-mounted unmanned aerial vehicle shooting", "DVR shooting", "OMS shooting" and "all shoot together") are displayed on the screen, and the user can select a shooting mode as needed.
3. Popular check-in point reminders and shooting position presets in map navigation.
Specifically, when the user sets a driving destination and a navigation route, the map navigation system recommends popular check-in points suitable for shooting that the navigation route passes through, or that lie within 5 km of it, and synchronously recommends reference videos shot by others at those check-in points, so that the user can select a reference video and acquire the corresponding shooting plan information.
In addition, the user can define custom check-in points by presetting positions in the map system, thereby adding target points for planned shooting.
Based on this, the vehicle is positioned in real time through the positioning system. When the current position of the vehicle is 3 km from a preset position, the positioning system notifies the cabin software system, which reminds the user that a shooting position is approaching and asks whether to shoot; the reminder may be given through the intelligent voice assistant, the IVI screen display and other means. When the user chooses to shoot, the vehicle starts shooting once its current position is within 1 km of the preset position, and shoots according to the shooting plan information selected by the user.
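In one illustrative example, this distance-based trigger may be sketched as a small state function; the 3 km and 1 km thresholds are taken from the description above, while the state names are hypothetical.

```python
# Sketch of the distance-based trigger: remind the user at 3 km from the
# preset position, start shooting at 1 km. State names are illustrative.
REMIND_KM = 3.0
START_KM = 1.0

def position_state(distance_km: float) -> str:
    """Map the distance to the preset position onto a trigger state."""
    if distance_km <= START_KM:
        return "start_shooting"
    if distance_km <= REMIND_KM:
        return "remind_user"
    return "idle"
```

In a real vehicle this function would be polled as the positioning system updates, with the "remind_user" state additionally gated on the user's confirmation.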
4. The user generates shooting plan information using a custom video.
As shown in fig. 11, the user may import a favorite video clip through the owner APP in the user client 1103; the owner APP may transfer the video clip to the cabin software system through near field communication (a Bluetooth system) or remote communication (the cloud information center 1102, the TSP and the T-BOX), or the clip may be imported into the cabin software system directly via USB.
After the cabin software system receives the video clip, the intelligent photo album analyzes and disassembles it, and produces suitable shooting mode information for the vehicle, such as the pictures the vehicle-mounted unmanned aerial vehicle needs to shoot, the pictures the DVR system needs to shoot, and the pictures the OMS system needs to shoot. The intelligent photo album generates corresponding shooting plan information based on the shooting mode information and transmits it to the cabin software system, which dispatches the unmanned aerial vehicle, the DVR system and the OMS system to execute the shooting plan.
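In one illustrative example, the disassembly of a reference clip into per-device shooting mode information may be sketched as a mapping from segment view types to shooting devices; the segment tags and the mapping itself are assumptions for illustration only.

```python
# Hypothetical mapping from a segment's view type to the device that
# should reproduce it; the tags and assignments are illustrative.
SEGMENT_TO_DEVICE = {
    "aerial": "DRONE",   # shots looking down at the vehicle
    "road": "DVR",       # forward road shots
    "cabin": "OMS",      # in-cabin occupant shots
}

def make_shooting_plan(segments):
    """Group segment durations by the device assumed to shoot them."""
    plan = {}
    for seg in segments:
        device = SEGMENT_TO_DEVICE.get(seg["view"])
        if device is not None:
            plan.setdefault(device, []).append(seg["duration_s"])
    return plan

plan = make_shooting_plan([
    {"view": "aerial", "duration_s": 10},
    {"view": "cabin", "duration_s": 5},
])
```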
5. Producing a road-trip VLOG video.
Specifically, after all camera positions corresponding to the vehicle end (unmanned aerial vehicle, DVR and OMS) have finished shooting, the footage is transferred to and stored in the intelligent photo album. The intelligent photo album provides a plurality of video style categories, a plurality of styles within each category, and a preset clipping template corresponding to each style. The user can select and bookmark favorite styles, and the selection and bookmark information is retained in the user account. In addition, the user may upload preferred videos to the intelligent photo album, which analyzes them to determine the user's video preferences.
Based on the above, the intelligent photo album automatically clips candidate videos according to preset clipping templates and displays a plurality of candidate videos for the user to choose from; if none of the candidate videos is selected by the user, the material is clipped again using other preset clipping templates until the user selects a favorite candidate video. In this process, the intelligent photo album learns the user's video preferences, improving how well the candidate videos match the user's needs.
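In one illustrative example, this clip-and-reoffer loop may be sketched as below; the `clip` and `user_picks` callables stand in for the intelligent photo album's editing engine and the user's choice, and all names are hypothetical.

```python
# Hypothetical sketch of the selection loop: clip candidates with one batch
# of templates; if the user rejects them all, fall through to the next
# batch of preset templates until a favorite is chosen or batches run out.
def select_candidate(to_edit, template_batches, clip, user_picks):
    for templates in template_batches:
        candidates = [clip(to_edit, t) for t in templates]
        choice = user_picks(candidates)
        if choice is not None:
            return choice
    return None

# Stand-ins for the editing engine and a user who only likes one style.
clip = lambda video, template: f"{video}-{template}"

def user_picks(candidates):
    for c in candidates:
        if c.endswith("retro"):
            return c
    return None

chosen = select_candidate(
    "trip", [["fast", "calm"], ["retro", "vivid"]], clip, user_picks
)
```

A production system would also feed the rejections back into the preference model, which the sketch omits.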
In this embodiment, the scheme of the embodiment of the present application provides rich video shooting, editing and sharing functions. By combining the vehicle-mounted unmanned aerial vehicle with the omnidirectional cameras inside and outside the vehicle, multi-position, multi-scene shooting is realized; and through intelligent creation by the intelligent photo album, the user experience of shooting and editing video during a self-driving trip is improved, as is the efficiency of video shooting and production.
It should be understood that, although the steps in the flowcharts related to the above embodiments are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts may include a plurality of sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and which are not necessarily performed sequentially but may be performed in turns or alternately with at least some of the other steps, sub-steps or stages.
Based on the same inventive concept, the embodiment of the application also provides a video shooting device used during driving for realizing the above video shooting method during driving. The implementation scheme by which the device solves the problem is similar to that described for the method above; therefore, for specific limitations in the one or more embodiments of the video shooting device provided below, reference may be made to the limitations of the video shooting method above, which are not repeated here.
In an exemplary embodiment, as shown in fig. 12, there is provided a video capturing apparatus 1200 during driving, including a plan acquisition module 1201 and an instruction sending module 1202, wherein:
The plan acquisition module 1201 is configured to acquire the set shooting plan information, where the shooting plan information includes a shooting instruction set and a shooting start condition, and the shooting instruction set includes shooting instructions corresponding to at least one of a plurality of different shooting devices;
The instruction sending module 1202 is configured to send, when it is detected that the shooting start condition is met, a shooting instruction included in the shooting instruction set to a corresponding shooting device, so as to control the unmanned aerial vehicle to fly and shoot a first video based on the shooting instruction, and/or control the vehicle-mounted video recording device to shoot a second video based on the shooting instruction.
Further, the plan acquisition module 1201 is specifically configured to: determine a plurality of recommended places on a travel path based on the travel path between the travel start point and the travel end point of the vehicle; obtain environment information corresponding to each recommended place; calculate a recommendation degree of each recommended place based on the environment information, the framing preference of the current user of the vehicle and the respective corresponding influence coefficients; determine at least one target place from the plurality of recommended places based on the recommendation degree; and obtain corresponding shooting plan information for each target place, where the shooting instruction set in the shooting plan information corresponding to each target place is determined based on a reference video corresponding to that target place.
Further, the plan acquisition module 1201 is specifically further configured to obtain the shooting location of each reference video in a cloud video library and/or a shooting location preset by the current user of the vehicle, and, if a shooting location of a reference video and/or a shooting location preset by the current user matches the geographic area corresponding to the travel path, determine that location as a recommended place.
Further, the environment information includes time information, weather information and/or traffic flow information, and the plan acquisition module 1201 is specifically further configured to: determine a first recommendation score for each recommended place based on the time information, weather information and traffic flow information corresponding to it; determine a second recommendation score for each recommended place based on the framing preference of the current user of the vehicle, where the framing preference is determined from the reference videos the current user has shown interest in and the videos the current user has stored in the intelligent photo album and the cloud; determine the recommendation degree of each recommended place from the first recommendation score, the second recommendation score and the influence coefficients corresponding to each; and determine a recommended place whose recommendation degree satisfies a preset screening condition as a target place.
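In one illustrative example, the recommendation degree described above may be computed as a weighted sum of the two scores; the weights (influence coefficients) and the screening threshold here are assumed values for illustration only.

```python
# Illustrative computation of the recommendation degree: a weighted sum of
# an environment score and a framing-preference score. The influence
# coefficients and the screening threshold below are assumptions.
def recommendation_degree(env_score, preference_score,
                          env_weight=0.6, pref_weight=0.4):
    return env_weight * env_score + pref_weight * preference_score

def pick_targets(places, threshold=0.7):
    """Keep recommended places whose degree meets the screening condition."""
    return [name for name, env, pref in places
            if recommendation_degree(env, pref) >= threshold]

places = [("lakeside", 0.9, 0.8), ("parking lot", 0.3, 0.2)]
targets = pick_targets(places)
```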
Further, the shooting start condition includes a distance start condition, and the instruction sending module 1202 is specifically configured to, for each target place: if the distance between the vehicle and the target place reaches a preset distance value, output shooting prompt information through the intelligent voice assistant and/or the central control screen; and, after receiving feedback information confirming shooting in response to the prompt, send each shooting instruction in the shooting instruction set corresponding to the target place to the corresponding shooting device once the distance between the vehicle and the target place has further decreased and the distance start condition is met.
The device further comprises a plan generation module, specifically configured to analyze an imported reference video to obtain shooting mode information of the reference video, where the shooting mode information at least includes the corresponding shooting device, camera position and shooting angle, and to generate corresponding shooting plan information based on the shooting mode information of the reference video.
Further, the vehicle-mounted video recording device includes an automobile data recorder DVR and a member monitoring system OMS, and the instruction sending module is specifically further configured to, when the shooting start condition is met, send a first shooting instruction to the unmanned aerial vehicle, a second shooting instruction to the DVR and a third shooting instruction to the OMS.
The device further comprises a video generation module, specifically configured to: receive the first video and/or the second video and store them in the intelligent photo album as videos to be edited; determine, from the preset clipping templates of the intelligent photo album and based on the video preferences of the current user of the vehicle, a set number of target clipping templates corresponding to those preferences; perform video editing on the videos to be edited based on each target clipping template to obtain a set number of candidate videos; and display the set number of candidate videos through the user client and/or the central control screen for selection by the current user of the vehicle.
All or part of the modules in the above video shooting device may be realized by software, by hardware, or by a combination thereof. The modules can be embedded in, or independent of, a processor in the new energy automobile in hardware form, or stored in a memory in the new energy automobile in software form, so that the processor can invoke and execute the operations corresponding to the modules.
In an exemplary embodiment, an electronic device, which may be a server, is provided; its internal structure may be as shown in fig. 13. The electronic device includes a processor, a memory, an input/output interface (I/O) and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. The processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs and a database. The internal memory provides an environment for the operation of the operating system and the computer programs in the non-volatile storage medium. The database of the electronic device is used to store data such as preset reference videos, shot videos and preset clipping templates. The input/output interface of the electronic device is used to exchange information between the processor and external devices. The communication interface of the electronic device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements a method of video shooting during driving.
It will be appreciated by those skilled in the art that the structure shown in fig. 13 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the electronic device to which the present application is applied, and that a particular electronic device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In an exemplary embodiment, a new energy automobile is provided; a structural diagram of the new energy automobile may be as shown in fig. 14. The new energy automobile includes a cabin software system 1401, a motor controller 1402, a driving motor 1403 and a transmission system 1404, and the cabin software system is used to implement the steps in the above video shooting method embodiments.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are both information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data are required to meet the related regulations.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which, when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory can include Random Access Memory (RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can take various forms such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processor referred to in the embodiments provided in the present application may be a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, a data processing logic unit based on quantum computing, or the like, but is not limited thereto.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of technical features involves no contradiction, it should be considered within the scope of this description.
The foregoing examples represent only a few embodiments of the application and are described in detail, but are not thereby to be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the spirit of the application, all of which fall within the protection scope of the application. Accordingly, the scope of the application should be determined by the appended claims.