Disclosure of Invention
The embodiments of the present application provide an image processing method and apparatus, which are used to solve the technical problems of high cost and low efficiency of image acquisition and manual labeling in the prior art.
A first aspect of an embodiment of the present application provides a method for image processing, including:
acquiring a first two-dimensional image of a vehicle in a driving environment; the first two-dimensional image is marked with a target component of the vehicle;
acquiring a three-dimensional model of the target component and semantic information of the vehicle when the target component is in a first state;
generating a second two-dimensional image corresponding to the vehicle when the target component is in the first state by using the first two-dimensional image, the three-dimensional model of the target component, and the usage rule of the target component in the vehicle;
and establishing an association relation between the second two-dimensional image and the semantic information.
According to the method and the apparatus, a large number of images do not need to be manually collected and labeled; the association relationship between the second two-dimensional image of the target component in the first state and the semantic information can be obtained based on image processing by combining the three-dimensional model of the target component with the two-dimensional image of the vehicle in the driving environment, so that efficiency and flexibility are greatly improved.
Optionally, the target component is a moving component;
the generating, using the first two-dimensional image, the three-dimensional model of the target component, and the usage rule of the target component in the vehicle, a second two-dimensional image corresponding to the vehicle when the target component is in the first state includes:
rendering by utilizing the three-dimensional model of the target component to obtain a depth map of the target component;
restoring three-dimensional point cloud data of the target component by utilizing the depth map of the target component and the first two-dimensional image;
translating or rotating the target component by utilizing the three-dimensional point cloud data and the motion rule of the moving component in the use of the vehicle, and generating a target three-dimensional image corresponding to the vehicle when the target component is in the first state;
and mapping the target three-dimensional image to obtain the second two-dimensional image.
Optionally, the moving component includes: a door, a hood, and a trunk.
Optionally, if there is a target area in the second two-dimensional image that is not visible in the first two-dimensional image, the method further includes:
acquiring an environment map of the first two-dimensional image;
and rendering the target area by using the environment map, and fusing the rendered target area with the second two-dimensional image.
Optionally, the target component is a lighting component;
the generating, using the first two-dimensional image, the three-dimensional model of the target component, and the usage rule of the target component in the vehicle, a second two-dimensional image corresponding to the vehicle when the target component is in the first state includes:
performing two-dimensional projection by using the three-dimensional model of the target component to obtain a projection area of the target component in the first two-dimensional image;
and editing the color of the projection area by utilizing the lighting rules of the lighting component in the use of the vehicle, and generating a second two-dimensional image corresponding to the vehicle.
Optionally, the lighting component includes: a vehicle lamp.
Optionally, the method further comprises:
filling a hole region of the second two-dimensional image; and smoothing and filtering the filled two-dimensional image. In this way, the second two-dimensional image has a better effect, and semantic information can be accurately identified from the second two-dimensional image.
Optionally, the semantic information of the vehicle when the target component is in the first state includes one or more of the following:
the semantic information of the vehicle when the vehicle door is in an open state is: persons in the vehicle are getting off;
the semantic information of the vehicle when the trunk is in an open state is: persons in the vehicle need to pick up or load goods;
the semantic information of the vehicle when the hood is in an open state is: the vehicle has a failure;
the semantic information of the vehicle when the left lamp of the vehicle is lit yellow is: the vehicle is turning left;
the semantic information of the vehicle when the right lamp of the vehicle is lit yellow is: the vehicle is turning right.
A second aspect of an embodiment of the present application provides an apparatus for image processing, including:
the first acquisition module is used for acquiring a first two-dimensional image of the vehicle in the driving environment; the first two-dimensional image is marked with a target component of the vehicle;
the second acquisition module is used for acquiring the three-dimensional model of the target component and semantic information of the vehicle when the target component is in the first state;
the generation module is used for generating a second two-dimensional image corresponding to the vehicle when the target component is in the first state by using the first two-dimensional image, the three-dimensional model of the target component, and the usage rule of the target component in the vehicle;
the establishing module is used for establishing the association relation between the second two-dimensional image and the semantic information.
Optionally, the target component is a moving component;
the generating module is specifically configured to:
rendering by utilizing the three-dimensional model of the target component to obtain a depth map of the target component;
restoring three-dimensional point cloud data of the target component by utilizing the depth map of the target component and the first two-dimensional image;
translating or rotating the target component by utilizing the three-dimensional point cloud data and the motion rule of the moving component in the use of the vehicle, and generating a target three-dimensional image corresponding to the vehicle when the target component is in the first state;
and mapping the target three-dimensional image to obtain the second two-dimensional image.
Optionally, the moving component includes: a door, a hood, and a trunk.
Optionally, if there is a target area in the second two-dimensional image that is not visible in the first two-dimensional image, the generating module is further configured to:
acquiring an environment map of the first two-dimensional image;
and rendering the target area by using the environment map, and fusing the rendered target area with the second two-dimensional image.
Optionally, the target component is a lighting component;
the generating module is specifically configured to:
performing two-dimensional projection by using the three-dimensional model of the target component to obtain a projection area of the target component in the first two-dimensional image;
and editing the color of the projection area by utilizing the lighting rules of the lighting component in the use of the vehicle, and generating a second two-dimensional image corresponding to the vehicle.
Optionally, the lighting component includes: a vehicle lamp.
Optionally, the apparatus further includes:
the optimizing module is used for filling a hole region of the second two-dimensional image, and smoothing and filtering the filled two-dimensional image.
Optionally, the semantic information of the vehicle when the target component is in the first state includes one or more of the following:
the semantic information of the vehicle when the vehicle door is in an open state is: persons in the vehicle are getting off;
the semantic information of the vehicle when the trunk is in an open state is: persons in the vehicle need to pick up or load goods;
the semantic information of the vehicle when the hood is in an open state is: the vehicle has a failure;
the semantic information of the vehicle when the left lamp of the vehicle is lit yellow is: the vehicle is turning left;
the semantic information of the vehicle when the right lamp of the vehicle is lit yellow is: the vehicle is turning right.
A third aspect of the embodiments of the present application provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the preceding first aspects.
A fourth aspect of the embodiments of the present application provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of the preceding first aspects.
According to a fifth aspect of the present application, there is provided a computer program product, including: a computer program stored in a readable storage medium, from which at least one processor of an electronic device can read the computer program; the at least one processor executes the computer program to cause the electronic device to perform the method of the first aspect.
In summary, the beneficial effects of the embodiments of the present application compared with the prior art are:
the embodiments of the present application provide an image processing method and apparatus, which can construct the association relationship between the second two-dimensional image of the target component in the first state and the semantic information according to the first two-dimensional image, marked with the target component of the vehicle, captured in the driving environment, the three-dimensional model of the target component, and the usage rule of the target component in the vehicle. In the embodiments of the present application, a large number of images do not need to be manually collected and labeled; the association relationship between the second two-dimensional image of the target component in the first state and the semantic information can be obtained based on image processing by combining the three-dimensional model of the target component with the two-dimensional image of the vehicle in the driving environment, so that efficiency and flexibility are greatly improved.
Detailed Description
Exemplary embodiments of the present application are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present application to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness. The following embodiments and features of the embodiments may be combined with each other without conflict.
The image processing method of the embodiments of the present application may be applied to a device having an image processing capability, for example, a terminal or a server, where the terminal may include electronic devices such as a mobile phone, a tablet computer, a notebook computer, or a desktop computer. The specific device to which the method is applied is not limited in the embodiments of the present application.
The driving environment described in the embodiments of the present application may be a real environment in which a vehicle is driven, rather than a virtual scene similar to a game.
The three-dimensional model of the target component described in the embodiments of the present application may be a computer-aided design (CAD) model or the like, which may include depth information of the target component and the like.
The target component in the embodiments of the present application may be a component of the vehicle whose different states can reflect different semantic information of the vehicle. The semantic information of the vehicle described in the embodiments of the present application may be information reflecting the state of the vehicle. By way of example, the target component may include: a vehicle door, a trunk, a hood, or a vehicle lamp. The semantic information of the vehicle when the vehicle door is in an open state is: persons in the vehicle are getting off; the semantic information of the vehicle when the trunk is in an open state is: persons in the vehicle need to pick up or load goods; the semantic information of the vehicle when the hood is in an open state is: the vehicle has a failure; the semantic information of the vehicle when the left lamp of the vehicle is lit yellow is: the vehicle is turning left; the semantic information of the vehicle when the right lamp of the vehicle is lit yellow is: the vehicle is turning right; and so on.
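For illustration only, the correspondence listed above can be written down as a simple lookup table. The following is a minimal sketch in Python; the (component, state) key convention is an assumption of this sketch, not a data format defined by this application.

```python
# A hypothetical state-to-semantics lookup table built from the examples above.
SEMANTICS = {
    ("door", "open"): "persons in the vehicle are getting off",
    ("trunk", "open"): "persons in the vehicle need to pick up or load goods",
    ("hood", "open"): "the vehicle has a failure",
    ("left_lamp", "lit_yellow"): "the vehicle is turning left",
    ("right_lamp", "lit_yellow"): "the vehicle is turning right",
}

print(SEMANTICS[("door", "open")])  # -> persons in the vehicle are getting off
```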
The usage rule of the target component in the vehicle described in the embodiments of the present application may be the rule that the target component follows when used in the vehicle; for example, a door of a vehicle rotates about its hinge axis, and so on.
Fig. 1 is a flowchart of a method for image processing according to an embodiment of the present application. As shown in fig. 1, the method specifically includes the following steps:
S101: acquiring a first two-dimensional image of a vehicle in a driving environment; the first two-dimensional image is labeled with a target component of the vehicle.
In the embodiments of the present application, a two-dimensional image containing the vehicle can be captured in the driving environment of the vehicle, and the target component of the vehicle is marked in the image. The target component may be marked with a six-degree-of-freedom pose or in any other feasible manner.
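As an illustration of what such a labeled image might look like in code, the following minimal sketch records a six-degree-of-freedom pose per component; every field name here is a hypothetical choice of this sketch, not an annotation format prescribed by this application.

```python
# A minimal, hypothetical annotation record for the first two-dimensional image.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ComponentAnnotation:
    component: str            # e.g. "door", "hood", "trunk", "lamp"
    translation: List[float]  # (x, y, z) position relative to the camera
    rotation: List[float]     # (roll, pitch, yaw) orientation in radians

@dataclass
class FirstImageLabel:
    image_path: str
    vehicle_model: str        # used to look up the CAD model of the vehicle
    components: List[ComponentAnnotation] = field(default_factory=list)

label = FirstImageLabel(
    image_path="frame_000123.png",
    vehicle_model="sedan_a",
    components=[ComponentAnnotation("door", [1.2, 0.0, 4.5], [0.0, 0.0, 0.1])],
)
```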
In a scene such as a game or graphic production, images of a vehicle in various states may be obtained by using a vehicle model or the like. However, because the environment of a game or graphic production differs greatly from the actual driving environment, if images obtained in such an environment are used to identify the semantic information of a vehicle in actual driving, the recognition may be inaccurate or may fail altogether.
Therefore, in the embodiments of the present application, the two-dimensional image of the vehicle is acquired in the driving environment, and the second two-dimensional image is obtained based on this first two-dimensional image, so that the second two-dimensional image is consistent with the driving environment; when the semantic information of the vehicle is identified by using the second two-dimensional image, a more accurate recognition result can be obtained.
S102: and acquiring a three-dimensional model of the target component and semantic information of the vehicle in the first state of the target component.
In the embodiments of the present application, the three-dimensional model of the target component and the semantic information of the vehicle when the target component is in the first state may be obtained locally or over a network; this is not specifically limited in the embodiments of the present application.
In a specific application, there may be one target component or a plurality of target components. A three-dimensional model of the vehicle, which includes the three-dimensional model of the target component, may be acquired according to the model of the vehicle or the like; alternatively, a three-dimensional model of each target component may be acquired separately.
S103: and generating a second two-dimensional image corresponding to the vehicle when the target component is in the first state by using the first two-dimensional image, the three-dimensional model of the target component and the use rule of the target component in the vehicle.
In the embodiments of the present application, the target component may have multiple states in the vehicle. For example, when the target component is a vehicle door, the state of the door may include an open state or a closed state; when the target component is a trunk, the state of the trunk may include an open state or a closed state; when the target component is a hood, the state of the hood may include an open state or a closed state; and when the target component is a vehicle lamp, the state of the lamp may include the left lamp being lit yellow or the right lamp being lit yellow; and so on.
Different states of the target component may correspond to different semantic information of the vehicle, so the first state may be determined according to the possible states of the target component itself; this is not limited in the embodiments of the present application.
A complete structure of the target component can be obtained from its three-dimensional model (for example, the model includes both the exterior portion and the interior portion of the component). In the first two-dimensional image, the target component of the vehicle may be in a state A; the target component is rotated, translated, and so on according to its usage rule in the vehicle, and any structure newly exposed by such an operation is completed by combining the three-dimensional model of the target component, so that a second two-dimensional image of the target component in any state different from state A can be synthesized.
In a specific application, the first two-dimensional image, the three-dimensional model of the target component, and the usage rule of the target component in the vehicle may be used in any manner to generate the second two-dimensional image corresponding to the vehicle when the target component is in the first state, which is not specifically limited in the embodiment of the present application.
S104: and establishing an association relation between the second two-dimensional image and the semantic information.
In the embodiments of the present application, after the second two-dimensional image is obtained, the association relationship between the second two-dimensional image and the corresponding semantic information can be established; for example, the second two-dimensional image and the corresponding semantic information may be stored in association. Then, in an automatic driving scene, if an autonomous vehicle captures pictures of surrounding vehicles, those pictures can be matched against the second two-dimensional images to obtain the semantic information of the surrounding vehicles, and a corresponding automatic driving strategy can then be executed.
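As a sketch of how the stored associations might be used for matching, the following Python fragment registers synthesized images with their semantics and answers queries with a naive nearest-neighbour lookup; the global image descriptor here is a stand-in assumption, not a matching method specified by this application.

```python
import cv2
import numpy as np

database = []  # list of (feature_vector, semantic_info) pairs

def image_feature(image: np.ndarray) -> np.ndarray:
    # Stand-in global descriptor: a normalized 32x32 grayscale vector.
    small = cv2.resize(cv2.cvtColor(image, cv2.COLOR_BGR2GRAY), (32, 32))
    v = small.astype(np.float32).ravel()
    return v / (np.linalg.norm(v) + 1e-8)

def register(second_image: np.ndarray, semantics: str) -> None:
    # Store the association between a second image and its semantic information.
    database.append((image_feature(second_image), semantics))

def query(observed_image: np.ndarray) -> str:
    # Match a picture of a surrounding vehicle against the stored images.
    f = image_feature(observed_image)
    scores = [float(f @ g) for g, _ in database]
    return database[int(np.argmax(scores))][1]
```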
Optionally, a hole region may appear in the second two-dimensional image during synthesis, making the second two-dimensional image less effective; semantic information identified by using such an image may be inaccurate. Therefore, the hole region of the second two-dimensional image may be further filled, and the filled two-dimensional image may be smoothed and filtered.
For example, a difference may be computed between the content of the target component in the first two-dimensional image and that in the second two-dimensional image; a region with a large difference is regarded as a hole region and is filled by combining the content of the target component in the first two-dimensional image, and a filtering algorithm such as bilateral filtering may then be applied to smooth the image, so as to obtain a second two-dimensional image with a better effect.
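A minimal sketch of this optimization step is given below, assuming (as in the example above) that hole regions are detected as large differences between the two images; the threshold and filter parameters are illustrative values only.

```python
import cv2
import numpy as np

def fill_and_smooth(first: np.ndarray, second: np.ndarray,
                    diff_thresh: int = 40) -> np.ndarray:
    # Treat pixels where the two images differ strongly as hole regions.
    gray_first = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)
    gray_second = cv2.cvtColor(second, cv2.COLOR_BGR2GRAY)
    hole = cv2.absdiff(gray_first, gray_second) > diff_thresh
    # Fill the hole regions by combining content from the first image.
    filled = second.copy()
    filled[hole] = first[hole]
    # Smooth the filled image while preserving edges (bilateral filtering).
    return cv2.bilateralFilter(filled, d=9, sigmaColor=75, sigmaSpace=75)
```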
In summary, the embodiments of the present application provide an image processing method and apparatus, which can construct an association relationship between a second two-dimensional image of a target component in a first state and semantic information according to a first two-dimensional image, marked with the target component of a vehicle, captured in a driving environment, a three-dimensional model of the target component, and the usage rule of the target component in the vehicle. In the embodiments of the present application, a large number of images do not need to be manually collected and labeled; the association relationship between the second two-dimensional image of the target component in the first state and the semantic information can be obtained based on image processing by combining the three-dimensional model of the target component with the two-dimensional image of the vehicle in the driving environment, so that efficiency and flexibility are greatly improved.
Optionally, the target component is a moving component; the generating, using the first two-dimensional image, the three-dimensional model of the target component, and the usage rule of the target component in the vehicle in S103, a second two-dimensional image corresponding to the vehicle when the target component is in the first state includes:
rendering by utilizing the three-dimensional model of the target component to obtain a depth map of the target component; restoring three-dimensional point cloud data of the target component by utilizing the depth map of the target component and the first two-dimensional image; translating or rotating the target component by utilizing the three-dimensional point cloud data and the motion rule of the moving component in the use of the vehicle, and generating a target three-dimensional image corresponding to the vehicle when the target component is in the first state; and mapping the target three-dimensional image to obtain the second two-dimensional image.
In the embodiments of the present application, the moving component may include: a door, a hood, and a trunk.
The moving component may be divided into a visible region and an invisible region: the visible region is the portion that can be displayed in the first two-dimensional image, and the invisible region is the portion that cannot. For example, if the target component is a door and the door is in a closed state in the first two-dimensional image, the portion visible from outside the vehicle is the visible region, and the portion not visible from outside the vehicle is the invisible region.
For the visible region, a depth map of the target component can be rendered from the three-dimensional model of the target component, and the three-dimensional point cloud data of the target component can be restored by using the depth map together with the first two-dimensional image. The target component is then translated or rotated by using the point cloud data and the motion rule of the moving component during use of the vehicle, generating the target three-dimensional image corresponding to the vehicle when the target component is in the first state. For example, if the door, the hood, and so on of the vehicle are all closed in the first two-dimensional image, a three-dimensional image of the vehicle with the door open, or with the hood open, can be generated based on the above steps.
The target three-dimensional image can then be mapped onto a two-dimensional image by using the camera imaging principle, to obtain the second two-dimensional image.
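Under a pinhole camera model, the visible-region steps above can be sketched as follows: the component's pixels are lifted to a point cloud using the rendered depth map, the cloud is rotated about a hinge axis according to the motion rule, and the result is projected back into the image. The intrinsic matrix K, the component mask, and the hinge parameters are illustrative assumptions of this sketch, not quantities defined by this application.

```python
import cv2
import numpy as np

def open_component(first: np.ndarray, depth: np.ndarray, mask: np.ndarray,
                   K: np.ndarray, hinge_point: np.ndarray,
                   hinge_axis: np.ndarray, angle_rad: float) -> np.ndarray:
    ys, xs = np.nonzero(mask)                 # pixels of the target component
    z = depth[ys, xs].astype(np.float64)
    # Restore the component's three-dimensional point cloud from the depth map.
    pts = np.stack([(xs - K[0, 2]) * z / K[0, 0],
                    (ys - K[1, 2]) * z / K[1, 1],
                    z], axis=1)
    # Motion rule of the moving component: rotate the cloud about the hinge.
    rvec = (hinge_axis / np.linalg.norm(hinge_axis)) * angle_rad
    R, _ = cv2.Rodrigues(rvec.reshape(3, 1))
    moved = (pts - hinge_point) @ R.T + hinge_point
    # Map the target three-dimensional image back to a two-dimensional image.
    uv, _ = cv2.projectPoints(moved, np.zeros((3, 1)), np.zeros((3, 1)), K, None)
    uv = uv.reshape(-1, 2).round().astype(int)
    second = first.copy()
    h, w = first.shape[:2]
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    second[uv[ok, 1], uv[ok, 0]] = first[ys[ok], xs[ok]]
    # The vacated pixels remain as hole regions for the optimization step.
    return second
```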
The invisible region can be completed according to the three-dimensional model of the target component, so that an image of the invisible region is obtained.
Optionally, if there is a target area in the second two-dimensional image that is not visible in the first two-dimensional image, the method further includes: acquiring an environment map of the first two-dimensional image; and rendering the target area by using the environment map, and fusing the rendered target area with the second two-dimensional image.
Because the invisible target region is completed from the three-dimensional model of the target component, it may blend poorly with the environment, resulting in a poor image effect in the second two-dimensional image.
In the embodiments of the present application, an environment map of the first two-dimensional image can be acquired; the target region is rendered by using the environment map, and the rendered target region is fused with the second two-dimensional image, so that the previously invisible target region blends with the driving environment of the first two-dimensional image and a more accurate second two-dimensional image is obtained.
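As one possible realization of the fusion step, the sketch below feathers the region mask and alpha-blends the rendered target region into the second image; the application does not prescribe a specific fusion algorithm, so this choice and its parameters are assumptions. The rendered region and its mask are assumed to be full-frame arrays aligned with the second image.

```python
import cv2
import numpy as np

def fuse_target_region(second: np.ndarray, rendered: np.ndarray,
                       region_mask: np.ndarray,
                       feather: float = 15.0) -> np.ndarray:
    # Soften the binary mask so the rendered region blends into its surroundings.
    alpha = cv2.GaussianBlur(region_mask.astype(np.float32), (0, 0), feather)
    alpha = (alpha / (alpha.max() + 1e-8))[..., None]
    # Alpha-blend the environment-map-rendered region with the second image.
    fused = alpha * rendered.astype(np.float32) \
        + (1.0 - alpha) * second.astype(np.float32)
    return fused.astype(np.uint8)
```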
Optionally, the target component is a lighting component; the generating, using the first two-dimensional image, the three-dimensional model of the target component, and the usage rule of the target component in the vehicle in S103, a second two-dimensional image corresponding to the vehicle when the target component is in the first state includes:
performing two-dimensional projection by using the three-dimensional model of the target component to obtain a projection area of the target component in the first two-dimensional image; and editing the color of the projection area by utilizing the lighting rules of the lighting component in the use of the vehicle, and generating a second two-dimensional image corresponding to the vehicle.
In the embodiments of the present application, the lighting component may be a lamp of the vehicle. For the lighting component, the three-dimensional model of the target component can be directly projected into the first two-dimensional image to obtain the corresponding region, and the color of that region is then edited according to the semantic information of the vehicle. For example, when the semantic information is a left turn, the left headlight is edited to yellow; when the semantic information is parking, the two tail lamps are edited to red; when the semantic information is a hazard warning, the two tail lamps are edited to yellow; and so on.
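A minimal sketch of this branch follows: the lamp model's vertices are projected into the first image with the labeled pose, the convex hull of the projections is taken as the lamp's projection region, and that region is blended toward yellow. The pose (rvec, tvec), the intrinsics K, and the blend weights are illustrative assumptions of this sketch.

```python
import cv2
import numpy as np

def light_lamp_yellow(first: np.ndarray, lamp_vertices: np.ndarray,
                      rvec: np.ndarray, tvec: np.ndarray,
                      K: np.ndarray) -> np.ndarray:
    # Two-dimensional projection of the lamp's three-dimensional model.
    uv, _ = cv2.projectPoints(lamp_vertices, rvec, tvec, K, None)
    hull = cv2.convexHull(uv.reshape(-1, 2).astype(np.int32))
    mask = np.zeros(first.shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(mask, hull, 255)
    # Color editing: blend the projection region toward yellow (BGR order).
    second = first.copy()
    region = mask.astype(bool)
    yellow = np.array([0, 255, 255], dtype=np.float32)
    second[region] = (0.4 * second[region] + 0.6 * yellow).astype(np.uint8)
    return second
```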
By way of example, fig. 2 shows a schematic representation of a second two-dimensional image.
As shown in fig. 2, a two-dimensional image of the target components (a door and a trunk), captured in the driving environment and labeled with six-degree-of-freedom poses, and the CAD three-dimensional models of the target components may be taken as inputs, and two-dimensional images of the vehicle with the door open and with the trunk open are output. Specifically, the three-dimensional models of the door and the trunk are rendered to obtain depth maps of the door and the trunk; three-dimensional point cloud data of the door and the trunk are restored by using the depth maps and the input two-dimensional image, realizing component reconstruction; the door and the trunk are translated or rotated by using the point cloud data and the motion rules of the door and the trunk during use of the vehicle, generating three-dimensional images of the door and the trunk; the three-dimensional images are then projected into two-dimensional images, and the projected two-dimensional images are optimized. For the invisible regions of the door and the trunk, environment mapping and three-dimensional component rendering may be performed.
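For orientation, the fig. 2 pipeline can be expressed by chaining the earlier sketches; this fragment reuses open_component, fuse_target_region, fill_and_smooth, and register from above, and all of its inputs (intrinsics, masks, hinge parameters, the rendered interior) remain illustrative assumptions.

```python
def synthesize_door_open(first, depth, door_mask, K, hinge_point, hinge_axis,
                         rendered_interior, interior_mask):
    # Component reconstruction, motion, and projection (see open_component).
    second = open_component(first, depth, door_mask, K,
                            hinge_point, hinge_axis, angle_rad=1.2)
    # Render and fuse the previously invisible region (see fuse_target_region).
    second = fuse_target_region(second, rendered_interior, interior_mask)
    # Optimize the projected image (see fill_and_smooth).
    second = fill_and_smooth(first, second)
    # Associate the result with its semantic information (see register).
    register(second, "persons in the vehicle are getting off")
    return second
```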
In the embodiments of the present application, a large number of images do not need to be manually collected and labeled; the association relationship between the second two-dimensional image of the target component in the first state and the semantic information can be obtained based on image processing by combining the three-dimensional model of the target component with the two-dimensional image of the vehicle in the driving environment, so that efficiency and flexibility are greatly improved and the cost is lower.
Fig. 3 is a schematic structural diagram of an embodiment of an image processing apparatus provided in the present application. As shown in fig. 3, the apparatus for image processing provided in this embodiment includes:
a first acquisition module 31, configured to acquire a first two-dimensional image of a vehicle in a driving environment, where the first two-dimensional image is marked with a target component of the vehicle;
a second acquisition module 32, configured to acquire a three-dimensional model of the target component and semantic information of the vehicle when the target component is in a first state;
a generating module 33, configured to generate a second two-dimensional image corresponding to the vehicle when the target component is in the first state, using the first two-dimensional image, the three-dimensional model of the target component, and the usage rule of the target component in the vehicle;
and an establishing module 34, configured to establish an association relationship between the second two-dimensional image and the semantic information.
Optionally, the target component is a moving component;
the generating module is specifically configured to:
rendering by utilizing the three-dimensional model of the target component to obtain a depth map of the target component;
restoring three-dimensional point cloud data of the target component by utilizing the depth map of the target component and the first two-dimensional image;
translating or rotating the target component by utilizing the three-dimensional point cloud data and the motion rule of the moving component in the use of the vehicle, and generating a target three-dimensional image corresponding to the vehicle when the target component is in the first state;
and mapping the target three-dimensional image to obtain the second two-dimensional image.
Optionally, the moving component includes: a door, a hood, and a trunk.
Optionally, if there is a target area in the second two-dimensional image that is not visible in the first two-dimensional image, the generating module is further configured to:
acquiring an environment map of the first two-dimensional image;
and rendering the target area by using the environment map, and fusing the rendered target area with the second two-dimensional image.
Optionally, the target component is a lighting component;
the generating module is specifically configured to:
performing two-dimensional projection by using the three-dimensional model of the target component to obtain a projection area of the target component in the first two-dimensional image;
and editing the color of the projection area by utilizing the lighting rules of the lighting component in the use of the vehicle, and generating a second two-dimensional image corresponding to the vehicle.
Optionally, the lighting component includes: a vehicle lamp.
Optionally, the apparatus further includes:
the optimizing module is used for filling a hole region of the second two-dimensional image, and smoothing and filtering the filled two-dimensional image.
Optionally, the semantic information of the vehicle when the target component is in the first state includes one or more of the following:
the semantic information of the vehicle when the vehicle door is in an open state is: persons in the vehicle are getting off;
the semantic information of the vehicle when the trunk is in an open state is: persons in the vehicle need to pick up or load goods;
the semantic information of the vehicle when the hood is in an open state is: the vehicle has a failure;
the semantic information of the vehicle when the left lamp of the vehicle is lit yellow is: the vehicle is turning left;
the semantic information of the vehicle when the right lamp of the vehicle is lit yellow is: the vehicle is turning right.
The embodiments of the present application provide an image processing method and apparatus, which can construct the association relationship between the second two-dimensional image of the target component in the first state and the semantic information according to the first two-dimensional image, marked with the target component of the vehicle, captured in the driving environment, the three-dimensional model of the target component, and the usage rule of the target component in the vehicle. In the embodiments of the present application, a large number of images do not need to be manually collected and labeled; the association relationship between the second two-dimensional image of the target component in the first state and the semantic information can be obtained based on image processing by combining the three-dimensional model of the target component with the two-dimensional image of the vehicle in the driving environment, so that efficiency and flexibility are greatly improved.
The image processing device provided in each embodiment of the present application may be used to execute the method shown in each corresponding embodiment, and its implementation manner and principle are the same and will not be repeated.
According to embodiments of the present application, an electronic device and a readable storage medium are also provided.
Fig. 4 is a block diagram of an electronic device for the method of image processing according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 4, the electronic device includes: one or more processors 401, a memory 402, and interfaces for connecting the components, including a high-speed interface and a low-speed interface. The components are interconnected by different buses and may be mounted on a common motherboard or in other manners as required. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device (such as a display device coupled to an interface). In other embodiments, multiple processors and/or multiple buses may be used with multiple memories, if desired. Likewise, multiple electronic devices may be connected, each providing a portion of the necessary operations (for example, as a server array, a set of blade servers, or a multiprocessor system). One processor 401 is taken as an example in fig. 4.
The memory 402 is a non-transitory computer-readable storage medium provided herein. The memory stores instructions executable by the at least one processor, to cause the at least one processor to perform the method of image processing provided herein. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to perform the method of image processing provided herein.
As a non-transitory computer-readable storage medium, the memory 402 may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the image processing method in the embodiments of the present application (for example, the first acquisition module 31, the second acquisition module 32, the generating module 33, and the establishing module 34 shown in fig. 3). The processor 401 runs the non-transitory software programs, instructions, and modules stored in the memory 402 to execute various functional applications of the server and data processing, that is, to implement the method of image processing in the above method embodiments.
The memory 402 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the electronic device for image processing, and the like. In addition, the memory 402 may include high-speed random access memory, and may further include non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or another non-transitory solid-state storage device. In some embodiments, the memory 402 may optionally include memory located remotely from the processor 401, and such remote memory may be connected to the electronic device for image processing through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for the method of image processing may further include: an input device 403 and an output device 404. The processor 401, the memory 402, the input device 403, and the output device 404 may be connected by a bus or in other manners; connection by a bus is taken as an example in fig. 4.
The input device 403 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device for image processing, for example, input devices such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, or a joystick. The output device 404 may include a display device, auxiliary lighting devices (e.g., LEDs), haptic feedback devices (e.g., vibration motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also referred to as programs, software, software applications, or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic disks, optical disks, memory, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as machine-readable signals. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to an embodiment of the present application, there is also provided a computer program product, including: a computer program stored in a readable storage medium, from which at least one processor of an electronic device can read the computer program; the at least one processor executes the computer program to cause the electronic device to perform the solution provided by any one of the embodiments described above.
According to the technical solutions of the embodiments of the present application, the association relationship between the second two-dimensional image of the target component in the first state and the semantic information can be constructed according to the first two-dimensional image, marked with the target component of the vehicle, captured in the driving environment, the three-dimensional model of the target component, and the usage rule of the target component in the vehicle. In the embodiments of the present application, a large number of images do not need to be manually collected and labeled; the association relationship between the second two-dimensional image of the target component in the first state and the semantic information can be obtained based on image processing by combining the three-dimensional model of the target component with the two-dimensional image of the vehicle in the driving environment, so that efficiency and flexibility are greatly improved.
It should be appreciated that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions disclosed in the present application can be achieved; no limitation is imposed herein.
The above embodiments do not limit the scope of the application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application are intended to be included within the scope of the present application.