CN113781614A - AR Chinese garment changing method - Google Patents

AR Chinese garment changing method

Info

Publication number
CN113781614A
Authority
CN
China
Prior art keywords
human body
human
skeleton
data
chinese
Prior art date: 2021-09-27
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202111135218.4A
Other languages
Chinese (zh)
Inventor
郑倩
徐柯妮
李虹
李清明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Dianji University
Original Assignee
Shanghai Dianji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2021-09-27
Filing date: 2021-09-27
Publication date: 2021-12-10
Application filed by Shanghai Dianji University
Priority to CN202111135218.4A
Publication of CN113781614A
Legal status: Withdrawn (current)


Abstract

Translated from Chinese

The invention relates to an AR Hanfu dressing method, comprising: building a virtual Hanfu model from a picture of the garment or from the physical garment; detecting a human body, tracking skeletal joint points and movements, collecting human skeleton data, and controlling a human body model according to that data; measuring the relative coordinates of the somatosensory device and the human body and applying them to the human body model; and, based on the relative coordinates, forming a correspondence between the virtual Hanfu model and the skeletal joint points. The method detects and tracks a user standing in front of a large screen, reads the user's gestures to select clothes, resizes the selected garment to the user's body shape, and puts it on, realizing a dressing and fitting experience that combines the virtual and the real. Accuracy is high, different body parts are distinguished more clearly, arm and leg movements are recognized better, and the user experience is improved.


Description

AR Chinese garment changing method
Technical Field
The invention relates to AR virtual dressing technology, and in particular to an AR dressing method for Hanfu (traditional Chinese clothing).
Background
Clothing ranks first among the four necessities of human life: clothing, food, housing, and transport. Apparel-related industries are now worth some $3 trillion worldwide. As China's national strength grows, the influence of Hanfu on the world grows with it. Hanfu embodies the whole of Chinese costume culture and is, at the same time, an important carrier of Chinese culture. Hanfu comes in many styles and rich varieties at low prices, yet trying it on remains inconvenient. Solving the Hanfu try-on problem lets consumers try on a variety of Hanfu efficiently and conveniently, which promotes Hanfu itself; during the try-on experience, it also encourages Hanfu enthusiasts to learn more about Hanfu and about traditional culture.
In recent years, "magic fitting mirrors" have been released abroad. They look like the ordinary mirrors found in shopping malls, but an intelligent chip embedded behind the mirror surface turns them into display screens capable of human-computer interaction. These products use Microsoft's Kinect somatosensory technology: the user selects a clothing style through touchless virtual buttons, which demonstrates the convenience of this kind of try-on. Similar fitting mirrors have also appeared in China.
However, current fitting accuracy is not high enough, different body parts are not clearly distinguished, arm and leg movements are not recognized well, and the user experience suffers.
Disclosure of Invention
In view of the above, it is necessary to provide an AR Hanfu dressing method with high fitting accuracy.
An AR Hanfu dressing method, the method comprising:
building a virtual Hanfu model from a Hanfu picture or a physical garment;
detecting a human body, tracking joints and movements of the human skeleton, collecting human skeleton data, and controlling a human body model according to the human skeleton data;
measuring relative coordinates of the somatosensory device and the human body, and applying the relative coordinates to the human body model;
and forming a correspondence between the virtual Hanfu model and the human skeletal joint points based on the relative coordinates.
Further, after the virtual Hanfu model is built from the Hanfu picture or physical garment, the method includes:
unfolding the virtual Hanfu model into a number of small parts;
texturing and coloring each of the parts;
and assigning skin weights to the virtual Hanfu model, exporting an FBX file carrying the skeleton information, importing it into Unity, and adapting it to the Unity skeleton.
Further, detecting a human body, tracking human skeletal joint points and movements, collecting human skeleton data, and controlling a human body model according to the human skeleton data includes:
the camera of the somatosensory device performs skeleton tracking on the image of one or two people in its field of view, tracking a number of skeletal joint points on the human body and collecting human skeleton data;
and the human body model is controlled according to the human skeleton data.
Further, the somatosensory device is a Kinect 2.0.
Further, the method further comprises:
using the KinectManager component in Unity3D to acquire, in real time, the user data captured by the Kinect device, including color image data, depth data, and skeleton data from the ColorImageStream, DepthImageStream, and SkeletonStream.
Further, the method further comprises:
implementing the Kinect management function through three member functions of KinectManager, namely CreateConnection, Update, and ProcessSkeleton, where CreateConnection detects whether the computer is connected to a Kinect device and puts the device into skeleton-capture state; Update refreshes each frame of the picture and detects whether member variables of the KinectManager class have changed; and ProcessSkeleton reads the skeleton data from a skeleton frame and smooths it.
The AR Hanfu dressing method above can detect and track a user in front of a large screen, read the user's gestures to select clothes, resize the clothes to the user's body shape, and put them on, realizing a dressing and fitting experience that combines the virtual and the real. Accuracy is high, different body parts are distinguished more clearly, arm and leg movements are recognized better, and the user experience is improved.
Drawings
FIG. 1 is a flowchart of an AR Hanfu dressing method according to an embodiment;
FIG. 2 is a diagram of a virtual Hanfu model.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in FIG. 1, in one embodiment, an AR Hanfu dressing method includes:
and step S110, establishing a virtual Chinese uniform model according to the Chinese uniform picture or the real object. Firstly, modeling is carried out on an existing Chinese clothes picture or Chinese clothes real object, and the modeling is shown in figure 2. On the basis, the Chinese clothes which are modeled are expanded and divided into small parts to prepare for subsequent charting, and the former two links are mainly used in combination with MAYA (three-dimensional modeling and animation software under Autodesk) and 3D MAX (3D Studio Max, three-dimensional animation rendering and making software based on a PC system). And then, mapping the established model, and realizing the color, the material and the like of the clothes in a professional 3D drawing auxiliary tool. And finally, covering the model with weight, exporting an FBX file with skeleton information, importing UNITY, and adapting to UNITY skeleton to realize the function of tracking the clothing.
Step S120: detect the human body, track the joints and movements of the human skeleton, collect the human skeleton data, and control the human body model according to that data. The RGB camera of the Kinect 2.0 somatosensory device performs skeleton tracking on one or two people within the device's field of view; 20 skeletal joint points on the human body can be tracked, and the captured human skeleton data is used to control the human body model.
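The patent does not show the tracking loop itself. A minimal sketch of the idea in Unity C# follows; KinectWrapper.GetJointPosition is a hypothetical stand-in for whatever joint accessor the somatosensory SDK wrapper actually exposes, and the stub class only makes the sketch self-contained (the 20-joint count is the figure stated above):

    using UnityEngine;

    // Hypothetical stand-in for the SDK wrapper; a real project would call
    // into the Kinect SDK here instead of returning a constant.
    static class KinectWrapper
    {
        public static Vector3 GetJointPosition(int userIndex, int jointIndex)
        {
            return Vector3.zero; // stub
        }
    }

    // Sketch of step S120: drive a rigged body model from tracked joints.
    public class BodyModelController : MonoBehaviour
    {
        const int JointCount = 20; // 20 skeletal joint points, per the text
        public Transform[] modelJoints = new Transform[JointCount]; // rig bones, assigned in the Inspector

        void Update()
        {
            for (int i = 0; i < JointCount; i++)
            {
                if (modelJoints[i] == null) continue;
                // Camera-space joint position of the first tracked user.
                modelJoints[i].position = KinectWrapper.GetJointPosition(0, i);
            }
        }
    }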
Step S130: measure the relative coordinates of the somatosensory device and the human body, and apply them to the human body model. The relative coordinates of the human body with respect to the Kinect 2.0 are measured by the Kinect 2.0 and applied to the human model. Its 3D depth sensor detects depth images and captures the user's 3D movements in real time, while the picture from the Kinect 2.0 RGB camera is displayed at the same time.
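How the sensor-relative coordinates map into the Unity scene is not spelled out in the patent; one plausible sketch is given below, where the sensor's scene position, the mirroring, and the meter-to-unit scale are all assumptions:

    using UnityEngine;

    // Sketch of step S130: convert a Kinect camera-space point (meters,
    // relative to the sensor) into a Unity world-space position.
    public static class SensorSpace
    {
        // Assumed placement of the sensor in the scene, e.g. on top of the screen.
        static readonly Vector3 sensorWorldPos = new Vector3(0f, 1.0f, 0f);
        const float metersToUnits = 1.0f; // assume 1 Kinect meter = 1 Unity unit

        public static Vector3 ToWorld(Vector3 kinectPos)
        {
            // Mirror X so the on-screen model moves like a mirror image of the
            // user, and flip Z because the sensor's forward axis points toward
            // the user while the scene camera faces the opposite way.
            Vector3 p = new Vector3(-kinectPos.x, kinectPos.y, -kinectPos.z) * metersToUnits;
            return sensorWorldPos + p;
        }
    }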
Step S140: form a correspondence between the virtual Hanfu model and the human skeletal joint points based on the relative coordinates. To realize human-computer interaction between the user and the three-dimensional clothing, the user information captured by the Kinect can be acquired in real time inside Unity3D, with the Kinect for Windows SDK (the Kinect software development kit) bridging Unity3D and the Kinect. Before somatosensory interaction can take place, the virtual clothing model imported into Unity3D must be put into correspondence with the human skeletal joint points captured by the Kinect.
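The correspondence can be represented as a simple joint-to-bone table. The sketch below (class and field names are illustrative) snaps each bone of the imported garment rig to its matching joint on the Kinect-driven body model once per frame:

    using UnityEngine;

    // Sketch of step S140: bind the virtual Hanfu bones to skeletal joints.
    public class HanfuBinding : MonoBehaviour
    {
        [System.Serializable]
        public struct BonePair
        {
            public Transform bodyJoint;   // joint on the Kinect-driven body model
            public Transform garmentBone; // matching bone in the imported Hanfu rig
        }

        public BonePair[] pairs; // filled in the Inspector: shoulders, elbows, hips, knees, ...

        void LateUpdate()
        {
            foreach (BonePair pair in pairs)
            {
                pair.garmentBone.position = pair.bodyJoint.position;
                pair.garmentBone.rotation = pair.bodyJoint.rotation;
            }
        }
    }

Copying the transforms in LateUpdate makes the garment follow the joints after the body model has already been updated for the frame, avoiding a one-frame lag.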
The KinectManager component is used in Unity3D to acquire, in real time, the user data captured by the Kinect device, including color image data, depth data, and skeleton data from the ColorImageStream, DepthImageStream, and SkeletonStream.
The Kinect management function is implemented by three member functions in KinectManager: CreateConnection, Update, and ProcessSkeleton. CreateConnection detects whether the computer is connected to a Kinect device and puts the device into skeleton-capture state; Update refreshes each frame of the picture and detects whether member variables of the KinectManager (script component) class have changed; ProcessSkeleton reads the skeleton data from a skeleton frame and smooths it.
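The patent names these three member functions but gives no code. The outline below reconstructs only that structure; everything beyond the three function names, including the stub bodies, is an assumption:

    using UnityEngine;

    // Reconstruction of the KinectManager layout described in the text.
    public class KinectManager : MonoBehaviour
    {
        bool deviceConnected;

        void Start()
        {
            // Detect the device once at startup.
            deviceConnected = CreateConnection();
        }

        // Detect whether a Kinect is attached to the computer and put it
        // into skeleton-capture state.
        bool CreateConnection()
        {
            // Stub: a real implementation would initialize the sensor here
            // and enable its skeleton stream.
            return false;
        }

        // Called once per rendered frame: refresh the picture and react to
        // changes in the manager's member variables.
        void Update()
        {
            if (!deviceConnected) return;
            ProcessSkeleton();
        }

        // Read the skeleton data out of the current skeleton frame and smooth
        // it, e.g. by averaging joint positions over recent frames.
        void ProcessSkeleton()
        {
            // Stub: read joints, apply smoothing, hand the result to the body model.
        }
    }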
With the AR Hanfu dressing method described above, the system detects and tracks a user in front of a large screen, reads the user's gestures to select clothes, resizes the clothes to the user's body shape and puts them on, and realizes a dressing and fitting experience that combines the virtual and the real; a gesture can also trigger a photograph, and a two-dimensional code is generated for the user to download the picture. The method can be applied to offline activities in shopping malls, scenic spots, exhibitions, personnel training, and the like. The software can run on vertical all-in-one machines, projection systems, LED display systems, and other platforms equipped with a Kinect somatosensory device.
The invention realizes human-computer interaction mainly through AR fitting implemented with Unity3D on a PC, fitting the clothes onto a real person. The Kinect somatosensory camera of the Kinect 2.0 device has low latency and high definition. Skeleton tracking captures the human skeleton closely; the captured human skeleton data is used to control the character model; the relative coordinates of the human body with respect to the Kinect are measured by the Kinect and applied to the character model; and the picture from the Kinect RGB camera is displayed at the same time. Accuracy is high, different body parts are distinguished more clearly, arm and leg movements are recognized better, and the user experience is improved. Latency is low, skeleton capture is robust, and the person and the clothing move in synchrony without coming apart.
The above embodiments express only several implementations of the present invention, and while their description is specific and detailed, it should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, and these fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (6)

1. An AR Hanfu dressing method, the method comprising:
building a virtual Hanfu model from a Hanfu picture or a physical garment;
detecting a human body, tracking joints and movements of the human skeleton, collecting human skeleton data, and controlling a human body model according to the human skeleton data;
measuring relative coordinates of the somatosensory device and the human body, and applying the relative coordinates to the human body model;
and forming a correspondence between the virtual Hanfu model and the human skeletal joint points based on the relative coordinates.
2. The AR Hanfu dressing method according to claim 1, wherein, after the virtual Hanfu model is built from the Hanfu picture or physical garment, the method comprises:
unfolding the virtual Hanfu model into a number of small parts;
texturing and coloring each of the parts;
and assigning skin weights to the virtual Hanfu model, exporting an FBX file carrying the skeleton information, importing it into Unity, and adapting it to the Unity skeleton.
3. The AR Hanfu dressing method of claim 1, wherein said detecting a human body, tracking human skeletal joint points and movements, collecting human skeleton data, and controlling a human body model according to said human skeleton data comprises:
the camera of the somatosensory device performs skeleton tracking on the image of one or two people in its field of view, tracking a number of skeletal joint points on the human body and collecting human skeleton data;
and the human body model is controlled according to the human skeleton data.
4. The AR Hanfu dressing method of claim 3, wherein said somatosensory device is a Kinect 2.0.
5. The AR Hanfu dressing method of claim 4, wherein the method further comprises:
using the KinectManager component in Unity3D to acquire, in real time, the user data captured by the Kinect device, including color image data, depth data, and skeleton data from the ColorImageStream, DepthImageStream, and SkeletonStream.
6. The AR Hanfu dressing method of claim 5, wherein the method further comprises:
implementing the Kinect management function through three member functions of KinectManager, namely CreateConnection, Update, and ProcessSkeleton, wherein CreateConnection detects whether the computer is connected to a Kinect device and puts the device into skeleton-capture state; Update refreshes each frame of the picture and detects whether member variables of the KinectManager class have changed; and ProcessSkeleton reads the skeleton data from a skeleton frame and smooths it.
CN202111135218.4A, filed 2021-09-27 (priority 2021-09-27): AR Chinese garment changing method. Status: Withdrawn. Published as CN113781614A (en).

Priority Applications (1)

Application Number: CN202111135218.4A (published as CN113781614A (en))
Priority Date: 2021-09-27
Filing Date: 2021-09-27
Title: AR Chinese garment changing method

Applications Claiming Priority (1)

Application Number: CN202111135218.4A (published as CN113781614A (en))
Priority Date: 2021-09-27
Filing Date: 2021-09-27
Title: AR Chinese garment changing method

Publications (1)

Publication Number: CN113781614A (en)
Publication Date: 2021-12-10

Family

ID=78853597

Family Applications (1)

Application Number: CN202111135218.4A
Title: AR Chinese garment changing method
Priority Date: 2021-09-27
Filing Date: 2021-09-27
Status: Withdrawn
Publication: CN113781614A (en)

Country Status (1)

Country: CN
Publication: CN113781614A (en)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
WW01: Invention patent application withdrawn after publication

Application publication date: 2021-12-10

