KR20010025161A - Method for providing an avatar maker - Google Patents

Method for providing an avatar maker
Download PDF

Info

Publication number
KR20010025161A
Authority
KR
South Korea
Prior art keywords
avatar
natural language
real-time rendering
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
KR1020000030444A
Other languages
Korean (ko)
Inventor
오상준
Original Assignee
조양일
주식회사 디지탈에이전트
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 조양일, 주식회사 디지탈에이전트
Priority to KR1020000030444A
Publication of KR20010025161A
Legal status: Ceased

Links

Classifications

Landscapes

Abstract

PURPOSE: A method for implementing an avatar maker capable of emotion expression is provided, enabling smoother online communication by applying various human facial expressions and gestures, and enabling the production of dramas, films, and similar content by animating 3D characters online. CONSTITUTION: It is detected whether a natural language input is received from a client (S100). If so, a natural language processing module interprets the natural language based on emotion-processing data (S102). The pattern of the natural language is analyzed, and an avatar module combines facial-expression data with lip-sync data (S104). A 3D real-time rendering module performs real-time rendering (S106). The avatar character created by the real-time rendering is displayed on the client terminal (S108).

Description

Translated from Korean
Method for implementing an avatar maker capable of emotion processing {METHOD FOR PROVIDING AN AVATAR MAKER}

The present invention relates to a method for implementing an avatar maker and, more particularly, to a method for implementing an avatar maker capable of emotion processing.

Conventional 3D avatars have been driven by combinations of specific key signals or by the input of specific commands. That is, the user directly selects the desired facial-expression data from among several preset expressions to represent his or her emotional state. For example, if the avatar expression data "laugh" is selected while the avatar is in use, a predefined "laugh" avatar is displayed uniformly.

With such uniform 3D avatars, users feel that their emotions are expressed inadequately and quickly grow tired of them. Moreover, adding a new emotion requires asking the manufacturer to implement it, with the accompanying cost burden.

Accordingly, there has been demand for an avatar expression technology that meets a wider variety of user needs.

Accordingly, the present invention has been devised in view of the above demand, and its object is to provide a method for implementing an emotion-capable avatar maker that analyzes input natural language and expresses the emotion corresponding to the analyzed language.

To achieve this object, the present invention provides a method for implementing an avatar maker capable of emotion processing, comprising the steps of: determining whether a natural language input is received from a client; if so, interpreting the natural language based on emotion-processing data and analyzing its pattern; combining facial-expression and lip-sync data; and displaying the avatar character generated by performing real-time rendering.

Fig. 1 is a schematic block diagram of an avatar maker for performing the method according to the present invention.

Fig. 2 is a flowchart of the process for implementing an emotion-capable avatar according to a preferred embodiment of the present invention.

<Explanation of reference numerals for the principal parts of the drawings>

100: 3D real-time rendering module

102: avatar module

104: facial-expression database

106: lip-sync database

108: natural language processing module

110: emotion-processing database

112: personal emotion-processing database

114: network module

Hereinafter, preferred embodiments of the present invention are described in detail with reference to the accompanying drawings.

As a preliminary note, the 3D avatar according to the present invention is characterized in that, when an external user provides natural language close to everyday practical speech, it interprets that language, analyzes its meaning, and expresses the emotion carried by each word; it further varies the avatar's emotion data according to the other party's language use as perceived through the conversation, or according to the flow of context, thereby imparting a variety of changes to the avatar character's facial expression.

Fig. 1 is a schematic block diagram of an avatar maker for performing the method according to the present invention, comprising a 3D real-time rendering module 100, an avatar module 102, a facial-expression database 104, a lip-sync database 106, a natural language processing module 108, an emotion-processing database 110, a personal emotion-processing database 112, and a network module 114.
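The module composition of Fig. 1 can be sketched as a minimal mock-up. This is an illustration only, not the patent's implementation: every class name, method signature, and the toy word-to-emotion mapping below are assumptions made for the example.

```python
# Hypothetical sketch of the avatar-maker modules of Fig. 1.
# Reference numerals in comments follow the patent; all interfaces
# are assumed for illustration.

class EmotionDatabase:           # 110: emotion-processing database
    def lookup(self, word):
        # Toy mapping from words to emotion labels (assumption).
        return {"haha": "laugh", "sigh": "sad"}.get(word, "neutral")

class NaturalLanguageModule:     # 108: natural language processing module
    def __init__(self, emotions):
        self.emotions = emotions
    def analyze(self, text):
        # Interpret each word against the emotion data (step S102).
        return [self.emotions.lookup(w) for w in text.split()]

class AvatarModule:              # 102: combines expression + lip-sync (S104)
    def combine(self, expressions, text):
        return {"expressions": expressions, "lipsync": list(text)}

class RenderModule:              # 100: 3D real-time rendering (S106, stubbed)
    def render(self, frame):
        return f"<avatar {frame['expressions'][0]}>"

emotions = EmotionDatabase()
nlp = NaturalLanguageModule(emotions)
avatar = AvatarModule()
renderer = RenderModule()

frame = avatar.combine(nlp.analyze("haha nice"), "haha nice")
print(renderer.render(frame))    # e.g. "<avatar laugh>"
```

A real system would replace the stubbed lookup and render calls with the facial-expression database 104, lip-sync database 106, and an actual 3D renderer.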

Hereinafter, the process of implementing an emotion-capable avatar maker according to a preferred embodiment of the present invention is described in detail with reference to the flowchart of Fig. 2, together with the configuration described above.

As shown, in step S100 it is determined whether a natural language input is received from a client.

If the determination in step S100 finds that natural language has been input, the natural language processing module 108 proceeds to step S102, interprets the natural language based on the emotion-processing data, analyzes its pattern, and proceeds to step S104.

In step S104, the avatar module 102 combines facial-expression and lip-sync data and proceeds to step S106.

In step S106, the 3D real-time rendering module 100 performs real-time rendering, displays the resulting avatar character on the client terminal, and the process of Fig. 2 ends.
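The flow of steps S100 through S108 can be sketched as a short pipeline. This is a hedged illustration: the helper functions (`get_input`, `interpret`, `combine`, `render`) and the toy emotion table are stand-ins invented for the example; the patent does not specify these interfaces.

```python
# Illustrative sketch of the Fig. 2 flow (S100–S108); all helpers
# are assumed stand-ins, not the patent's implementation.

EMOTION_DATA = {"lol": "laugh", ":(": "sad"}   # toy emotion-processing data

def get_input():                    # S100: natural language from the client
    return "lol that was great"

def interpret(text):                # S102: interpret input, analyze its pattern
    return [EMOTION_DATA.get(w, "neutral") for w in text.split()]

def combine(expressions, text):     # S104: combine expression + lip-sync data
    return {"expression": expressions[0], "lipsync": text}

def render(frame):                  # S106: real-time rendering (stubbed)
    return f"avatar[{frame['expression']}]: {frame['lipsync']}"

text = get_input()
if text:                            # proceed only when input arrives (S100)
    frame = combine(interpret(text), text)
    print(render(frame))            # S108: display on the client terminal
```

The design point is that the emotion is derived from the language itself rather than selected from a preset menu, which is exactly the contrast the description draws with conventional avatars.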

Since the present invention can apply a wide variety of human facial expressions and gestures, it enables smoother communication in chat and other online interactions. Because 3D characters can be animated online in real time, various video content can be produced with them, such as 3D character announcers and sitcoms, dramas, and films performed by 3D characters. In addition, by automating the lip-sync and facial-expression work that otherwise demands extensive manual effort, the present invention increases productivity in 3D animation production.

Claims (4)

Translated from Korean
1. A method for implementing an avatar maker capable of emotion processing, comprising the steps of:
determining whether a natural language input is received from a client;
if so, interpreting the natural language based on emotion-processing data and analyzing its pattern;
combining facial-expression and lip-sync data; and
displaying the avatar character generated by performing real-time rendering.

2. The method of claim 1, wherein the pattern analysis is performed by a natural language processing module.

3. The method of claim 1, wherein the combination of the facial-expression and lip-sync data is performed by an avatar module.

4. The method of claim 1, wherein the real-time rendering is performed by a 3D real-time rendering module.
KR1020000030444A | 2000-06-02 | 2000-06-02 | Method for providing an avatar maker | Ceased | KR20010025161A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
KR1020000030444A | 2000-06-02 | 2000-06-02 | Method for providing an avatar maker

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
KR1020000030444A | 2000-06-02 | 2000-06-02 | Method for providing an avatar maker

Publications (1)

Publication Number | Publication Date
KR20010025161A (en) | 2001-04-06

Family

ID=19671112

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
KR1020000030444A (Ceased, KR20010025161A (en)) | Method for providing an avatar maker | 2000-06-02 | 2000-06-02

Country Status (1)

Country | Link
KR | KR20010025161A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
KR100362445B1 (en)* | 2000-07-10 | 2002-11-29 | 김정래 | Method and system for providing enterprise information using character
KR20030021525A (en)* | 2001-09-06 | 2003-03-15 | 유주성 | Technology of Personal Community Based 3D Character Interface
KR20030066841A (en)* | 2002-02-05 | 2003-08-14 | 보람연구소(주) | Avatar agent system
WO2004095308A1 (en)* | 2003-04-21 | 2004-11-04 | Eulen, Inc. | Method and system for expressing avatar that correspond to message and sentence inputted of using natural language processing technology
KR100610199B1 (en)* | 2004-06-21 | 2006-08-10 | 에스케이 텔레콤주식회사 | Motion recognition avatar service method and system
KR100801666B1 (en)* | 2006-06-20 | 2008-02-11 | 뷰모션 (주) | Method and system for digital storyboard generation using text-motion conversion
KR101006491B1 (en)* | 2003-06-10 | 2011-01-10 | 윤재민 | Natural language-based emotion recognition and emotional expression system and method
US8226417B2 | 2004-03-26 | 2012-07-24 | A.G.I. Inc. | Will expression model device, psychological effect program, and will expression simulation method
US8396708B2 | 2009-02-18 | 2013-03-12 | Samsung Electronics Co., Ltd. | Facial expression representation apparatus
KR101439212B1 (en)* | 2012-12-04 | 2014-09-12 | (주)에프엑스기어 | Terminal apparatus and method for displaying talking head
KR20160134883A | 2015-04-28 | 2016-11-24 | 동서대학교산학협력단 | Digital actor managing method for image contents
KR20180071833A | 2016-12-20 | 2018-06-28 | 박홍식 | Computer interface management system by 3D digital actor


Similar Documents

Publication | Title
US12367640B2 | Virtual role-based multimodal interaction method, apparatus and system, storage medium, and terminal
CN1326400C | Virtual television telephone device
CN113099298B | Method and device for changing virtual image and terminal equipment
CN110400251A | Method for processing video frequency, device, terminal device and storage medium
CN112099628A | VR interaction method and device based on artificial intelligence, computer equipment and medium
KR101851356B1 | Method for providing intelligent user interface by 3D digital actor
US20030149569A1 | Character animation
JP2023552854A | Human-computer interaction methods, devices, systems, electronic devices, computer-readable media and programs
CN118891616A | A virtual digital human driving method, device, equipment and medium
WO2007098560A1 | An emotion recognition system and method
KR20030007726A | Text to visual speech system and method incorporating facial emotions
KR20010025161A | Method for providing an avatar maker
CN109857352A | Cartoon display method and human-computer interaction device
CN107808191A | Output method and system for multi-modal interaction of a virtual human
KR101981091B1 | Device for creating subtitles that visualize emotion
KR20230102753A | Method, computer device, and computer program to translate audio of video into sign language through avatar
CN112652041A | Virtual image generation method and device, storage medium and electronic equipment
CN108885555A | Interaction method and device based on emotion
CN113824982A | Live broadcast method and device, computer equipment and storage medium
CN117272432A | Device for automatically adding decorative elements in planar design
CN113453027B | Live video and virtual make-up image processing method and device and electronic equipment
Čereković et al. | Multimodal behavior realization for embodied conversational agents
CN119046441A | Information interaction method, device, equipment and medium in virtual reality environment
CN110262867A | A remote control method and device based on an onboard system
Tang et al. | Exploration of AI and AR Technologies in the Character Design of "Dream of the Red Chamber"

Legal Events

Date | Code | Title | Description

PA0109 - Patent application

Patent event code: PA01091R01D
Comment text: Patent Application
Patent event date: 2000-06-02

A201 - Request for examination
PA0201 - Request for examination

Patent event code: PA02012R01D
Patent event date: 2000-06-22
Comment text: Request for Examination of Application

Patent event code: PA02011R01I
Patent event date: 2000-06-02
Comment text: Patent Application

G15R - Request for early publication
A302 - Request for accelerated examination
PA0302 - Request for accelerated examination

Patent event date: 2001-01-11
Patent event code: PA03022R01D
Comment text: Request for Accelerated Examination

Patent event date: 2000-06-02
Patent event code: PA03021R01I
Comment text: Patent Application

PG1501 - Laying open of application

Comment text: Request for Early Opening
Patent event code: PG15011R01I
Patent event date: 2001-01-10

E902 - Notification of reason for refusal
PE0902 - Notice of grounds for rejection

Comment text: Notification of reason for refusal
Patent event date: 2001-08-18
Patent event code: PE09021S01D

E601 - Decision to refuse application
PE0601 - Decision on rejection of patent

Patent event date: 2002-04-10
Comment text: Decision to Refuse Application
Patent event code: PE06012S01D

Patent event date: 2001-08-18
Comment text: Notification of reason for refusal
Patent event code: PE06011S01I

J201 - Request for trial against refusal decision
PJ0201 - Trial against decision of rejection

Patent event date: 2002-05-10
Comment text: Request for Trial against Decision on Refusal
Patent event code: PJ02012R01D

Patent event date: 2002-04-10
Comment text: Decision to Refuse Application
Patent event code: PJ02011S01I

Appeal kind category: Appeal against decision to decline refusal
Decision date: 2002-11-29
Appeal identifier: 2002101001900
Request date: 2002-05-10

J801 - Dismissal of trial

Free format text: REJECTION OF TRIAL FOR APPEAL AGAINST DECISION TO DECLINE REFUSAL REQUESTED 20020510
Effective date: 2002-11-29

PJ0801 - Rejection of trial

Decision date: 2002-11-29
Appeal kind category: Appeal against decision to decline refusal
Appeal identifier: 2002101001900
Request date: 2002-05-10
