WO2002097732A1 - Method for producing avatar using image data and agent system with the avatar - Google Patents

Method for producing avatar using image data and agent system with the avatar

Publication number
WO2002097732A1
Authority
WO
WIPO (PCT)
Prior art keywords
avatar
user
image data
elements
producing
Application number
PCT/KR2001/001270
Other languages
French (fr)
Inventor
Young-Ouk Kim
Youn-Geun Sung
Hyeok Shin
Sung-Woo Lee
Tae-Hee Lee
Hee-Yun Jang
Kyoung-Eun Cha
Byung-Jo Hyun
Original Assignee
Gnb Communications Co., Ltd.
Application filed by Gnb Communications Co., Ltd.
Publication of WO2002097732A1


Abstract

The present invention relates to a method for producing an avatar using a user's image data and to an agent system employing the avatar. Elements for the avatar, for example eyes, nose, mouth, facial appearance and hairstyle, are detected from image data transmitted by the user over a network using a Sobel edge-detection operation, altered in shape and size according to the user's request, and combined to generate an avatar matched to the user's image. An agent system using the produced avatar manages individual user information and transmits data among users.

Description

METHOD FOR PRODUCING AVATAR USING IMAGE DATA AND AGENT
SYSTEM WITH THE AVATAR
Technical Field
The present invention relates to a method for producing an avatar and an agent system using the avatar. In particular, it relates to a method for producing an avatar that expresses an individual's character to the highest degree, and to an agent system for managing individual user information and for transmitting data among users.
Background Art
The word "avatar" comes from avatara that is the English equivalent to an ancient Hindu Sanskrit term that generally translates as "incarnation" indicating a man who comes down from a heaven. More specifically, the term refers to a Hindu god that manifests in the bodily form of a human being and descends to Earth in order to aid a person or persons who are in danger. The term has also been used to refer to a
"virtual personality" or "disembodied personality", represented by a visual image and related to self-consciousness.
Recently, the term has been adopted in Internet circles to mean a character representing a user's incarnation on the World Wide Web. It indicates a user's "disembodied personality" in the virtual world of the Internet, and is a dynamic presence whose features can be changed in this virtual world to resemble animals, plants or humans. The avatar is a tool for expressing the personality and characteristics of a user on the Internet. Also, an avatar can be produced and used to portray the curiosity, self-contentment and ideal figure of a user. For this reason, the production and use of personal avatars has been gaining popularity with Internet users.
Generally, photographs and moving pictures have been used as visual aids to express self-figures on the Internet. However, it is difficult to transmit and process photographs or moving pictures because the amount of data involved is enormous. Moreover, because a user cannot edit or make other changes to either of the above, they are insufficient as visual aids in the quest to express one's personal characteristics in a positive light. Therefore, two-dimensional or three-dimensional avatars have been successfully applied on networks for this purpose, either in interchange with other users' avatars or in combination with data.
For this reason, numerous and various methods have been developed for producing avatars.
Methods used in the production of avatars can be classified as follows: a method in which a designer creates the avatar by drawing a user directly or drawing from a photograph of the user; a method in which a user selects a desired avatar from among a group of pre-existing avatars; and a method in which a user produces an avatar by combining various separate items from a database. FIG. 1 is a schematic illustration of the prior art procedure used by a designer to produce an avatar from a user's image.
Referring to FIG. 1, a prior method for producing an avatar transforms a photograph of a user's face or whole body into an image file and transmits the image file to an avatar production system through a network. Generally, an image file containing a user's photograph may be a bit map (BMP), which reads and stores the photograph as units of bytes; a Graphics Interchange Format (GIF) file, which is a platform-independent intermediate raster format limited to 8 bit color depth (256 colors); or a Joint Photographic Experts Group (JPEG) file, which has a greater color depth than the GIF file, has a smaller file size and can store 24 bit color (16,700,000 colors). In addition, an image file can have other diverse file formats. When the avatar production system provides a designer with an image file received from a user, the designer generates an avatar based on the user's image. If a designer produces an avatar based on a user's image, it is possible to produce an avatar similar in appearance to the user's photograph. Unfortunately, this method requires considerable time, human resources and effort from the designer. Also, a user may feel more affinity for an avatar when the designer captures the character of the person rather than merely the outward appearance shown in the photograph. Thus, it is difficult for designer-based production to consistently provide customer satisfaction.
Therefore, a service exists for the user to directly select an avatar.
FIG. 2 is a screen illustration of a method of avatar production used in a prior art in which the user selects a desired avatar from among pre-existing avatars.
Referring to FIG. 2, the prior art allows for a user to select a desired avatar from a group of pre-existing avatars by connecting to an avatar production system through a network. Generally, the avatar production system stores diverse avatar faces generated by combining facial elements such as hairstyle, facial appearance, eyes, nose and mouth. It provides face avatars in a database to a user on request and assigns an avatar to the user based on his selection. Thus, a user can make use of diverse services using his chosen avatar.
Using the above method, it is easy to have an avatar by selecting an avatar from among a group of pre-existing avatars. However, it is difficult to fulfill every user's request because the number of pre-existing avatars in the avatar production system is limited. Also, according to this method, a plurality of users may be using avatars with identical features.
In order to solve these problems, a method has been disclosed for producing avatars in which diverse avatar elements, such as overall facial appearance, hairstyle, eyes, nose, mouth and ears, are provided, and an avatar is produced by combining the elements selected by a user.
FIG. 3A through FIG. 3F are screen illustrations of a prior method for producing an avatar by combining items selected by a user from among a plurality of avatar elements.
FIG. 3A illustrates diverse examples of facial appearance; FIG. 3B illustrates diverse examples of eye appearance; FIG. 3C illustrates diverse examples of hairstyle; FIG. 3D illustrates diverse examples of mouth; and FIG. 3E illustrates diverse examples of clothes. Any of these could be used in a user's avatar. Eventually, a user can select desired examples from among the diverse examples of facial appearance, eye appearance, hairstyle, mouth appearance and clothes. Once a user has selected elements from among these groups of examples, they can be combined to produce a unique avatar as illustrated in FIG. 3F. Producing an avatar by combining individual elements enables the generation of diverse avatars, because as the number of elements to choose from increases, so does the number of unique avatars that can be created, as the worked example below shows. Thus, this method can produce as many distinct avatars as the diversity of its elements allows. However, the method is still limited, because the shapes and number of available elements are restricted. It is likely that many users of this method will be unable to combine the available elements into the exact combination of characteristics they are hoping to portray.
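To make the combinatorics concrete: the number of distinct avatars is the product of the number of choices per element. With hypothetical figures of 10 options each for facial appearance, eye appearance, hairstyle, mouth and clothes:

$$N = \prod_{i=1}^{k} n_i = 10 \times 10 \times 10 \times 10 \times 10 = 100{,}000$$

So the total grows multiplicatively with each added option, yet the set of reachable appearances always remains bounded by the element library.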
In addition to the above method, there are various methods that produce an avatar by altering a basic pre-existing avatar to match a user's desired shape or by using a basic line like the eyes or the nose. Also, there is a method that produces an avatar by starting with the outline of a user's image. Although the outline method is effective, it requires specific methods for detecting image outlines and processing other data on the web.
For example, the prior art provides a method for detecting outlines using image colors consisting of red, green and blue. It detects the outline by adding the squares of each color value and then taking the square roots of the summed values. However, it is difficult to maintain accuracy when adjacent regions are composed of the same or indefinite colors. Also, when working with a black and white image, it is difficult to detect outlines effectively because the image must first be transformed into color. Moreover, although there are various methods of producing avatars by detecting outlines, an effective service for quickly producing an avatar via these methods has not been provided until now.
Disclosure of the Invention

The present invention is intended to overcome the above-mentioned disadvantages. Therefore, it is an object of the present invention to provide a method for producing an avatar that closely matches a user's requests by detecting edges of avatar elements from the user's image and editing the detected elements in accordance with the user's taste. It is another object of the present invention to provide a more effective method of producing an avatar by choosing only the basic elements that most closely match the data drawn from the user's image from among the diverse available elements.
It is another object of the present invention to provide a server-based method of producing an avatar in order to provide an avatar production module that can be applied to a plurality of users.
It is another object of the present invention to provide a method that will allow for the free and simple production of avatars that express the characteristics chosen by each user, a method in which the basic elements in each avatar are easily detected and changed. It is another object of the present invention to provide an avatar agent system for exchanging data among users with avatars produced via this method.
To achieve the above objects, an avatar production method of the present invention may comprise a step of receiving image data based on a user's face and sent by the user via a network, a step of sequentially detecting elements used for generating an avatar from the image data using a Sobel operation and then displaying the detected elements on a screen, a step of altering the shapes and sizes of the elements displayed on the screen according to the user's request, and a step of generating an avatar matched to the user's image data by combining the altered elements. The image data may have at least one format selected from a group consisting of jpg, gif and png.
The elements for generating an avatar may comprise at least one selected from a group consisting of eyes, eyebrows, nose, mouth, ears, facial appearance and hairstyle.
The step of sequentially detecting elements for generating an avatar may comprise a step of restricting the image data for producing an avatar to a given area according to the user's request, a step of reducing noise in the image data restricted to the given area, a step of generating x-directional differential data and y-directional differential data from the noise-reduced image data using an x-directional Sobel mask and a y-directional Sobel mask, a step of generating Sobel differential data using the x-directional differential data and y-directional differential data, a step of detecting a plurality of candidate regions for each avatar element using the Sobel differential data, a step of labeling each detected candidate region, a step of verifying the size and shape of the plurality of labeled candidate regions, and a step of detecting elements corresponding to a pre-specified basis from among the plurality of candidate regions using the results of the size and shape verifications.

The noise in the image data restricted to a given area may be reduced by calculating the sum over a specified region of the products between each component of the matrix of the image data and the corresponding component of a Mean mask, and then dividing that sum by the component size of the Mean mask. The Mean mask may comprise the series 1, 1, 1, 1, 1, 1, 1, 1, 1.

The x-directional differential data may be generated by calculating the sum over a specified region of the products between each component of the matrix of the noise-free image data and the corresponding component of the x-directional Sobel mask, and then dividing that sum by the component size of the x-directional Sobel mask. The x-directional Sobel mask may comprise the series -1, -2, -1, 0, 0, 0, 1, 2, 1.

The y-directional differential data may be generated in the same manner, using the y-directional Sobel mask, which may comprise the series -1, 0, 1, -2, 0, 2, -1, 0, 1.

The Sobel differential data may be generated by calculating the square root of the sum of the square of the x-directional differential data and the square of the y-directional differential data.

The step of verifying size may comprise a step of calculating the number of pixels and the area within each candidate region, and a step of determining whether each candidate region corresponds to avatar elements by comparing the number of pixels with a reference pixel count and comparing the area with a reference area. The reference pixel count may fall between 50 and 300 in the case of detecting eyes.
The reference areas may fall between 200 and 1100 in the case of detecting eyes.
The step of verifying shape may comprise a step of calculating the width and length of each candidate region, and a step of determining whether each candidate region corresponds to avatar elements by comparing the ratio of width to length with a reference ratio.
The value of the reference ratio may fall between 1.0 and 3.2 in the case of detecting eyes.
The avatar production method may further comprise a grouping step for detecting pair regions from among a plurality of candidate regions in the case of detecting pair elements.
The step of altering the shapes of elements may comprise a step of providing a plurality of alteration points located on the outlines of elements detected by the Sobel operation, and a step of altering the outlines of elements using the alteration points according to a user's request while maintaining the connection between adjacent alteration points.
The avatar production method may further comprise a step of altering transparency of the avatar according to a user's request.
The avatar production method may further comprise a step of incorporating clothes or accessories into the avatar according to a user's request.
Moreover, an avatar production method using image data on a network of the present invention may comprise a step of receiving image data based on a user's face and sent by the user via a network, a step of detecting eyes from the image data and then displaying the detected eyes on a screen, a step of providing pre-existing basic elements sequentially related to each element for producing an avatar according to the eye image displayed on the screen, a step of altering the shapes and sizes of the basic elements according to the user's request, and a step of generating an avatar matched to the user's image data by combining the altered basic elements.
The avatar production method may further comprise a step of moving the detected eye image to coincide with the user's eye image in the user's image data according to the user's request.
The basic elements may be those elements that most closely resemble the user's image data statistically, from among a plurality of avatar elements, in accordance with the location of the eye image, which is detected from the image data and moved to coincide with the user's eye image in the user's image data.
Moreover, an avatar production method using image data on a network of the present invention may comprise a step of receiving image data based on a user's face and sent by the user via a network, a step of providing pre-existing basic elements sequentially related to each element to be used for producing the avatar in the image data displayed on a screen, a step of altering the shapes and sizes of the basic elements according to the user's request, and a step of generating an avatar matched to the user's image data by combining the altered basic elements.
Moreover, an avatar production system using image data on a network of the present invention may comprise an image processor for receiving image data based on a user's face and sent by the user via a network and then displaying the image data on a screen, an element detector for sequentially detecting elements in order to generate an avatar from the image data using a Sobel operation, an element controller for altering the shapes and sizes of the elements according to the user's request, and an avatar generator for generating an avatar matched to the user's image data by combining the altered elements. The element detector may comprise a restrictor for restricting the image data to a given area in order to produce an avatar, a noise reducer for reducing noise of the image data within the restricted area using a Mean mask, a directional differential data generator for generating x-directional differential data and y-directional differential data from the noise-reduced image data using the x-directional Sobel mask and y-directional Sobel mask, a differential data generator for generating Sobel differential data using the x-directional differential data and y-directional differential data, a candidate region detector for detecting a plurality of candidate regions for each avatar element using the Sobel differential data, a labeling device for labeling each detected candidate region, a size verifier for verifying the size of each labeled candidate region, a shape verifier for verifying the shape of each labeled candidate region, and an element detector for detecting elements corresponding to a pre-specified basis from among the plurality of candidate regions using the results of the size and shape verifications.
The size verifier may comprise a size calculator for calculating the number of pixels and the area within each candidate region, and a size confirmation device for determining whether each candidate region corresponds to avatar elements by comparing the number of pixels with a reference pixel count and comparing the area with a reference area.
The shape verifier may comprise a shape calculator for calculating the width and length of each candidate region, and a shape confirmation device for determining whether each candidate region corresponds to avatar elements by comparing the ratio of width to length with a reference ratio.
The avatar production system may further comprise a group establisher for detecting pair regions from among a plurality of candidate regions in the case of detecting pair elements.
The element controller may comprise a basic point controller for providing a plurality of basic points located along the outlines of elements detected by the Sobel operation, and for altering each outline of an element using the basic points according to a user's request while maintaining the connection between adjacent basic points.
The avatar production system may further comprise a transparency controller for altering transparency of the produced avatar according to a user's request.
The avatar production system may further comprise a secondary element controller used for incorporating clothes or accessories into the avatar according to a user's request.
Moreover, an avatar production system using image data on a network of the present invention may comprise an image processor for receiving image data based on a user's face and sent by the user via a network and then displaying the image data on a screen, a basic eyes detector for detecting eyes from the image data and then displaying the detected eyes on a screen, a basic element controller for providing pre-existing basic elements sequentially related to each element and designed to produce an avatar according to the eye image displayed on the screen, an element alteration device for altering the shapes of the basic elements according to the user's request, and an avatar generator for generating a user-controllable avatar matched to the user's image data by combining the altered basic elements.
The avatar production system may further comprise a basic eyes controller for moving the detected eye image to coincide with the eyes in the user's image data according to a user's request.
Moreover, an avatar production system using image data on a network of the present invention may comprise an image processor for receiving image data based on a user's face and sent by the user via a network and then displaying the image data on a screen, a basic element controller for providing pre-existing basic elements sequentially related to each element and designed to produce an avatar based on the image data displayed on a screen, an element alteration device for altering the shapes and sizes of the basic elements according to the user's request, and an avatar generator for generating an avatar matched to the user's image data by combining the altered basic elements.
An agent system using an avatar produced by an avatar production method of the present invention may comprise a database for storing avatar images received from an avatar production system, an interface for transforming the avatar images to be identified in a user's computer system, a controller for providing the transformed avatar images supplied from said interface to a corresponding output device according to the user's request, and a display unit for displaying the avatar image supplied from said interface on the user's computer screen to be identified by the user. The agent system using an avatar may further comprise a voice recognizer for transforming a user's voice received from the voice input device into data to be identified in said controller.
The agent system using an avatar may further comprise a text/voice transformer for transforming textual data received from a text input device into voice data, and a voice output device for outputting the voice data transformed by said text/voice transformer to a user.
The agent system using an avatar may further comprise an avatar motion controller for detecting avatar images with features corresponding to the voice data supplied from said voice output device from among various avatar data stored in said database and then providing the avatar images on said display.
The agent system using an avatar may further comprise a user information manager for managing individual information about an avatar user as well as other users. The agent system using an avatar may further comprise a message processor for delivering message data between users with an avatar image. The message processor may further comprise an image processor for transmitting a sender's avatar image together with the transmitted message data.
The agent system using an avatar may further comprise a schedule manager for managing a user's work program.
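As a rough sketch, the agent system components described above could be wired together as Java interfaces. All type and method names here are hypothetical; the patent specifies the components and their roles, not their code form:

```java
import java.time.LocalDateTime;

// Hypothetical component interfaces for the avatar agent system (a sketch).
interface AvatarDatabase        { byte[] loadAvatarImage(String userId); }        // stores avatar images
interface VoiceRecognizer       { String recognize(byte[] pcmAudio); }            // user's voice -> controller data
interface TextVoiceTransformer  { byte[] toVoice(String text); }                  // textual data -> voice data
interface AvatarMotionController{ byte[] imageForVoice(byte[] voiceData); }       // avatar image matching the speech
interface MessageProcessor      { void send(String toUser, String text,
                                            byte[] senderAvatarImage); }          // message plus sender's avatar
interface ScheduleManager       { void add(String userId, String task,
                                            LocalDateTime when); }                // user's work program
```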
Brief Description of the Drawings
FIG. 1 is a schematic illustration of the prior art procedure used by a designer to produce an avatar from a user's image.
FIG. 2 is a screen illustration of a prior art method of avatar production in which the user selects a desired avatar from among pre-existing avatars.
FIG. 3A through FIG. 3F are screen illustrations of a prior method for producing an avatar by combining items selected by a user from among a plurality of avatar elements.
FIG. 4 is a schematic illustration of a whole system for producing an avatar in accordance with one preferred embodiment of the present invention.
FIG. 5A is a schematic illustration of a client-based avatar production system.
FIG. 5B is a schematic illustration of a server-based avatar production system.
FIG. 6 illustrates an avatar production server in accordance with one preferred embodiment of the present invention.
FIG. 7 illustrates an inner schematic diagram of the avatar production server in accordance with one preferred embodiment of the present invention.
FIGS. 8A and 8B are the x-directional Sobel mask and y-directional Sobel mask, respectively, for detecting edges in accordance with the avatar production method of the present invention.
FIG. 9 is a flow chart illustrating a process used for detecting edges using the Sobel mask in an avatar production method in accordance with one preferred embodiment of the present invention.
FIGS. 10A through 10G illustrate program codes used in the production of avatar elements using the Sobel operation in the avatar production method in accordance with one preferred embodiment of the present invention.
FIGS. 11A through 11I are illustrations of in-screen processes used for the automatic production of avatar faces based on a user's images using edge detection via the Sobel operation in the avatar production method in accordance with one preferred embodiment of the present invention.
FIGS. 12A through 12N are illustrations of in-screen processes in the avatar production method that alter each avatar element and produce an avatar using the altered elements in accordance with one preferred embodiment of the present invention.
FIGS. 13A through 13C are screen illustrations showing an example wherein clothes are incorporated into a full-length avatar produced from a user's image via the avatar production method in accordance with one preferred embodiment of the present invention.
FIG. 14 illustrates a schematic diagram of a web agent system using avatars in accordance with one preferred embodiment of the present invention.
FIG. 15 illustrates an inner schematic diagram of the web agent system including an avatar in accordance with one preferred embodiment of the present invention.
FIG. 16A is a screen illustration of the user's information management function in the avatar web agent system.
FIG. 16B is a screen illustration showing an example wherein the web agent function is used to transmit a message complete with an avatar to another user.
FIG. 16C is a screen illustration showing an example wherein a user's task program is managed using the web agent functions of the avatar.
<Designation of Important Components Represented in the Attached Drawings>

100: avatar production server
10, 20, 30: user's computer system
12, 102: database
110: memory system
112: main memory
114: secondary storage
120: CPU
122: ALU
124: registers
126: control unit
130: input device
140: output device
192: API
202: high level command processor
250: application module controller
251: image processing module mapper
252: image processing module interface
253: edge detection module mapper
254: edge detection module interface
255: element control module mapper
256: element control module interface
257: avatar production module mapper
258: avatar production module interface
260: application module
262: image processing module
264: edge detection module
266: element control module
268: avatar generation module
400: avatar agent system
402: avatar control module
404: interface
406: controller
408: TTS
410: motion controller
412: speech recognizer
414: speaker
416: display
418: microphone
551: user information management module mapper
552: user information management module interface
553: message transmitting/receiving module mapper
554: message transmitting/receiving module interface
555: schedule management module mapper
556: schedule management module interface
562: user information management module
564: message transmitting/receiving module
566: schedule management module
Best Modes for carrying out the Invention
Hereinafter, preferred embodiments of the present invention will be described in more detail with reference to the accompanying drawings. FIG. 4 is a schematic illustration of a whole system for producing an avatar in accordance with one preferred embodiment of the present invention.
Referring to FIG. 4, the system for producing an avatar in the present invention may include an avatar production server 100 coupled to a plurality of users 10, 20 and 30 via a network in order to provide a method of producing avatars simultaneously to those users 10, 20 and 30 who wish to use it. It is preferred to use an on-line service via the Internet to provide the avatar production service most efficiently to the users 10, 20 and 30. Thus, the avatar production server 100 may be a web server for receiving subscription proposals from the users 10, 20 and 30 who wish to use the avatar production service and for providing the avatar production service to the corresponding users. Recently, wireless Internet service using the Wireless Application Protocol (WAP) has come into use. Accordingly, the avatar production service may also be provided through a wireless network to wireless terminals such as Personal Digital Assistants (PDAs) over Code Division Multiple Access (CDMA) or International Mobile Telecommunication 2000 (IMT-2000) networks. Avatar production services are divided into client-based services and server-based services. The former stores the avatar production applications in the user's computer system, for example an avatar production program for generating avatars, avatar element data such as the hairstyle, eyes, nose, ears, mouth and facial appearance constituting the avatars, and avatar data produced according to the user's requests. The latter stores the avatar production applications in the avatar production server, such as a web server, so the user can use the service while connected to the server via a network.
FIG. 5A is a schematic illustration of a client-based avatar production system and FIG. 5B is a schematic illustration of a server-based avatar production system. First referring to FIG. 5A, the client-based avatar production system keeps an avatar production module for producing avatars, avatar element data and avatar data in the database 12 of the user's computer system 10. The user must download the program used for producing avatars by connecting to the avatar production server 100. Generally, the size of an avatar production program is fairly large because it includes diverse avatar element data and must be able to recognize image data and edit element data. Thus, it takes users a long time to download the avatar production program from the avatar production server, and it is inconvenient for users to download the program again every time it is upgraded. Also, because avatar elements such as hairstyles, eyes and noses are the same for everyone, data management is inefficient when all users download all avatar element data.
A server-based avatar production system, as illustrated in FIG. 5B, stores the avatar production program, avatar elements data and avatar data in a database 102 in an avatar production server 100. The avatar production server 100 must have a command processing module for recognizing commands transmitted from a user's computer system 10 and for processing the corresponding tasks, because it produces avatars using commands provided from the user's computer system 10 and manages said commands.
In such a server-based avatar production system, efficient data management is possible, since a user does not have to download a program for producing avatars and the data for producing avatars are managed in the avatar production server. Therefore, although an avatar production system can be client-based or server-based, it is preferred to construct a server-based avatar production system in order to ensure efficient data management and to maximize the user's convenience.
FIG. 6 illustrates an avatar production server in accordance with one preferred embodiment of the present invention. Referring to FIG. 6, an avatar production server 100 includes a memory system 110, at least one high-speed Central Processing Unit (CPU) 120 in conjunction with the memory system 110, an input device 130 and an output device 140.
The CPU 120 includes an Arithmetic Logic Unit (ALU) 122 for performing computations, a collection of registers 124 for temporary storage of data and instructions, and a control unit 126 for controlling operation of the avatar production server 100. The CPU 120 may be a processor having any of a variety of architectures including Alpha from Digital, MIPS from MIPS Technology, NEC, IDT, Siemens, and others, x86 from Intel and others, including Cyrix, AMD, and Nexgen, and PowerPC from IBM and Motorola.
The memory system 110 generally includes high-speed main memory 112 in the form of a medium such as Random Access Memory (RAM) and Read Only Memory (ROM) semiconductor devices, secondary storage 114 in the form of long term storage mediums such as floppy disks, hard disks, tape, CD-ROM, flash memory, etc. and other devices that store data using electrical, magnetic, optical or other recording media. The main memory 112 can also include video display memory for displaying images through a display device. Those skilled in the art will recognize that the memory system 110 can comprise a variety of alternative components having a variety of storage capacities. The input and output devices 130 and 140 are also familiar. The input device 130 can comprise a keyboard, a mouse, and a physical transducer (e.g., a touch screen or microphone), etc. The output device 140 can comprise a display, a printer, and a transducer (e.g., a speaker), etc. Some devices, such as a network interface or a modem, can be used as input and/or output devices. As is familiar to those skilled in the art, the avatar production server 100 further includes an operating system (OS) and at least one application program. The operating system is the set of software that controls operation of the avatar production server 100 and the allocation of resources. The application program is the set of software that performs a task desired by the user, using computer resources made available through the operating system. Both are resident in the illustrated memory system 110.
In accordance with the practices of persons skilled in the art of computer programming, the present invention is described below with reference to acts and symbolic representations of operations that are performed by the avatar production server 100, unless indicated otherwise. Such acts and operations are sometimes referred to as being computer-executed and may be associated with the operating system or the application system as appropriate. It will be appreciated that the acts and symbolically represented operations include the manipulation by the CPU 120 of electrical signals representing data bits which causes a resulting transformation or reduction of the electrical signal representation, and the maintenance of data bits at memory locations in memory system 110 to thereby reconfigure or otherwise alter operation of the avatar production server 100, as well as other processing of signals. The memory locations where data bits are maintained are physical locations that have particular electrical, magnetic, or optical properties corresponding to the data bits.
FIG. 7 illustrates an inner schematic diagram of the avatar production server in accordance with one preferred embodiment of the present invention. This is described below with reference to FIG. 7.
The avatar production server 200 can use, for example, Windows 98 as the OS of the system. The OS provides high level commands to the Application Program Interface (API) 192 and controls the operation of each application module for producing avatars.
Application modules 260 for producing avatars include an image processing module 262 for recognizing a user's image data and processing the data, an edge detection module 264 for detecting each element from the user's image data, an element control module 266 for controlling the avatar element data, and an avatar generation module 268 for generating avatars by combining each element according to a user's request. Not limited to the modules of FIG. 7, an agent module for delivering data to users and for managing users' schedules using avatars can also be included.
An image processing module 262 recognizes and displays image data detailing a user's figure or facial appearance. The image data may be scanned from a photograph taken by a camera, or stored as image files, for example JPEG, GIF or PNG, from a digital camera. The avatar production server stores the received image data when users transmit their own image data via a network. The image processing module 262 selects the corresponding image data from a database and displays it on a screen according to a user's request. An edge detection module 264 detects avatar elements such as the eyes, facial appearance and nose by detecting parts with sudden changes of color or density in the user's image data. The edge detection module 264 is used to group pixels into regions for determining the construction of the image. Edge detection is performed by first-order or second-order differentials and has various methods of operation with diverse masks for the x-direction or y-direction. There are operation methods using various masks, for example the Sobel mask, which is resistant to noise; the Prewitt mask, which is not resistant to noise but intensifies the vertical or horizontal direction; and the Roberts mask, which has a relatively narrow application extent. The present invention uses a Sobel operation with a Sobel mask.
FIGS. 8A and 8B are x-directional Sobel mask and y-directional Sobel mask respectively for detecting edges in accordance with an avatar production method in the present invention.
Referring to FIGS. 8A and 8B, the Sobel masks may each be a 3 x 3 matrix, one for the x direction and one for the y direction. In particular, the x-directional Sobel mask may have elements -1, -2, -1, 0, 0, 0, 1, 2, 1 and the y-directional Sobel mask may have elements -1, 0, 1, -2, 0, 2, -1, 0, 1. Although just one example of the Sobel mask with a 3 x 3 matrix is shown in FIGS. 8A and 8B, the elements can be altered if necessary.
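Since FIGS. 8A and 8B themselves cannot be reproduced here, the two masks they depict can be written out as matrices (rows listed in the element order given above), together with the magnitude rule described later in this section:

$$S_x = \begin{pmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{pmatrix}, \qquad S_y = \begin{pmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{pmatrix}, \qquad G = \sqrt{G_x^2 + G_y^2}$$

where $G_x$ and $G_y$ are the x-directional and y-directional differential data produced by applying $S_x$ and $S_y$ to the image.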
FIG. 9 is a flow chart illustrating a process used for detecting edges using the Sobel mask in an avatar production method in accordance with one preferred embodiment of the present invention. In particular, the operation using the Sobel mask is primarily used for detecting the eyes in a user's image. However, it can be used for detecting the other diverse elements belonging to a face as well. Referring to FIG. 9, an avatar production server receives a user's image data via a network (s10). The image data may come in diverse formats such as jpg, gif or png. The image data transmitted from a user may be displayed on a screen for the user's confirmation. The user can supply a variety of different photographs, for example a head-and-shoulders certificate-size photograph, a full-length photograph or a group photograph in which other faces appear. Thus, it is necessary to restrict the avatar production region so that it matches the size of the face in the displayed image. In order to produce an avatar of the desired size, it is preferred that a specified region be restricted according to the user's specification of a block, and that the avatar production process proceed within that restricted region (s12). After restricting the region within which an avatar is produced, the avatar production server reduces noise in the image within the restricted region (s14). The noise reduction may be achieved through a mean operation. After the noise reduction, edges are detected using the Sobel mask (s16). Then, an appropriate critical value must be selected via one of the diverse methods for detecting a uniform edge from the user's image, for example the Locally Adaptive Thresholding (LAT) method proposed by Robinson.
After detecting the edges in the user's image, a labeling process for separating the detected edges into isolated regions is performed (s18). Generally, the labeling process can use a 4-directional or 8-directional connection trace algorithm.
The processes of size verification (s20) and shape verification (s22) are performed sequentially on the isolated regions separated during the labeling process. In the process of size verification, candidate eye regions are detected from the data acquired by the labeling process; for each independent region among the plurality of isolated regions, a best-approximating rectangle is computed by measuring the maximum and minimum of its x and y coordinates. Because eyes are elliptical, shape verification is then performed on the candidate eye regions detected via size verification. It is effective to detect clearly non-eye regions around the eyes instead of clearly eye regions within the candidate eye regions. The shape verification may comprise verification by the ratio of length to width, verification by the ratio of gross area to the number of edges of candidate regions, verification by detection of linear segments, and verification by variation of brightness and luminosity. Because humans have two eyes, real eye regions may be detected (s26) by grouping (s24) to find pair regions among the candidate eye regions. The detection of real eye regions from pairs of candidate eye regions is accomplished through factors such as area ratio, edge region ratio, location or statistical measurement. When the user's image is in profile rather than frontal view, a frontal avatar face can be produced via slope correction (s28). Generally, a user's facial image can be expressed in the three dimensions x, y and z by variation of the rotation. Thus, slope correction can be achieved in all three dimensions.
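Taken together, the FIG. 9 flow might be sketched in Java as a skeleton like the following. All helper names here (restrict, threshold, label, Region and so on) are hypothetical stand-ins for the routines described in the text and sketched later in this description, not the patent's actual code:

```java
// Skeleton of the FIG. 9 eye-detection flow (s10..s28).
static Region[] detectEyes(int[] image, int imgW, int imgH,
                           int rx, int ry, int rw, int rh) {
    int[] block = restrict(image, imgW, imgH, rx, ry, rw, rh); // s12: user-selected block
    int[] dst = filter(block, rw, rh, MEAN);                   // s14: mean-mask noise reduction
    int[] gx  = filter(dst, rw, rh, SOBELX);                   // s16: x-directional differentials
    int[] gy  = filter(dst, rw, rh, SOBELY);                   //      y-directional differentials
    int[] ga  = sobelMagnitude(gx, gy);                        //      Sobel differential data
    boolean[] edges = threshold(ga);                           //      critical value, e.g. LAT
    java.util.List<Region> regions = label(edges, rw, rh);     // s18: isolate connected regions
    regions.removeIf(r -> !sizeOk(r.pixelCount(), r.width(), r.height())); // s20
    regions.removeIf(r -> !shapeOk(r.width(), r.height()));                // s22
    return groupEyes(regions); // s24-s26: detect the symmetric pair of real eye regions
    // s28: slope correction would follow here for non-frontal images
}
```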
When the detection of eyes is performed, the appearance and location of the detected eyes are displayed on the user's image. Therefore, a process of detecting remaining avatar elements, for example facial appearance, hairstyle, eyebrows, nose, mouth and ears, is performed based on the position and appearance of the eyes. Although the process of detecting eyes is described in detail above, the detection process for the remaining avatar elements can be performed by edge detection using the Sobel mask.
On the other hand, though the processes of detecting avatar elements, including the eyes, can be performed automatically, it is preferred that users be able to alter features of the avatar elements in order to produce diverse features according to each user's own personal characteristics. For example, a user should be able to freely change the locations and sizes of basic features such as the hairstyle, nose, mouth, ears and eyes on a screen. The above process may use a separate edge detection algorithm for each avatar element. It should also be possible to move a detected basic feature up and down as well as right and left. Because the user's characteristics are of primary importance to an avatar, the user's ability to move and alter each avatar element is essential. Therefore, users can move and modify basic features of the avatar elements without using detection algorithms for each avatar element that constructs the facial appearance. The process of detecting and altering avatar elements using detection algorithms can be termed "automatic avatar production", and the process of moving or altering basic features for each avatar element by a user can be termed "hand-operated avatar production". The relative weight given to the detection processes and the alteration processes, and hence to the accuracy of edge detection for each avatar element, can be controlled for efficient service. That is, more processing time can be spent on the detection algorithms for each avatar element if the user wants a more realistic avatar that looks more like a photograph. However, if the user is more interested in emphasizing specific characteristics, then more attention can be focused on data processing and the alteration of avatar elements, even though the edge detection then has lower accuracy.
Nevertheless, programs are necessary for performing the processes of noise reduction, edge detection, labeling and grouping in the production of avatars. Hereinafter, as an embodiment of the avatar production methods of the present invention, programs for server-based avatar production written in Java will be described. FIGS. 10A through 10G illustrate program codes used in the production of avatar elements using the Sobel operation in the avatar production method in accordance with one preferred embodiment of the present invention.
FIG. 10A illustrates a definition of the Mean mask used to calculate an average, and of the x-directional Sobel mask (sobelx) and y-directional Sobel mask (sobely) for the Sobel operation. For masks of a 3 x 3 matrix, the x-directional Sobel mask has components -1, -2, -1, 0, 0, 0, 1, 2, 1 and the y-directional Sobel mask has components -1, 0, 1, -2, 0, 2, -1, 0, 1. Also, the Mean mask for averaging has components 1, 1, 1, 1, 1, 1, 1, 1, 1.
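In Java, such definitions might look like the following (a sketch; the actual FIG. 10A code is not reproduced in this text, so the constant names are assumptions beyond sobelx/sobely):

```java
// 3 x 3 masks stored row-wise, matching the component series given for FIG. 10A.
static final int[] MEAN   = {  1,  1,  1,   1, 1, 1,   1, 1, 1 }; // averaging (noise reduction)
static final int[] SOBELX = { -1, -2, -1,   0, 0, 0,   1, 2, 1 }; // x-directional Sobel mask
static final int[] SOBELY = { -1,  0,  1,  -2, 0, 2,  -1, 0, 1 }; // y-directional Sobel mask
```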
FIG. 10B illustrates a function for restricting image data that has been received from users to a specified size. In the present invention, the image data are restricted to 255 bits and each element is detected within the restricted region.
FIG. 10C illustrates a filtering function for generating pixel data (dst) by reducing noise in a user's image data using the Mean mask. The same filtering function can be used to generate x-directional differential data (gx) or y-directional differential data (gy) from the noise-reduced pixel data (dst) using the x-directional Sobel mask (sobelx) or y-directional Sobel mask (sobely). For noise reduction of the image data, the user's image data restricted to a specified region are provided as the in data (in[ ]) and the Mean mask data for reducing noise are provided as the mask data (mask[ ]). The output is then image data (dst) without noise, according to the inner operation. Moreover, in order to generate x-directional differential data, the noise-reduced image data (dst) are provided as the in data (in[ ]) and the x-directional Sobel mask (sobelx) is provided as the mask data (mask[ ]). As a result, x-directional differential data (gx) are generated by the operations. In the same way, for the purpose of generating y-directional differential data, the inner operations are performed with the noise-reduced image data (dst) and the y-directional Sobel mask (sobely).
For example, in order to generate the x-directional differential data (gx), each element of the noise-reduced image data (dst) is multiplied by the corresponding element of the x-directional Sobel mask and the products are added up. The added result is then divided by the order of the x-directional Sobel mask (sobelx), provided that order is neither 0 nor 1. Equally, in the case of generating the y-directional differential data (gy), each element of the dst data is multiplied by the corresponding element of the y-directional Sobel mask and the products are added up. The added result is then divided by the corresponding order of the y-directional Sobel mask. The above filtering process is applied to the image data (src) within the restricted region and to the element data of the rows and columns of the noise-reduced image data (dst).
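A minimal sketch of such a filtering routine, assuming 8-bit grayscale data in row-major int arrays (the signatures in FIG. 10C are not reproduced here, so names and types are assumptions; the divisor rule follows the common convention of dividing by the component sum, treated as 1 when that sum is 0, since the patent's exact rule is ambiguous):

```java
/**
 * 3x3 convolution over a grayscale image stored row-major. Used with MEAN to
 * produce noise-reduced data (dst), and with SOBELX/SOBELY on dst to produce
 * the gx/gy differential data.
 */
static int[] filter(int[] in, int width, int height, int[] mask) {
    int[] out = new int[width * height];
    // Divisor: sum of mask components, or 1 when that sum is 0 (Sobel masks sum to 0).
    int sum = 0;
    for (int m : mask) sum += m;
    int div = (sum == 0) ? 1 : sum;
    for (int y = 1; y < height - 1; y++) {
        for (int x = 1; x < width - 1; x++) {
            int acc = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    acc += in[(y + dy) * width + (x + dx)] * mask[(dy + 1) * 3 + (dx + 1)];
            out[y * width + x] = acc / div;  // normalized region sum for this pixel
        }
    }
    return out;
}
```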
FIG. 10D illustrates a function for generating Sobel differential data (ga) using x-directional differential data (gx) and y-directional differential data (gy) that are provided in the filtering process. In the present invention, the Sobel differential data (ga) are generated via the square root for the sum of the square (using double type) of x- directional differential data (gx) and the square (using double type) of y-directional differential data (gy).
In the equation above, double type is forcibly used for the square and square root because the results may include decimal values below the integer. The Sobel differential data (ga) generated by the above operations are used to detect avatar elements through the processes of labeling, size verification, shape verification and grouping.
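The corresponding computation might be sketched as follows; the clamp to 255 is an assumption made here for 8-bit display, not something the text specifies:

```java
// Sobel differential data: ga = sqrt(gx^2 + gy^2) per pixel, with double used
// for the square and square-root steps as the text notes.
static int[] sobelMagnitude(int[] gx, int[] gy) {
    int[] ga = new int[gx.length];
    for (int i = 0; i < gx.length; i++) {
        double g = Math.sqrt((double) gx[i] * gx[i] + (double) gy[i] * gy[i]);
        ga[i] = Math.min(255, (int) g);  // truncate back to the 8-bit range
    }
    return ga;
}
```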
After the labeling process of the Sobel differential data (ga), information pertaining to the isolated region is acquired. Also, processes of size verification in FIG. 10E and shape verification in FIG. 10F are accomplished sequentially using this information.
The process of size verification is achieved using the number of pixels in isolated regions and areas of maximum approximate quadrangles including the isolated regions. For example, an element can be identified as eyes if the pixel count in the isolated region falls between 50 and 300, otherwise it is considered background.
Because many regions whose width is greater than their length would be identified as eyes based on the size condition and the number of isolated regions alone, identification is further achieved by measuring the areas of the maximum approximate quadrangles of all isolated regions.
In the present invention, regions whose quadrangle areas fall between 200 and 1,100 are identified as eyes. However, the number of pixels in the isolated regions and the reference quadrangle areas described in FIG. 10E can be altered based on the size and features of a user's image data. That is, the reference pixel counts and areas can be varied in accordance with the results of experimentation conducted during the processing of image data. The process of shape verification includes the step of taking an experimental measurement of the ratio of width to length of eye regions taken from users' facial images, and the step of detecting corresponding regions as eyes by designating an appropriate ratio of width to length based on the result of that experimental measurement. In the present invention, an isolated region whose ratio of width to length is under 1.0 or over 3.2 is not considered to be an eye and is excluded. Here too, it is evident that the reference ratio of width to length can be changed based on the results of experiment.
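The two verification predicates might be expressed as follows, using the reference values stated in the text (50-300 pixels, quadrangle areas 200-1,100, width-to-length ratio 1.0-3.2 for eyes). Representing the maximum approximate quadrangle as an axis-aligned bounding rectangle is an assumption:

```java
// Reference thresholds for eye detection given in the text; tunable by
// experiment for different image sizes, as the description notes.
static final int    MIN_PIXELS = 50,  MAX_PIXELS = 300;   // pixel count inside the region
static final int    MIN_AREA   = 200, MAX_AREA   = 1100;  // bounding-quadrangle area
static final double MIN_RATIO  = 1.0, MAX_RATIO  = 3.2;   // width : length

/** Size verification (FIG. 10E): pixel count and bounding-rectangle area in range. */
static boolean sizeOk(int pixelCount, int boundWidth, int boundHeight) {
    int area = boundWidth * boundHeight;
    return pixelCount >= MIN_PIXELS && pixelCount <= MAX_PIXELS
        && area >= MIN_AREA && area <= MAX_AREA;
}

/** Shape verification (FIG. 10F): eyes are elliptical, wider than they are long. */
static boolean shapeOk(int boundWidth, int boundHeight) {
    double ratio = (double) boundWidth / boundHeight;
    return ratio >= MIN_RATIO && ratio <= MAX_RATIO;
}
```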
FIG. 10G illustrates a grouping function used for the regions that have undergone the steps of size verification and shape verification. The grouping process detects pair regions with similar shape or size from the regions remaining after the processes of edge detection, labeling, size verification and shape verification. When an image is a frontal photograph, the pair regions are detected based on the fact that the eyes are symmetric.
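The text states that pairing uses factors like area ratio, edge region ratio, location or statistical measurement; the following sketch stands in for FIG. 10G with a simpler score (area difference plus vertical misalignment), which is an assumption rather than the patent's exact criterion, and Region is a hypothetical holder for a labeled region:

```java
// Hypothetical labeled-region record: center, bounding size, pixel count.
record Region(int cx, int cy, int width, int height, int pixelCount) {}

// Grouping sketch: pick the pair of surviving candidates that are most similar
// in area and most nearly horizontally aligned (frontal eyes are symmetric).
static Region[] groupEyes(java.util.List<Region> candidates) {
    Region[] best = null;
    double bestScore = Double.MAX_VALUE;
    for (int i = 0; i < candidates.size(); i++) {
        for (int j = i + 1; j < candidates.size(); j++) {
            Region a = candidates.get(i), b = candidates.get(j);
            double score = Math.abs(a.pixelCount() - b.pixelCount())
                         + Math.abs(a.cy() - b.cy());
            if (score < bestScore) { bestScore = score; best = new Region[] { a, b }; }
        }
    }
    return best;  // null if fewer than two candidate regions remain
}
```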
The processes of detecting eyes from a user's image using the Sobel operation are equally applicable to other avatar elements, for example hairstyle, facial appearance, nose, mouth or ears. However, though the eyes and ears are pairs, other elements are not. Thus the process applied to the latter elements may be different from the grouping process used for detecting pair regions in the former elements.
The element control module 266 is used for altering avatar elements, including the eyes and the other elements detected by the Sobel operation based on the user's image above, according to a user's request. The user can produce an avatar by combining elements (e.g. eyes, hairstyle, facial appearance, nose, mouth and ears) detected by the avatar production server, or by altering and moving features of the avatar according to taste. In particular, various fantastic features and designs have recently gained popularity in the quest to express a user's private characteristics. Therefore, many users are concerned with producing an avatar that represents their unique characteristics rather than an avatar that precisely matches their actual appearance.
Therefore, it is preferred that such users can alter the sizes and features of detected avatar elements. The element control module 266 can extend or reduce the features of elements vertically, horizontally and diagonally according to the user's request.
The avatar generation module 268 is used for producing an avatar of a user's face by combining the avatar elements detected by the Sobel operation, or by altering detected avatar elements according to a user's request and then combining the altered elements. Beyond the simple combination of individual elements, avatar production may require blending each avatar element, such as the hairstyle or ears, into the whole facial appearance so that the elements do not appear disjointed.
Each application module can be used as an object by building it as a component that can be combined with other modules, so that each function is easily divided and processed through interfaces.
The avatar production server 200 includes a high level command processor 202. The high level command processor 202 distinguishes each application module 260 in accordance with a high level command from the API 192, decodes the high level command and transmits said command to a corresponding position. The application module controller 250 controls the operation of each application module 262, 264, 266 and 268 in accordance with commands provided from the high level command processor 202.
The high level command processor 202 decides whether or not a specified application module accords with a high level command provided from the API, decodes the high level command into a low level command capable of being perceived by the specified application module when the corresponding application module exists, transmits the command to the specified mapper and controls transmission of the message. The application module controller 250 includes a plurality of mappers 251, 253, 255 and 257 and interfaces 252, 254, 256 and 258 for an image processing module 262, an edge detection module 264, an element control module 266 and an avatar generation module 268.
The image processing module mapper 251, which receives high level commands transmitted from the high level command processor 202 for the recognition and processing of a user's image, converts each high level command into a device level command to be processed in the image processing module 262 and provides the device level command to the image processing module 262 via the image processing module interface 252. In other words, the image processing module mapper 251 recognizes images transmitted from users and displays said images for users.
The edge detection module mapper 253 and edge detection module interface 254 are the parts used for detecting each avatar element for avatar production using image data provided by users. The edge detection module mapper 253 converts high level commands (for using the edge detection module 264) received from the high level command processor 202 into device level commands, and provides these device level commands to the edge detection module 264 via the edge detection module interface 254. The function of the edge detection module 264 is certainly necessary for producing avatars using automatically detected elements. However, it may not be needed for producing avatars by altering and combining basic elements according to a user's taste.
The elements control module 266 is a part used for altering the features and sizes of elements detected from a user's image, or of basic elements, according to a user's requests. Thus, the elements control module mapper 255 receives a high level command from the high level command processor 202 and converts said command into a device level command to be identified by the elements control module 266, which is used for altering the features and size of elements according to a user's request. The device level command is provided to the elements control module 266 via the elements control module interface 256. Thus, users can alter elements to match their desired features through the use of the elements control module 266. In this case, it is preferable to have a boundary with 8 altering points, for example in the directions of 12 o'clock, 1 o'clock, 3 o'clock, 5 o'clock, 6 o'clock, 7 o'clock, 9 o'clock and 11 o'clock, in order to effectively balance alteration performance and processing speed for basic elements or elements detected by the edge detection module.
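By way of a non-limiting sketch, such an 8-point boundary can be modeled as alteration points placed at the 12, 1, 3, 5, 6, 7, 9 and 11 o'clock directions of an elliptical element outline, any of which a user may drag vertically, horizontally or diagonally. The function names and coordinate convention below are assumptions for illustration.

```python
import math

# Alteration points at the clock directions named above, on an elliptical
# element outline; screen coordinates with the y axis pointing downward.

CLOCK_HOURS = [12, 1, 3, 5, 6, 7, 9, 11]

def boundary_points(cx, cy, rx, ry):
    """Place one alteration point per clock direction on an ellipse."""
    points = {}
    for hour in CLOCK_HOURS:
        theta = math.radians(90 - 30 * (hour % 12))  # 12 o'clock = straight up
        points[hour] = (cx + rx * math.cos(theta), cy - ry * math.sin(theta))
    return points

def drag_point(points, hour, dx, dy):
    """Move one alteration point vertically, horizontally or diagonally."""
    x, y = points[hour]
    points[hour] = (x + dx, y + dy)
    return points

pts = boundary_points(cx=100, cy=100, rx=40, ry=55)
drag_point(pts, hour=3, dx=8, dy=0)  # widen the element toward 3 o'clock
```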
Once the edge detection process and the alteration of avatar elements are finished, the high level command processor 202 provides a high level command for generating avatars to the avatar generation module mapper 257. The avatar generation module mapper 257 converts the high level command into a device level command to be processed by the avatar generation module 268 and provides the device level command to the avatar generation module 268 via the avatar generation module interface 258. Thus, the avatar generation module 268 can generate avatars corresponding to a user's image by combining elements detected from a user's image or elements altered according to a user's request. Specific member functions of the API 192 used for achieving the above process are described below.
An open API is used for opening the image processing module, edge detection module, elements control module or avatar generation module according to a user's request.
A close API is used for closing used application modules. A copy API is used for copying a user's image or avatar data produced according to a user's request.
A retrieve API is used for retrieving application modules to be accessed in the avatar production server.
A status API is used for determining operation status of the image processing module, edge detection module, elements control module or avatar generation module.
An initialize API is used for initializing each application module prior to access. A list API is used for identifying an image list transmitted from a user and a list of avatars produced according to a user's request or elements that are used for producing avatars.
A register API is used for registering information pertaining to the avatars produced according to a user's request.
An unregister API is used for excluding registration of a user's image or information pertaining to certain avatars. A read API is used for interpreting image data received from users.
Consequently, a private API is achieved in accordance with the use of the application modules, whereby the avatar production server can detect elements for avatar production, alter each element and produce avatars by combining the elements.
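A minimal sketch of how such a private API might be gathered into a single facade, mirroring the member functions listed above (open, close, copy, retrieve, status, initialize, list, register, unregister and read), is shown below. The class name and the placeholder bodies are assumptions; only the member-function names follow the description.

```python
# Illustrative facade over the avatar production API member functions.

class AvatarProductionAPI:
    def __init__(self):
        # Track the state of each application module and the produced avatars.
        self.modules = {"image_processing": "closed", "edge_detection": "closed",
                        "elements_control": "closed", "avatar_generation": "closed"}
        self.registry = {}

    def open(self, module): self.modules[module] = "open"
    def close(self, module): self.modules[module] = "closed"
    def status(self, module): return self.modules[module]
    def initialize(self, module): self.modules[module] = "initialized"
    def retrieve(self): return [name for name in self.modules]
    def copy(self, data): return dict(data)  # duplicate image or avatar data
    def list(self, user): return self.registry.get(user, [])
    def register(self, user, avatar): self.registry.setdefault(user, []).append(avatar)
    def unregister(self, user, avatar): self.registry.get(user, []).remove(avatar)
    def read(self, image_bytes): return {"size": len(image_bytes)}

api = AvatarProductionAPI()
api.open("edge_detection")
api.register("user1", {"face": "avatar-001"})
print(api.status("edge_detection"), api.list("user1"))
```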
FIGS. 11A through 11I are illustrations of in-screen processes used for the automatic production of avatar faces based on a user's images using edge detection via Sobel operation in the avatar production method in accordance with one preferred embodiment of the present invention. Referring to FIG. 11A, a user connects to an avatar production server via a network and transmits the image he chooses to use for producing an avatar. The transmitted image is stored in a database allocated to the user. The user can identify, modify or delete the image through the avatar production server. The user can store diverse images and can produce a plurality of avatars. FIG. 11B illustrates the process of isolating a facial region to be used for producing an avatar based on a user's image. Data processing and edge detecting operations can be performed most effectively by restricting the avatar production region and then producing an avatar within the restricted region, because larger regions require too much data processing. The edge detection process for each avatar element can proceed in the restricted region once the avatar production region has been restricted. The avatar elements may comprise facial appearance, eyes, ears, eyebrows, nose, mouth and hairstyle. The sequence of producing each element can be altered according to a user's request.
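For concreteness, a sketch of this edge detection pipeline is given below: the restricted facial region is smoothed with the 3x3 Mean mask, differentiated with the x-directional and y-directional Sobel masks recited later in the claims, and the two differentials are combined into a Sobel magnitude that is thresholded into an edge map. The use of SciPy's convolve2d and the threshold value are illustrative assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

MEAN_MASK = np.ones((3, 3)) / 9.0  # series of nine 1s, normalized by its size
SOBEL_X = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])
SOBEL_Y = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])

def sobel_edges(region: np.ndarray, threshold: float = 128.0) -> np.ndarray:
    """Return a binary edge map for one restricted facial region."""
    # Reduce noise with the Mean mask before differentiating.
    smoothed = convolve2d(region, MEAN_MASK, mode="same", boundary="symm")
    # x- and y-directional differential data.
    gx = convolve2d(smoothed, SOBEL_X, mode="same", boundary="symm")
    gy = convolve2d(smoothed, SOBEL_Y, mode="same", boundary="symm")
    # Sobel differential data: square root of the sum of the squares.
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    return magnitude > threshold

edges = sobel_edges(np.random.rand(120, 100) * 255)  # stand-in for a face region
```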
FIG. 11C illustrates the process of detecting the edge of a user's facial appearance and displaying the edge on a screen within a restricted region. FIG. 11D illustrates the process of detecting and displaying the edges of the ears based on a user's image. Also, FIGS. 11E, 11F, 11G, 11H and 11I illustrate the processes of displaying the eyes, eyebrows, nose, mouth and hairstyle respectively based on a user's image. When the edges of the avatar elements detected from a user's image by the Sobel operation are displayed one above another, an avatar face corresponding to the user's image is automatically produced.
Although a user can produce an avatar that precisely matches the user's actual appearance without altering or moving elements, such alteration will provide the means by which to produce an avatar that emphasizes those characteristics chosen by the user.
FIGS. 12A through 12N are illustrations of in-screen processes in the avatar production method that alter each avatar element and produce an avatar using the altered element in accordance with one preferred embodiment of the present invention.
The avatar production method involves a process by which the size or features of avatar elements are altered according to a user's request. Thus, for efficient data processing it is preferable to provide basic elements with ordinary features and enable a user to alter them, instead of relying on precisely accurate edge detection. Hereinafter, a process used for altering the basic elements of an avatar will be described.
FIG. 12A illustrates the process of displaying an image selected by a user in order to produce an avatar based on that image transmitted from the user. It is easier, and therefore preferable, to use a frontal photograph of the user for avatar production rather than a profile or rear view.
When a user selects the image from which the avatar will be produced, a region selection quadrangle used to restrict the facial region within the selected image is displayed as illustrated in FIG. 12B. When the region selection quadrangle is displayed, a user can choose a restricted region from which to produce the avatar by enlarging or reducing the quadrangle either vertically, horizontally or diagonally.
FIG. 12D illustrates a process of recognizing the eye parts using basic eye images within the restricted image region, based on an image provided by a user. In the present invention, an avatar can be produced using the locations of the eyes indicated by the user. A user can move an eye image to match the location of the eyes in a photograph, and therefore the eye images can be used as the basis for constructing an avatar with a whole face. The space between the two eyes can be measured and designated by the user. Thus, a favored facial appearance, and the locations of the nose or eyes, can be assigned based on the space between the eyes. Therefore, the user can move two basic eye images as illustrated in FIG. 12E to match the eyes in the user's facial image.
FIG. 12F illustrates an example wherein the user's facial appearance is produced using basic eye images that match the location of the user's eyes. The outline of the face, which determines the size and features of the whole face, is derived from the space between the eyes in the basic eye images.
The user can construct a facial appearance matching his image by controlling alteration points located along the basic facial outline. FIG. 12F illustrates an example wherein 8 alteration points are provided along a basic facial outline with an elliptical figure, and the whole facial appearance can be altered by controlling the alteration points according to a user's request. The number of alteration points can be increased or decreased according to the desired accuracy of the alteration.
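A minimal sketch of producing such a basic outline, derived from the space between the two placed eye images as described with reference to FIG. 12F, is given below. The proportionality constants and the vertical offset are assumptions for illustration, not values taken from the embodiment.

```python
# Derive an elliptical face outline (center and radii) from the eye spacing.
# width_ratio, height_ratio and the 0.4 offset are illustrative assumptions.

def face_outline_from_eyes(left_eye, right_eye, width_ratio=2.2, height_ratio=3.0):
    spacing = right_eye[0] - left_eye[0]
    cx = (left_eye[0] + right_eye[0]) / 2.0
    # The face center sits somewhat below the eye line in screen coordinates.
    cy = (left_eye[1] + right_eye[1]) / 2.0 + 0.4 * spacing
    return (cx, cy), (width_ratio * spacing / 2.0, height_ratio * spacing / 2.0)

center, radii = face_outline_from_eyes(left_eye=(80, 120), right_eye=(130, 120))
print(center, radii)
```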
FIGS. 12G through 12K illustrate an example wherein the ears, eyes, eyebrows, nose and mouth are generated in accordance with a facial appearance altered and controlled by a user. The user can generate each element with a desired appearance well matched to his image or personality by changing the basic outlines of each element.
Each element is detected from the whole facial feature or the user's image, and can be made thicker, longer, thinner or narrower, etc. Because each element has, for example, 8 alteration points, the user can freely alter the elements by dragging any of the points in a vertical, horizontal or diagonal direction. For the sake of speed and efficient data processing in avatar production, it is preferable that a user select, from a plurality of basic facial outlines, one that closely approximates the desired facial outline. In other words, avatars with diverse features can be produced when a user chooses outlines matched to his facial image from a plurality of basic elements such as facial appearance, eyes, eyebrows, ears, nose and mouth.
FIG. 12L illustrates an example of generating a hairstyle matched to a user's image. Generally, facial features such as the eyes, nose, mouth and ears, as well as a person's overall facial appearance, will remain consistent over time barring disfigurement or plastic surgery. Thus, diverse facial features can be generated using only a few basic outlines. However, because a user's hairstyle may have diverse features and may be changed on a regular basis, this element of an avatar may require a greater number of basic outlines. This is particularly important because many users consider their hairstyle to be a key expression of their personality, and users' avatars can thus be distinguished by this element. Therefore, it is preferable to provide basic outlines via the edge detection process for elements like hairstyle that have an important bearing on the overall character of the avatar.
FIG. 12M illustrates an example of generating a full-facial avatar by combining each element of the face, for example the facial appearance, eyes, ears, eyebrows, nose, mouth and hairstyle. For the sake of the user's visual pleasure and ease of management, it is preferable to construct an avatar with simple features that express the user's personality, rather than features as accurate and detailed as a photograph. Thus, transparency can be used to control the plane effect of the facial avatar.
FIG. 12N illustrates an example of storing a facial avatar produced by the above method in a database in the avatar production server. When the facial avatar is stored in the database, users can access it at any time by connecting to the avatar production server and can use it in conjunction with various other elements. For example, a user can generate a full-length avatar by combining upper body elements and lower body elements in diverse poses with his avatar. Also, a user can develop fashions well matched to him by incorporating various clothes or accessories.
FIGS. 13A through 13C are screen illustrations showing an example wherein clothes are incorporated into a full-length avatar produced from a user's image via the avatar production method in accordance with one preferred embodiment of the present invention.
FIG. 13A illustrates an example wherein an upper body element and a lower body element are combined with a user's facial avatar. FIGS. 13B and 13C illustrate examples wherein a jacket and a pair of trousers selected by the user are incorporated into the avatar. As depicted in FIGS. 13A through 13C, a user can choose fashions matched to him by producing an avatar from his facial image and incorporating jackets or trousers into diverse facial avatars. Moreover, although not depicted in these figures, various accessories such as caps, glasses, watches and shoes can be added to the avatar. Eventually, extensive services intended to develop private characteristics of an avatar can be provided to users by combining the avatar production service with advertising from clothing and accessory merchants.
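By way of illustration, this layered construction can be sketched as alpha compositing of same-sized RGBA layers, with the facial avatar, body elements and clothes stacked bottom to top and a per-layer transparency controlling the flat, simplified effect mentioned earlier. The layer names and alpha value are assumptions.

```python
import numpy as np

def composite(layers):
    """Alpha-blend same-sized RGBA layers, bottom layer first (simplified over)."""
    out = np.zeros_like(layers[0], dtype=float)
    for layer in layers:
        alpha = layer[..., 3:4]
        out[..., :3] = alpha * layer[..., :3] + (1 - alpha) * out[..., :3]
        out[..., 3:4] = np.maximum(out[..., 3:4], alpha)
    return out

h, w = 240, 120
body = np.zeros((h, w, 4))
face = np.zeros((h, w, 4))
jacket = np.zeros((h, w, 4))
face[..., 3] = 1.0      # fully opaque facial avatar
jacket[..., 3] = 0.8    # slightly transparent clothing layer
avatar = composite([body, face, jacket])  # lower layers first
```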
On the other hand, users can use their unique avatars as a web agent for managing information and transmitting private data to other users on a network. For this process, users can download their avatar produced by the avatar production server as well as the web agent module that uses the avatar.
FIG. 14 illustrates a schematic diagram of a web agent system using avatars in accordance with one preferred embodiment of the present invention. Referring to FIG. 14, the web agent system 400 of the present invention may include an avatar control module 402, developed using programming languages such as C/C++, Visual Basic, Java or JavaScript and stored in a database, in order to control the movement of avatars. A controller 406 interprets the language from the avatar control module 402 through an interface 404 and outputs the interpreted result via avatar motion or an audio signal. The controller 406 controls the movement of an avatar according to a user's commands or at pre-specified time intervals via the motion controller 410. A display 416 indicates the movement of avatars on a screen via commands from the motion controller 410.
Moreover, a Text To Speech (TTS) engine 408 can transform text provided by the controller 406 into speech, and output the speech to an output device such as a speaker 414, when a user inputs text with an input device such as a keyboard. If mouth appearances of the avatar are assigned in accordance with the input words, the avatar can be given the appearance of actually speaking the textual data, provided that the movements of the mouth elements are controlled to precisely coincide with the audio production of each word in the text.
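A simplified sketch of this synchronization is shown below: each word of the input text is mapped to a mouth appearance and displayed for roughly the duration of the spoken word. The actual TTS engine is abstracted behind a callable, and the word-to-mouth mapping and timing are illustrative assumptions.

```python
import time

# Crude word-to-mouth mapping; a real system would use per-phoneme visemes.
MOUTH_SHAPES = {"a": "open", "o": "round", "m": "closed"}

def mouth_for_word(word: str) -> str:
    for letter, shape in MOUTH_SHAPES.items():
        if letter in word:
            return shape
    return "neutral"

def speak_with_avatar(text: str, speak, set_mouth, seconds_per_word=0.4):
    """Drive the avatar's mouth in step with the audio of each word."""
    for word in text.split():
        set_mouth(mouth_for_word(word))
        speak(word)                   # hand one word to the TTS engine
        time.sleep(seconds_per_word)  # crude stand-in for the audio duration
    set_mouth("closed")

speak_with_avatar("hello avatar", speak=print,
                  set_mouth=lambda shape: print("mouth:", shape))
```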
On the other hand, when a voice recognizer 412 recognizes voices provided by a voice input device such as a microphone 418 and then transforms the voices into textual data, the controller 406 can control the movement of the avatar, creating the appearance that the avatar is reacting to the input voice. Thus, an effect can be obtained as if the avatar operates according to the input voices.
Moreover, users can take advantage of the private schedule management function to manage other users' information and deliver messages with avatars to other users.
FIG. 15 illustrates an inner schematic diagram of the web agent system including an avatar in accordance with one preferred embodiment of the present invention.
Referring to FIG. 15, an OS used in the web agent system 500 provides high level commands to the API 492 and controls the operation of each application module, thus controlling the movements of avatars as well as the input/output of data.
The application modules of the web agent 560 include a user information management module 562 for managing the user's own information as well as that of other avatar users, a message transmitting/receiving module 564 for delivering messages between users and a schedule management module 566 for managing individual schedules and task programs. The user information management module
562 is for managing the personal information such as name, birthday, e-mail address and phone number of the user himself or other avatar users.
FIG. 16A is a screen illustration of the user's information management function in the avatar web agent system.
Referring to FIG. 16A, a user can manage individual information using a hot key or by "right-clicking" certain areas on the avatar with his mouse. A user can efficiently manage personal information by registering and storing the personal information about himself and other users using the personal information management function of an avatar located on the background without the use of additional application programs.
A message transmitting/receiving module 564 is used for transmitting messages to other users, who may be registered through the personal information management function or may use an avatar, and is also used for identifying messages received from other users. A user can transmit messages together with his avatar to other users using the message transmission function of the web agent systems, and can receive messages sent by other users. Therefore, because a message recipient can identify transmitted messages as well as the avatar of a message sender, more efficient and effective delivery of messages can be accomplished. FIG. 16B is a screen illustration showing an example wherein the web agent function is used to transmit a message complete with an avatar to another user.
Referring to FIG. 16B, a sender can select the recipient or recipients who will receive the message using the message delivery function of the web agent, and can transmit a message to the selected recipient(s) after entering the contents of the message in a message input window. The web agent system detects the e-mail address of the recipient and transmits the message via a network. Thus, the recipient receives the message transmitted from the sender and can identify the transmitted message in real time by displaying said message on a screen. The recipient can identify who the sender is because the sender transmits his avatar with the message, as illustrated in FIG. 16B. The schedule management module 566 has a function similar to the user information management module 562 and is used for managing a user's schedule, such as the user's birthday, memorial days, holidays or work program.
FIG. 16C is a screen illustration showing an example wherein a user's task program is managed using the web agent functions of the avatar. A user records and stores in advance the contents to be remembered for each day using a function of the schedule management system. The web agent watches the date maintained by the Complementary Metal Oxide Semiconductor (CMOS) clock in the user's computer system, and displays the task program when the previously registered dates arrive. Thus, users can effectively manage diverse task programs without the aid of additional schedule management applications.
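A minimal sketch of this watching behavior follows: registered dates are compared against the system date, which the computer keeps in CMOS-backed hardware, and any matching task is surfaced through the avatar. The sample entry and the polling interval are assumptions.

```python
import datetime
import time

# Previously registered schedule entries, keyed by date.
schedule = {datetime.date(2001, 12, 25): "Holiday greeting task"}

def check_schedule(today=None):
    """Display a task program when a registered date arrives."""
    today = today or datetime.date.today()
    task = schedule.get(today)
    if task:
        print(f"Avatar reminder for {today}: {task}")

# A real agent would poll periodically, e.g. once a minute:
# while True:
#     check_schedule()
#     time.sleep(60)
check_schedule(datetime.date(2001, 12, 25))
```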
On the other hand, the operations and functions of the API 492 used for controlling the application modules 560, and of the high level command processor 502 and application module controller 550 used for controlling each application module 560, are equivalent to those illustrated in FIG. 7.
The user information management module mapper 551, which receives high level commands transmitted from the high level command processor 502, converts the high level commands into device level commands in order to process said commands in the user information management module 562, and then provides the device level commands to the user information management module 562 via the user information management module interface 552. Also, the message transmitting/receiving module mapper 553 converts high level commands received from the high level command processor 502 into device level commands and provides said device level commands to the message transmitting/receiving module 564 via the message transmitting/receiving module interface 554. Also, the schedule management module mapper 555 converts high level commands received from the high level command processor 502 into device level commands to be identified by the schedule management module 566 and then provides said device level commands to the schedule management module 566 via the schedule management module interface 556. Eventually, users can effectively deal with individual information, task programs and message delivery to other users on a network by combining various functions of prior web applications with the avatar web agent service.
In particular, an avatar produced based on a user's image data can be displayed with various features on a user's monitor according to a user's request. For example, in the idle state, when the avatar is receiving no input from the user, the avatar counts regular time intervals until the user inputs a command. If the user does not provide any command within a specified number of counts, the avatar can change its appearance automatically at certain time intervals, giving it an animated quality. Also, when a user inputs a command, the location and features of the avatar can be altered according to the user's request. In particular, various animated scenarios can be achieved by assigning various alterations to various elements at various regular time intervals.
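By way of illustration, this idle behavior can be sketched as a counter over fixed time intervals that triggers an automatic change of appearance once a specified number of counts passes without a user command. The interval, count threshold and pose names are assumptions.

```python
import itertools
import time

IDLE_POSES = itertools.cycle(["blink", "stretch", "look around"])

def idle_loop(has_user_command, change_appearance,
              interval=1.0, counts_before_change=5):
    """Count fixed intervals while idle; animate after enough counts pass."""
    counter = 0
    while not has_user_command():
        time.sleep(interval)
        counter += 1
        if counter >= counts_before_change:
            change_appearance(next(IDLE_POSES))  # animate after idle counts
            counter = 0

# Example: animate twice, then stop when a "command" finally arrives.
ticks = iter([False] * 11 + [True])
idle_loop(lambda: next(ticks),
          lambda pose: print("avatar pose:", pose), interval=0.0)
```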
Moreover, the avatar displayed on a screen can be hidden or can be incorporated into a "help" function. Also, the avatar can be animated by using the TTS function to move the mouth elements in unison with the words.
On the other hand, a user can store the avatar in a wireless terminal such as a cellular phone, PDA or IMT2000. Thus, users can transmit their avatars together with text messages or voice messages. Also, other various wireless communication services can be provided to users through the use of avatars.
Industrial Applicability
As described above, based upon the avatar production method of the present invention, avatar elements can be detected from a user's image via the Sobel operation and the user can alter each detected element. Because the user can develop his own avatar, including features that express his personal characteristics, the various requirements of users can be satisfied.
Moreover, the avatar production method of the present invention places less load on the system and achieves more effective data processing than client-based services, because the image processing module and the avatar generation module are stored in the avatar production server and because it provides a server-based service for producing avatars according to requests from users who are connected to the avatar production server via a network.
Moreover, the avatar production method of the present invention can achieve more effective avatar production and data processing by combining the process of detecting avatar elements (using the Sobel operation) with the process of altering pre-existing basic elements.
Moreover, the agent system including the avatar of the present invention can manage individual information and schedules using an avatar image produced according to a user's request and can transmit messages together with the avatar image to other users. Although the method of producing an avatar and the agent system of the present invention have been described in detail in terms of various embodiments, it is not intended that the present invention be limited to these embodiments. Various modifications and changes within the spirit of the invention will be apparent to those skilled in the art.

Claims

1. A method of producing an avatar using image data via a network comprising the steps of: receiving image data based on a user's face from the user via a network; detecting elements sequentially used for generating an avatar from the image data using Sobel operation and then displaying the detected elements on a screen; altering the shapes and sizes of the elements displayed on the screen according to the user's request; and generating an avatar matched to the image data by combining the altered elements.
2. The method in claim 1, wherein the image data has at least one format selected from a group consisting of jpg, gif and png.
3. The method in claim 1, wherein the elements for generating an avatar comprise at least one selected from a group consisting of eyes, eyebrows, nose, mouth, ears, facial appearance and hairstyle.
4. The method in claim 1, wherein said step of detecting elements sequentially for generating an avatar comprises the steps of: restricting the image data for producing an avatar to a given area according to the user's request; reducing noise of the image data restricted to the given area; generating x-directional differential data and y-directional differential data from the noise-reduced image data using an x-directional Sobel mask and a y-directional Sobel mask; generating Sobel differential data using the x-directional differential data and y-directional differential data; detecting a plurality of candidate regions for each avatar element using the Sobel differential data; labeling each detected candidate region; verifying the sizes and shapes of the plurality of labeled candidate regions; and detecting elements corresponding to a pre-specified basis from among the plurality of candidate regions using the results of the size and shape verifications.
5. The method in claim 4, wherein the noise of the image data restricted to a given area is reduced by calculating the sum of all pixels in a specified region by adding together all products between each component of the matrix of the image data restricted to the given area and each component of Mean mask corresponding to each component of the image data, and then dividing the sum of all pixels in the specified region by the component size of the Mean mask.
6. The method in claim 5, wherein the Mean mask comprises a series of 1, 1, 1, 1, 1, 1, 1, 1, 1.
7. The method in claim 4, wherein the x-directional differential data is generated by calculating the sum of all pixels in a specified region by adding together all products between each component of the matrix of the image data without noise and each component of x-directional Sobel mask corresponding to each component of the image data, and then dividing the sum of all pixels in the specified region by the component size of the x-directional Sobel mask.
8. The method in claim 7, wherein the x-directional Sobel mask comprises a series of -1, -2, -1, 0, 0, 0, 1, 2, 1.
9. The method in claim 4, wherein the y-directional differential data is generated by calculating the sum of all pixels in a specified region by adding together all products between each component of the matrix of the image data without noise and each component of the y-directional Sobel mask corresponding to each component of the image data, and then dividing the sum of all pixels in the specified region by the component size of the y-directional Sobel mask.
10. The method in claim 9, wherein the y-directional Sobel mask comprises a series of -1, 0, 1, -2, 0, 2, -1, 0, 1.
11. The method in claim 4, wherein the Sobel differential data is generated by calculating the square root of the sum of the square of the x-directional differential data and the square of the y-directional differential data.
12. The method in claim 4, wherein said step of verifying size comprises the steps of: calculating the number of pixels and area within each candidate region; and determining whether each candidate region corresponds to avatar elements by comparing the number of pixels within each region with a number of reference pixels and comparing the number of areas within each region with a number of reference areas.
13. The method in claim 12, wherein the number of reference pixels falls between 50 and 300 in the case of detecting eyes.
14. The method in claim 12, wherein the number of reference areas falls between 200 and 1100 in the case of detecting eyes.
15. The method in claim 4, wherein said step of verifying shape comprises the steps of: calculating the width and length of each candidate region; and determining whether each candidate region corresponds to avatar elements by comparing the ratio of the width to the length with a reference ratio.
16. The method in claim 15, wherein the reference ratio falls between 1.0 and 3.2 in the case of detecting eyes.
17. The method in claim 4 further comprising a grouping step for detecting pair regions from among a plurality of candidate regions in the case of detecting pair elements.
18. The method in claim 1, wherein said step of altering the shapes of the elements comprises the steps of: providing a plurality of alteration points located on the outlines of elements that are detected by the Sobel operation; and altering the outlines of elements using the alteration points according to a user's request while supporting the connection between adjacent alteration points.
19. The method in claim 1 further comprising the step of altering the transparency of the avatar according to a user's request.
20. The method in claim 1 further comprising the step of incorporating clothes or accessories into the avatar according to a user's request.
21. A method of producing an avatar using image data on a network comprising the steps of: receiving image data based on a user's face from the user via a network; detecting eyes from the image data and then displaying the detected eyes on a screen; providing pre-existing basic elements sequentially related to each element for producing an avatar according to the eye images displayed on the screen; altering the shapes and sizes of the basic elements according to the user's request; and generating an avatar matched to the image data by combining the altered basic elements.
22. The method in claim 21 further comprising the step of moving the detected eye image to coincide with the eye image in the user's image data according to the user's request.
23. The method in claims 21 or 22, wherein the basic elements are those elements that most closely resemble the user's image data statistically from among a plurality of avatar elements in accordance with the location of the eye image, which is detected from the image data and moved to coincide with the eye image in the user's image data.
24. A method of producing an avatar using image data on a network comprising the steps of: receiving image data based on a user's face from the user via a network; providing pre-existing basic elements sequentially related to each element to be used for producing the avatar in the image data displayed on a screen; altering the shapes and sizes of the basic elements according to the user's request; and generating an avatar matched to the image data by combining the altered basic elements.
25. A system of producing an avatar using image data on a network comprising: an image processor for receiving image data based on a user's face from the user via a network and then displaying the image data on a screen; an element detector for detecting elements sequentially in order to generate an avatar from the image data using Sobel operation; an element controller for altering the shapes and sizes of the elements according to the user's request; and an avatar generator for generating an avatar matched to the image data by combining the altered elements.
26. The system in claim 25, wherein said element detector comprises: a restrictor for restricting the image to a given area to be used for producing an avatar; a noise reducer for reducing noise of the image data within the restricted area using a Mean mask; a directional differential data generator for generating x-directional differential data and y-directional differential data taken from the noise-reduced image data using an x-directional Sobel mask and a y-directional Sobel mask; a differential data generator for generating Sobel differential data using the x-directional differential data and y-directional differential data; a candidate region detector for detecting a plurality of candidate regions for each avatar element using the Sobel differential data; a labeler for labeling each detected candidate region; a size verifier for verifying the size of each labeled candidate region; a shape verifier for verifying the shape of each labeled candidate region; and an element detector for detecting elements corresponding to a pre-specified basis from among the plurality of candidate regions using the results of the size and shape verifications.
27. The system in claim 26, wherein said size verifier comprises: a size calculator for calculating the number of pixels and areas within each candidate region; and a size confirmer for determining whether each candidate region corresponds to avatar elements by comparing the number of pixels with a number of reference pixels and comparing the number of areas with a number of reference areas.
28. The system in claim 26, wherein said shape verifier comprises: a shape calculator for calculating the width and length of each candidate region; and a shape confirmer for determining whether each candidate region corresponds to avatar elements by comparing the ratio of width to length with a reference ratio.
29. The system in claim 25 further comprising a group establisher for detecting pair regions from among a plurality of candidate regions in the case of detecting pair elements.
30. The system in claim 25, wherein said element controller comprises a basic point controller for providing a plurality of basic points located along the outline of elements that are detected by the Sobel operation, and for altering each outline of an element using the basic points according to a user's request while supporting the connection between adjacent basic points.
31. The system in claim 25 further comprising a transparency controller for altering the transparency of the avatar according to a user's request.
32. The system in claim 25 further comprising a secondary element controller used for incorporating clothes or accessories into the avatar according to a user's request.
33. A system of producing an avatar using image data on a network comprising: an image processor for receiving image data based on a user's face and sent by the user via a network and then displaying the image data on a screen; a basic eyes detector for detecting eyes from the image data and then displaying the detected eyes on a screen; a basic element controller for providing pre-existing basic elements sequentially related to each element and designed to produce an avatar according to the eye image displayed on the screen; an element alterant for altering the shapes of the basic elements according to the user's request; and an avatar generator for generating a user-controllable avatar matched to the user's image data by combining the altered basic elements.
34. The system in claim 33 further comprising a basic eyes controller for moving the detected eye image to coincide with the eyes in the user's image data according to a user's request.
35. A system of producing an avatar using image data on a network comprising: an image processor for receiving image data based on a user's face and sent by the user via a network and then displaying the image data on a screen; a basic element controller for providing pre-existing basic elements sequentially, which are related to each element and designed to produce an avatar based on the image data displayed on a screen; an element alterant for altering the shapes and sizes of the basic elements according to the user's request; and an avatar generator for generating an avatar matched to the user's image data by combining the altered basic elements.
36. A method of producing an avatar using image data on a network comprising the steps of: receiving image data based on a user's face from the user via a network; restricting the image data to a given area for producing an avatar according to the user's selection; forming the restricted image data to a pre-specified size for producing an avatar; detecting the edge of each element in order to produce an avatar of a specified size, wherein the elements used for producing the avatar comprise at least one selected from a group consisting of the eyes, nose, mouth, ears, facial appearance, hairstyle and eyebrows; detecting elements of an avatar by verifying the size or shape of the detected edge of each element; changing the shapes and sizes of the detected elements of the avatar according to the user's request; and generating an avatar matched to the user's image data by combining the altered elements of the avatar.
37. An agent system using an avatar produced by the method of at least one selected from the group consisting of claims 1, 21, 24 and 36, comprising: a database for storing avatar images received from an avatar production system; an interface for transforming the avatar images to be identified in a user's computer system; a controller for providing the transformed avatar images supplied from said interface to a corresponding output device according to a user's request; and a display unit for displaying the avatar image supplied from said interface on a user's computer screen to be identified by the user.
38. The agent system in claim 37 further comprising a voice recognition unit for transforming a user's voice received from the voice input device into data to be identified in said controller.
39. The agent system in claim 37 further comprising: a text/voice transformer for transforming textual data received from a text input device into voice data; and a voice output device for outputting the voice data transformed by said text/voice transformer to a user.
40. The agent system in claims 37 or 39 further comprising an avatar motion controller for detecting avatar images with features corresponding to the voice data supplied from said voice output device from among various avatar data stored in said database and then providing the avatar images on said display.
41. The agent system in claim 37 further comprising a user information manager for managing individual information about an avatar user as well as other users.
42. The agent system in claim 37 further comprising a message processor for delivering message data between users with an avatar image.
43. The agent system in claim 42, wherein said message processor further comprises an image processor for transmitting a sender's avatar image together with the transmitted message data.
44. The agent system in claim 37 further comprising a schedule manager for managing a user's task program.
45. An agent system using an avatar comprising: a database for storing avatar images produced according to a user's request; an interface for transforming the avatar images to be identified in a user's computer system; a controller for providing the transformed avatar images supplied from said interface to a corresponding output device according to a user's request; and a display unit for displaying the avatar images supplied from said interface on a user's computer screen in order to be identified by the user.
46. A computer-readable medium having stored thereon computer- executable instructions and realized in concrete by a program of instructions, which could be executable by a digital processing unit, for producing an avatar using image data on a network, said avatar production method comprising the steps of: receiving image data based on a user's face from the user via a network; detecting sequentially elements used for generating an avatar from the image data using Sobel operation and then displaying the detected elements on a screen; altering the shapes and sizes of the elements displayed on a screen according to the user's request; and generating an avatar matched to the image data by combining the altered elements.
47. A computer-readable medium having stored thereon computer- executable instructions and realized in concrete by a program of instructions, which could be executable by a digital processing unit, for producing an avatar using image data on a network, said avatar production method comprising the steps of: receiving image data based on a user's face from the user via a network; detecting eyes from the image data and then displaying the detected eyes on a screen; providing pre-existing basic elements sequentially related to each element for producing an avatar according to the eye image displayed on the screen; altering the shapes and sizes of the basic elements according to the user's request; and generating an avatar matched to the image data by combining the altered basic elements.
48. A computer-readable medium having stored thereon computer- executable instructions and realized in concrete by a program of instructions, which could be executable by a digital processing unit, for producing an avatar using image data on a network, said avatar production method comprising the steps of: receiving image data based on a user's face from the user via a network; providing pre-existing basic elements sequentially, which are related to each element used for producing an avatar, on the image data displayed on a screen; altering the shapes and sizes of the basic elements according to the user's request; and generating an avatar matched to the image data by combining the altered basic elements.
49. A computer-readable medium having stored thereon computer- executable instructions and realized in concrete by a program of instructions, which could be executable by a digital processing unit, for producing an avatar using image data on a network, said avatar production method comprising the steps of: receiving image data based on a user's face from the user via a network; restricting the image data to a given area for producing an avatar according to the user's selection; forming the restricted image data to a pre-specified size used for producing an avatar; detecting edges of elements used for producing an avatar according to a specified size, wherein, the elements for producing an avatar comprises at least one selected from a group consisting of the eyes, nose, mouth, ears, facial appearance, hairstyle and eyebrows; detecting elements of an avatar by verifying the size or shape of the detected edge of each element; altering the shapes and sizes of the detected elements of an avatar according to the user's request; and generating an avatar matched to the image data by combining the altered elements of the avatar.
PCT/KR2001/001270 | 2001-05-26 | 2001-07-26 | Method for producing avatar using image data and agent system with the avatar | WO2002097732A1 (en)

Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
KR2001/29205 | 2001-05-26
KR1020010029205A | KR20010082779A (en) | 2001-05-26 | 2001-05-26 | Method for producing avatar using image data and agent system with the avatar

Publications (1)

Publication Number | Publication Date
WO2002097732A1 (en) | 2002-12-05

Family

ID=19709996

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
PCT/KR2001/001270 | WO2002097732A1 (en) | 2001-05-26 | 2001-07-26 | Method for producing avatar using image data and agent system with the avatar

Country Status (2)

Country | Link
KR (1) | KR20010082779A (en)
WO (1) | WO2002097732A1 (en)


Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
KR20010114196A (en)* | 2001-11-30 | 2001-12-29 | 오광호 | Avatar fabrication system and method by online
CN100345164C | 2004-05-14 | 2007-10-24 | 腾讯科技(深圳)有限公司 | Method for synthesizing dynamic virtual images
KR101538144B1 (en)* | 2012-12-12 | 2015-07-22 | (주)원더피플 | Method, terminal and server for presenting avatar element by using multi layer
KR20190037218A | 2019-03-28 | 2019-04-05 | 에스케이플래닛 주식회사 | Character Support System And Operation Method thereof
KR102534788B1 (en)* | 2021-11-02 | 2023-05-26 | 주식회사 에이아이파크 | Video service device
WO2024053848A1 (en)* | 2022-09-06 | 2024-03-14 | Samsung Electronics Co., Ltd. | A method and a system for generating an imaginary avatar of an object


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
JPH10319831A (en)* | 1997-05-21 | 1998-12-04 | Sony Corp | Client device, image display control method, shared virtual space providing device and method and providing medium
US6215498B1 (en)* | 1998-09-10 | 2001-04-10 | Lionhearth Technologies, Inc. | Virtual command post
JP2001016563A (en)* | 1999-04-16 | 2001-01-19 | Nippon Telegr & Teleph Corp <Ntt> | Three-dimensional shared virtual space display method, three-dimensional shared virtual space communication system and method, virtual conference system, and recording medium recording user terminal program therefor
KR20010044757A (en)* | 2001-03-22 | 2001-06-05 | 안종선 | User-based agent, a chatting method using it and system thereof

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
EP2016562A4 (en)* | 2006-05-07 | 2010-01-06 | Sony Computer Entertainment Inc | Method for providing affective characteristics to computer generated avatar during gameplay
US8766983B2 | 2006-05-07 | 2014-07-01 | Sony Computer Entertainment Inc. | Methods and systems for processing an interchange of real time effects during video communication
WO2017219123A1 (en)* | 2016-06-21 | 2017-12-28 | Robertson John G | System and method for automatically generating a facial remediation design and application protocol to address observable facial deviations

Also Published As

Publication number | Publication date
KR20010082779A (en) | 2001-08-31

Similar Documents

Publication | Title
US11830118B2 (en) | Virtual clothing try-on
US12380611B2 (en) | Image generation using surface-based neural synthesis
US12125147B2 (en) | Face animation synthesis
KR20230003555A (en) | Texture-based pose validation
US12387447B2 (en) | True size eyewear in real time
US12067804B2 (en) | True size eyewear experience in real time
US12211166B2 (en) | Generating ground truths for machine learning
WO2024086534A1 (en) | Stylizing a whole-body of a person
US12079927B2 (en) | Light estimation using neural networks
WO2002097732A1 (en) | Method for producing avatar using image data and agent system with the avatar
US12374036B2 (en) | Single image three-dimensional hair reconstruction
US20250322605A1 (en) | Single image three-dimensional hair reconstruction
US20240371085A1 (en) | Light estimation using neural networks

Legal Events

Code | Title | Description

AK | Designated states | Kind code of ref document: A1; Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL | Designated countries for regional patents | Kind code of ref document: A1; Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 | Ep: the epo has been informed by wipo that ep was designated in this application

DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)

REG | Reference to national code | Ref country code: DE; Ref legal event code: 8642

32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 69(1) EPC, F1205A DATED 08.03.04

122 | Ep: pct application non-entry in european phase

NENP | Non-entry into the national phase | Ref country code: JP

