USRE44737E1 - Three-dimensional position calibration of audio sensors and actuators on a distributed computing platform - Google Patents

Three-dimensional position calibration of audio sensors and actuators on a distributed computing platform

Info

Publication number
USRE44737E1
USRE44737E1
Authority
US
United States
Prior art keywords
computing device
actuator
sensor
acoustic signal
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
US12/110,216
Inventor
Vikas C. Raykar
Igor Kozintsev
Rainer Lienhart
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Marvell International Ltd
Cavium International
Marvell Asia Pte Ltd
Original Assignee
Marvell World Trade Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Marvell World Trade Ltd
Priority to US12/110,216
Application granted
Publication of USRE44737E1
Assigned to MARVELL INTERNATIONAL LTD. (Assignment of assignors interest; see document for details.) Assignors: MARVELL WORLD TRADE LTD.
Assigned to CAVIUM INTERNATIONAL (Assignment of assignors interest; see document for details.) Assignors: MARVELL INTERNATIONAL LTD.
Assigned to MARVELL ASIA PTE, LTD. (Assignment of assignors interest; see document for details.) Assignors: CAVIUM INTERNATIONAL
Anticipated expiration
Legal status: Expired - Lifetime

Abstract

A first computing device transmits a wireless signal to second and third computing devices, the signal requesting that an actuator of the second computing device generate an acoustic signal to be received by a sensor of the third computing device, wherein the actuator and sensor are unsynchronized. The first computing device computes, based on a time estimate for the acoustic signal to travel from the actuator of the second computing device to the sensor of the third computing device, a physical location of the actuator of the second computing device and of the sensor of the third computing device.

Description

TECHNICAL FIELD
Embodiments described herein relate to position calibration of audio sensors and actuators in a distributed computing platform.
BACKGROUND
Many emerging applications, such as multi-stream audio/video rendering, hands-free voice communication, object localization, and speech enhancement, use multiple sensors and actuators (such as multiple microphones/cameras and loudspeakers/displays, respectively). However, much of the current work has focused on setting up all the sensors and actuators on a single platform. Such a setup requires substantial dedicated hardware. For example, setting up a microphone array on a single general-purpose computer typically requires expensive multichannel sound cards and a central processing unit (CPU) with enough computational power to process all of the multiple streams.
Computing devices such as laptops, personal digital assistants (PDAs), tablets, cellular phones, and camcorders have become pervasive. These devices are equipped with audio-visual sensors (such as microphones and cameras) and actuators (such as loudspeakers and displays). The audio/video sensors on different devices can be used to form a distributed network of sensors. Such an ad-hoc network can capture different audio-visual scenes (events such as business meetings, weddings, or public events) in a distributed fashion and then use all of the multiple audio-visual streams for emerging applications. For example, one could imagine using the distributed microphone array formed by the laptops of participants during a meeting in place of expensive stand-alone speakerphones. Such a network of sensors can also be used to detect, identify, locate, and track stationary or moving sources and objects.
Implementing a distributed audio-visual I/O platform includes placing the sensors, actuators, and platforms into a common space coordinate system, which in turn includes determining the three-dimensional positions of the sensors and actuators.
DESCRIPTION OF DRAWINGS
FIG. 1 illustrates a schematic representation of a distributed computing platform in accordance with one embodiment.
FIG. 2 is a flow diagram describing the process of generating position calibration for audio sensors and actuators in accordance with one embodiment.
FIG. 3 illustrates a computation scheme to generate position coordinates.
DETAILED DESCRIPTION
Embodiments of a three-dimensional position calibration of audio sensors and actuators in a distributed computing platform are disclosed. In the following description, numerous specific details are set forth. However, it is understood that embodiments may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail in order not to obscure the understanding of this description.
Reference throughout this specification to “one embodiment” or “an embodiment” indicates that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
FIG. 1 illustrates a schematic representation of a distributed computing platform consisting of a set of General-Purpose Computers (GPCs) 102-106 (sometimes referred to as computing devices). GPC 102 is configured to be the master and performs the location estimation. The GPCs (102-106) shown in FIG. 1 may include a personal computer (PC), laptop, PDA, tablet PC, or other computing devices. In one embodiment, each GPC is equipped with audio sensors 108 (e.g., microphones), actuators 110 (e.g., loudspeakers), wireless communication capabilities 112, and cameras 114. As is explained in more detail below, the sensors and actuators of the multiple GPCs are used to estimate their respective physical locations.
For example, given a set of M acoustic sensors and S acoustic actuators in unknown locations, one embodiment estimates their respective three-dimensional coordinates. The acoustic actuators are excited using a predetermined calibration signal, such as a maximum length sequence or a chirp signal, and the time of arrival (TOA) is estimated for each actuator-sensor pair. In one embodiment, the TOA for a given microphone-speaker pair is defined as the time for the acoustic signal to travel from the speaker to the microphone. By measuring the TOA and knowing the speed of sound in the acoustic medium, the distance between each acoustic source and each acoustic sensor can be calculated, thereby determining the three-dimensional positions of the actuators and the sensors.
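The TOA-to-distance relationship described above can be sketched in a few lines. This is an illustration only, not text from the patent; the function name is hypothetical, and the 343 m/s figure assumes air at roughly room temperature, whereas the patent only requires that the speed of sound in the acoustic medium be known.

```python
# Sketch of the TOA-to-distance relationship described above.
# Assumes air at roughly 20 degrees C; any known medium speed works.
SPEED_OF_SOUND_M_PER_S = 343.0

def toa_to_distance(toa_seconds: float) -> float:
    """Distance between a speaker and a microphone, given the measured TOA."""
    return SPEED_OF_SOUND_M_PER_S * toa_seconds

# A TOA of 10 ms corresponds to roughly 3.43 m of separation.
```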
FIG. 2 is a flow diagram describing, in greater detail, the process of generating the three-dimensional position calibration of audio sensors and actuators in a distributed computing platform, according to one embodiment. The process described in the flow diagram of FIG. 2 references the GPCs of the distributed computing platform illustrated in FIG. 1.
In block 202, a first GPC 102, which may be considered the master GPC of the distributed platform, transmits a wireless signal to a surrounding set of GPCs in the distributed platform (the actual number of GPCs included in the distributed platform may vary based on implementation). The signal from the first GPC 102 includes a request that a specific actuator of one of the GPCs (e.g., second GPC 103) be excited to generate an acoustic signal to be received by the sensors of the surrounding GPCs (e.g., GPCs 102, 104-106). In one embodiment, the initial wireless signal from the master GPC 102 identifies the specific actuator 110 to be excited.
In response to the signal from the master GPC 102, in block 204 the second GPC 103 excites the actuator 110 to generate an acoustic signal. In one embodiment, the acoustic signal may be a maximum length sequence, a chirp signal, or another predetermined signal. In block 206, the second GPC 103 also transmits a first global time stamp to the other GPCs 104-106. In one embodiment, the global time stamp identifies when the second GPC 103 initiated the actuation of its actuator 110. In block 208, the sensors of the GPCs 102, 104-106 receive the acoustic signal generated by the second GPC 103.
In block 210, the time for the acoustic signal to travel from the actuator 110 of the second GPC 103 to the respective sensors (hereinafter referred to as the Time of Arrival (TOA)) is estimated. In one embodiment, the TOA for a given microphone-speaker pair is defined as the time taken by the acoustic signal to travel from the speaker to the microphone.
In one embodiment, the GPCs that receive the acoustic signal via their sensors proceed to estimate the respective TOAs. In one embodiment, there exists a common clock in the distributed platform so that GPCs 102-106 are able to determine the time of arrival of audio samples captured by the respective sensors. As a result, the TOA can be estimated as the difference between the first global time stamp issued by the second GPC 103 and the time at which the acoustic signal is received by a sensor.
Considering, however, that the sensors are distributed on different platforms, the audio streams among the different GPCs are typically not synchronized in time (e.g., the analog-to-digital and digital-to-analog converters of the actuators and sensors of the different GPCs are unsynchronized). As a result, the estimated TOA does not necessarily correspond to the actual TOA. In particular, the TOA of the acoustic signal may include an emission start time, which is defined as the time after which the sound is actually emitted from the speaker (e.g., actuator 110) once the command has been issued from the respective GPC (e.g., GPC 103). The actual emission start time is typically nonzero and can vary over time depending on the sound card and processor load of the respective GPC.
Therefore, to account for the variations in the emission start time, multiple alternatives may be used. For example, in one embodiment, if multiple audio input channels are available on the GPC exciting an actuator, then one of the output channels can be connected directly to one of the input channels, forming a loop-back. The source emission start time can then be estimated for a given speaker and transmitted globally to the other GPCs 102, 104-106 to more accurately determine the respective TOAs. Furthermore, in one embodiment, when the loop-back is used, the estimated emission start time is included in the global time stamp transmitted by the respective GPC.
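The bookkeeping above reduces to simple arithmetic: subtract the loop-back-measured emission delay from the timestamp difference. The sketch below is a hedged illustration; the function and parameter names are hypothetical, as the patent does not prescribe an API.

```python
def corrected_toa(arrival_time: float,
                  command_timestamp: float,
                  emission_delay: float) -> float:
    """TOA estimate with the emission start time removed.

    arrival_time:      global time at which the sensor captured the signal
    command_timestamp: first global time stamp (when the command was issued)
    emission_delay:    loop-back estimate of the emission start time
    """
    return (arrival_time - command_timestamp) - emission_delay
```

Without the subtraction, the sound-card and processor-load delay would inflate the TOA and hence the inferred speaker-microphone distance.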
Once the TOAs for the acoustic signal have been estimated by the receiving GPCs 104-106, which may include accounting for the unknown emission start time as described above, the TOAs are transmitted to the master GPC 102. In an alternative embodiment, the TOAs can be computed by the master GPC 102, in which case each sensor of GPCs 104-106 generates a second global time stamp of when the acoustic signal arrived. In the alternative embodiment, the master GPC 102 uses the first global time stamp (identifying when the second GPC 103 initiated the actuation of the actuator 110) and the second global time stamps to estimate the TOAs for the respective pairs of actuators and sensors. In such a case, the master GPC 102 may also estimate the emission start time of the acoustic signal to estimate the TOAs.
In decision block 212, if additional actuators remain in the distributed platform, the processes of blocks 202-210 are repeated so that each of the actuators in the platform generates an acoustic signal to determine the TOAs with the respective receiving sensors. In an alternative embodiment, multiple separate actuators may be actuated in parallel, wherein the actuator signals are multiplexed by exciting each actuator with a unique signal (e.g., different parameters for chirp or MLS signals). In the case of actuating the multiple separate actuators in parallel, the master GPC 102 identifies to each actuator the unique signal parameters to be used when exciting it.
Once all of the TOAs for the different pairs of actuators and sensors have been computed and transmitted to the master GPC 102, in block 214 the master GPC 102 computes the coordinates of the sensors and the actuators. More specifically, as illustrated in the position computation scheme of FIG. 3, in one embodiment the master GPC 102 utilizes a nonlinear least squares (NLS) computation 302 to determine the coordinates 304 of the actuators and/or sensors. In one embodiment, the NLS computation 302 considers the TOAs 306, the number of microphones 308, and the number of speakers 310 in the platform, along with an initial estimate 312 of the coordinates of the actuators and sensors. The actual computation used by the master GPC 102 to compute the coordinates of the actuators and sensors based on the TOAs may vary based on implementation. For example, in an alternative embodiment, to compute the positions of sensors and actuators with unknown emission start times, the NLS procedure is used to jointly estimate the positions and the emission times. The emission times add an extra S (number of actuators) variables to the computation procedure.
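The joint NLS estimation can be sketched on synthetic data as follows. This is a minimal illustration under stated assumptions, not the patent's implementation: the solver choice (`scipy.optimize.least_squares`), the variable names, and the 343 m/s speed of sound are all choices made for the example. The residual for each speaker-microphone pair is the predicted travel time plus that speaker's emission delay, minus the measured TOA.

```python
import numpy as np
from scipy.optimize import least_squares

C = 343.0  # assumed speed of sound (m/s)

def toa_residuals(params, toa, n_spk, n_mic):
    """Residuals: predicted TOA (travel time + emission delay) minus measured TOA."""
    spk = params[:3 * n_spk].reshape(n_spk, 3)                     # speaker xyz
    mic = params[3 * n_spk:3 * (n_spk + n_mic)].reshape(n_mic, 3)  # microphone xyz
    delay = params[3 * (n_spk + n_mic):]                           # per-speaker emission delay
    travel = np.linalg.norm(spk[:, None, :] - mic[None, :, :], axis=2) / C
    return (travel + delay[:, None] - toa).ravel()

# Synthetic setup: 4 speakers, 5 microphones, unknown emission delays.
rng = np.random.default_rng(0)
spk_true = rng.uniform(0.0, 3.0, (4, 3))
mic_true = rng.uniform(0.0, 3.0, (5, 3))
delay_true = rng.uniform(0.0, 0.05, 4)
toa = (np.linalg.norm(spk_true[:, None, :] - mic_true[None, :, :], axis=2) / C
       + delay_true[:, None])

# Initial estimate: true values plus noise (standing in for block 312).
x0 = np.concatenate([spk_true.ravel(), mic_true.ravel(), delay_true])
x0 = x0 + rng.normal(0.0, 0.02, x0.size)

sol = least_squares(toa_residuals, x0, args=(toa, 4, 5))
```

The solution is only determined up to a rigid transform of all positions, which is one reason a reasonable initial estimate (manual, camera-based, or MDS-based, as described in the text) matters.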
To provide the initial estimate used by the NLS computation, several alternatives are available. For example, if an approximate idea of the microphone and speaker positions is available, the initialization may be done manually. In another embodiment, one or more cameras may provide a rough estimate to be used as the initial estimation.
An additional embodiment for generating an initial estimate assumes that the microphones and speakers on a given computing platform are approximately at the same position. Given estimates of all the pairwise distances between the separate GPCs, a multidimensional scaling approach may be used to determine the coordinates from, in one embodiment, the Euclidean distance matrix. The approach involves converting the symmetric pairwise distance matrix to a matrix of scalar products with respect to some origin and then performing a singular value decomposition to obtain the matrix of coordinates. The coordinate matrix, in turn, may be used as the initial guess or estimate of the coordinates of the respective GPCs and of the microphones and speakers located on them.
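The multidimensional-scaling step just described (convert the distance matrix to scalar products, then factorize) corresponds to classical MDS and can be sketched as follows. The function name is hypothetical, and the patent leaves the exact linear-algebra routine open; this is one standard realization.

```python
import numpy as np

def classical_mds(D, dim=3):
    """Coordinates (up to rotation/translation/reflection) from a Euclidean distance matrix.

    D is a symmetric n x n matrix of pairwise Euclidean distances.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering projector (centroid as origin)
    B = -0.5 * J @ (D ** 2) @ J           # matrix of scalar products (Gram matrix)
    U, s, _ = np.linalg.svd(B)            # SVD of the symmetric PSD Gram matrix
    return U[:, :dim] * np.sqrt(s[:dim])  # top-`dim` coordinates

# The recovered coordinates reproduce the original pairwise distances.
rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 5.0, (6, 3))
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
X = classical_mds(D)
```

Because distances are invariant to rigid motion, the recovered coordinates can only match the true ones up to rotation, translation, and reflection, which is exactly good enough for an initial NLS guess.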
The techniques described above can be stored in the memory of one of the computing devices or GPCs as a set of instructions to be executed. In addition, the instructions to perform the processes described above could alternatively be stored on other forms of computer- and/or machine-readable media, including magnetic and optical disks. Further, the instructions can be downloaded into a computing device over a data network in the form of a compiled and linked version.
Alternatively, the logic to perform the techniques discussed above could be implemented in additional computer- and/or machine-readable media, such as discrete hardware components, large-scale integrated circuits (LSIs), application-specific integrated circuits (ASICs), firmware such as electrically erasable programmable read-only memory (EEPROMs), and electrical, optical, acoustical, and other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.).
These embodiments have been described with reference to specific exemplary embodiments thereof. It will, however, be evident to persons having the benefit of this disclosure that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the embodiments described herein. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

Claims (90)

What is claimed is:
1. A method comprising:
an actuator of a first computing device generating an acoustic signal, wherein a time-varying delay occurs between when the first computing device issues a command to generate the acoustic signal and when the actuator generates the acoustic signal, the time-varying delay at least due to time-varying processor load of the first computing device;
transmitting, with the first computing device, a timestamp indicating when the first computing device issued the command to generate the acoustic signal;
a sensor of a second computing device receiving the acoustic signal;
generating an estimate of a time for the acoustic signal to travel from the actuator of the first computing device to the sensor of the second computing device based on the timestamp indicating when the first computing device issued the command to generate the acoustic signal, wherein the sensor of the second computing device and actuator of the first computing device are unsynchronized;
a third computing device computing, based on the estimate of the time for the acoustic signal to travel from the actuator of the first computing device to the sensor of the second computing device, a physical location of at least one of a set including the sensor of the second computing device and the actuator of the first computing device, wherein computing the physical location of at least one of the set including the sensor of the second computing device and the actuator of the first computing device includes jointly estimating, using a nonlinear least squares (NLS) computation, (i) the physical location of the at least one of the set including the sensor of the second computing device and the actuator of the first computing device and (ii) the time-varying delay between when the first computing device issues the command to generate the acoustic signal and when the actuator generates the acoustic signal;
a sensor of a fourth computing device receiving the acoustic signal; and
generating a second estimate of a time for the acoustic signal to travel from the actuator of the first computing device to the sensor of the fourth computing device based on the timestamp indicating when the first computing device issued the command to generate the acoustic signal, wherein the actuator of the first computing device and the sensor of the fourth computing device are unsynchronized; and
the third computing device computing, based on the second estimate of the time for the acoustic signal to travel from the actuator of the first computing device to the sensor of the fourth computing device, a physical location of at least one of a set including the sensor of the fourth computing device and the actuator of the first computing device, wherein computing the physical location of at least one of the set including the sensor of the fourth computing device and the actuator of the first computing device includes jointly estimating, using the NLS computation, (i) the physical location of the at least one of the set including the sensor of the fourth computing device and the actuator of the first computing device and (ii) the time-varying delay between when the first computing device issues the command to generate the acoustic signal and when the actuator generates the acoustic signal.
2. The method of claim 1, wherein the method further includes:
an actuator of the second computing device generating a second acoustic signal, wherein a second time-varying delay occurs between when the second computing device issues a command to generate the second acoustic signal and when the actuator of the second computing device generates the second acoustic signal, the second time-varying delay at least due to time-varying processor load of the second computing device;
transmitting, with the second computing device, a second timestamp indicating when the second computing device issued the command to generate the second acoustic signal;
a sensor of a first computing device receiving the second acoustic signal;
generating a third estimate of a time for the second acoustic signal to travel from the actuator of the second computing device to the sensor of the first computing device based on the second timestamp indicating when the second computing device issued the command to generate the second acoustic signal; and
the third computing device computing, based on the third estimate of the time for the second acoustic signal to travel from the actuator of the second computing device to the sensor of the first computing device, a physical location of at least one of a set including the sensor of the first computing device and the actuator of the second computing device, wherein computing the physical location of at least one of the set including the sensor of the first computing device and the actuator of the second computing device includes jointly estimating, using the NLS computation, (i) the at least one of the set including the sensor of the first computing device and the actuator of the second computing device and (ii) the second time-varying delay between when the second computing device issues the command to generate the second acoustic signal and when the actuator of the second computing device generates the second acoustic signal.
3. The method of claim 2, wherein the actuator of the first computing device is a speaker.
4. The method of claim 3, wherein the sensor of the second computing device is a microphone.
5. The method of claim 4, wherein the acoustic signals generated by the first and second computing devices are selected from a group comprising a maximum length sequence signal and a chirp signal.
6. The method of claim 1, further including:
estimating an emission start time of when the acoustic signal was emitted from the actuator.
7. The method of claim 6, wherein the estimating of the emission start time includes the first computing device using a loopback device to estimate the emission start time; and
using the emission start time to determine the estimate of a time for the acoustic signal to travel from the actuator of the first computing device to the sensor of the second computing device.
8. The method of claim 1, wherein the method further includes, prior to the third computing device computing the physical locations, computing an initial estimation of the physical location of the sensor of the second computer and the actuator of the first computer via a video modality.
9. The method of claim 1, wherein the method further includes, prior to the third computing device computing the physical location of the sensor of the second computer, computing an initial estimation of the physical location of the sensor of the second computer and the actuator of the first computer via multidimensional scaling.
10. A method comprising:
a first computing device transmitting a wireless signal to a second computing device and a third computing device, the signal requesting an actuator of the second computing device generate an acoustic signal to be received by a sensor of the third computing device, wherein the actuator and sensor are unsynchronized at least due to a time-varying delay that occurs between when the second computing device issues a command to generate the acoustic signal and when the actuator generates the acoustic signal, the time-varying delay at least due to time-varying processor load of the second computing device; and
the first computing device computing, based on a time estimate for the acoustic signal to travel from the actuator of the second computing device to the sensor of the third computing device, a physical location of at least one of a set including the actuator of the second computing device and the sensor of the third computing device;
wherein computing the physical location of at least one of the set including the actuator of the second computing device and the sensor of the third computing device includes jointly estimating, using a nonlinear least squares (NLS) computation, (i) the at least one of the set including the actuator of the second computing device and the sensor of the third computing device and (ii) the time-varying delay between when the second computing device issues the command to generate the acoustic signal and when the actuator generates the acoustic signal.
11. The method of claim 10, wherein the method further includes:
the first computing device transmitting the wireless signal to the second computing device and a fourth computing device, the signal requesting the actuator of the second computing device generate the acoustic signal to be received by a sensor of the fourth computing device, wherein the actuator and the sensor of the fourth computing device are unsynchronized; and
the first computing device computing, based on a time estimate for the acoustic signal to travel from the actuator of the second computing device to the sensor of the fourth computing device, a physical location of at least one of a set including the actuator of the second computing device and the sensor of the fourth computing device;
wherein computing the physical location of at least one of the set including the actuator of the second computing device and the sensor of the fourth computing device includes jointly estimating, using the NLS computation, (i) the at least one of the set including the actuator of the second computing device and the sensor of the fourth computing device and (ii) the time-varying delay between when the second computing device issues the command to generate the acoustic signal and when the actuator generates the acoustic signal.
12. The method of claim 11, wherein the method further includes:
the first computing device transmitting a second wireless signal to the second computing device and the third computing device, the signal requesting an actuator of the third computing device generate a second acoustic signal to be received by a sensor of the second computing device, wherein the actuator of the third computing device and a sensor of the second computing device are unsynchronized at least due to a second time-varying delay that occurs between when the third computing device issues a command to generate the second acoustic signal and when the actuator of the third computing device generates the second acoustic signal, the second time-varying delay at least due to time-varying processor load of the third computing device; and
the first computing device computing, based on a time estimate for the second acoustic signal to travel from the actuator of the third computing device to the sensor of the second computing device, a physical location of at least one of a set including the actuator of the third computing device and the sensor of the second computing device;
wherein computing the physical location of at least one of the set including the actuator of the third computing device and the sensor of the second computing device includes jointly estimating, using the NLS computation, (i) the at least one of the set including the actuator of the third computing device and the sensor of the second computing device and (ii) the second time-varying delay between when the third computing device issues the command to generate the second acoustic signal and when the actuator of the third computing device generates the second acoustic signal.
13. The method of claim 12, wherein the actuator of the second computing device is a speaker.
14. The method of claim 13, wherein the sensor of the third computing device is a microphone.
15. The method of claim 14, wherein the acoustic signals to be generated by the second and third computing devices are selected from a group comprising a maximum length sequence signal and a chirp signal.
16. The method of claim 10, further including: estimating an emission start time of when the acoustic signal was emitted from the actuator of the second computing device.
17. The method of claim 16, wherein estimating the emission start time includes the second computing device using a loopback device to estimate the emission start time; and using the emission start time to determine the estimate of a time for the acoustic signal to travel from the actuator of the second computing device to the sensor of the third computing device.
18. The method of claim 10, wherein the method further includes:
prior to the first computer determining a physical location of at least one of the group including the actuator of the second computing device and the sensor of the third computer, the first computer computing an initial estimation of the physical location of the actuator of the second computer and the sensor of the third computing device via a video modality.
19. The method of claim 11, wherein the method further includes:
prior to the first computer determining a physical location of at least one of the group including the actuator of the second computing device and the sensor of the fourth computer, the first computer computing an initial estimation of the physical location of the actuator of the second computer and the sensor of the fourth computing device via a video modality.
20. A non-transitory machine readable medium having stored thereon a set of instructions, which when executed, cause the machine to perform a method comprising:
an actuator of a first computing device generating an acoustic signal, wherein a time-varying delay occurs between when the first computing device issues a command to generate the acoustic signal and when the actuator generates the acoustic signal, the time-varying delay at least due to time-varying processor load of the first computing device;
transmitting, with the first computing device, a timestamp indicating when the first computing device issued the command to generate the acoustic signal;
a sensor of a second computing device receiving the acoustic signal;
generating an estimate of a time for the acoustic signal to travel from the actuator of the first computing device to the sensor of the second computing device based on the timestamp indicating when the first computing device issued the command to generate the acoustic signal, wherein the sensor of the second computing device and the actuator of the first computing device are unsynchronized; and
a third computing device computing, based on the estimate of the time for the acoustic signal to travel from the actuator of the first computing device to the sensor of the second computing device, a physical location of at least one of a set including the sensor of the second computing device and the actuator of the first computing device, wherein computing the physical location of at least one of the set including the sensor of the second computing device and the actuator of the first computing device includes jointly estimating, using a nonlinear least squares (NLS) computation, (i) the at least one of the set including the sensor of the second computing device and the actuator of the first computing device and (ii) the time-varying delay between when the first computing device issues the command to generate the acoustic signal and when the actuator generates the acoustic signal.
21. The non-transitory machine readable medium of claim 20, wherein the method further includes:
a sensor of a fourth computing device receiving the acoustic signal;
generating a second estimate of a time for the acoustic signal to travel from the actuator of the first computing device to the sensor of the fourth computing device based on the timestamp indicating when the first computing device issued the command to generate the acoustic signal, wherein the actuator of the first computing device and the sensor of the fourth computing device are unsynchronized; and
the third computing device computing, based on the second estimate of the time for the acoustic signal to travel from the actuator of the first computing device to the sensor of the fourth computing device, a physical location of at least one of a set including the sensor of the fourth computing device and the actuator of the first computing device, wherein computing the physical location of at least one of the set including the sensor of the fourth computing device and the actuator of the first computing device includes jointly estimating, using the NLS computation, (i) the at least one of the set including the sensor of the fourth computing device and the actuator of the first computing device and (ii) the time-varying delay between when the first computing device issues the command to generate the acoustic signal and when the actuator generates the acoustic signal.
22. The non-transitory machine readable medium of claim 21, wherein the method further includes:
an actuator of the second computing device generating a second acoustic signal, wherein a second time-varying delay occurs between when the second computing device issues a command to generate the second acoustic signal and when the actuator of the second computing device generates the second acoustic signal, the second time-varying delay at least due to time-varying processor load of the second computing device;
transmitting, with the second computing device, a second timestamp indicating when the second computing device issued the command to generate the second acoustic signal;
a sensor of the first computing device receiving the second acoustic signal;
generating a third estimate of a time for the second acoustic signal to travel from the actuator of the second computing device to the sensor of the first computing device based on the second timestamp indicating when the second computing device issued the command to generate the second acoustic signal; and
the third computing device computing, based on the third estimate of the time for the second acoustic signal to travel from the actuator of the second computing device to the sensor of the first computing device, a physical location of at least one of a set including the sensor of the first computing device and the actuator of the second computing device, wherein computing the physical location of at least one of the set including the sensor of the first computing device and the actuator of the second computing device includes jointly estimating, using the NLS computation, (i) the at least one of the set including the sensor of the first computing device and the actuator of the second computing device and (ii) the second time-varying delay between when the second computing device issues the command to generate the second acoustic signal and when the actuator of the second computing device generates the second acoustic signal.
23. The non-transitory machine readable medium of claim 22, wherein the actuator of the first computing device is a speaker.
24. The non-transitory machine readable medium of claim 23, wherein the sensor of the second computing device is a microphone.
25. The non-transitory machine readable medium of claim 24, wherein the acoustic signals generated by the first and second computing devices are selected from a group comprising a maximum length sequence signal and a chirp signal.
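The two excitation signals named in claim 25 are standard calibration choices because both have sharp autocorrelation peaks, which makes the time of arrival easy to detect. A sketch of generating each, with illustrative (assumed) parameters:

```python
import numpy as np

def mls_degree10():
    # Fibonacci LFSR; taps at stages 10 and 7 correspond to a primitive
    # degree-10 polynomial, giving a maximum length sequence of period
    # 2**10 - 1 = 1023 samples.
    state = [1] * 10
    out = []
    for _ in range(2 ** 10 - 1):
        feedback = state[9] ^ state[6]
        out.append(1.0 if state[9] else -1.0)  # map bits {0, 1} to {-1, +1}
        state = [feedback] + state[:-1]
    return np.array(out)

def linear_chirp(f0=500.0, f1=8000.0, duration=0.1, fs=48000):
    # Sweep from f0 to f1 Hz over `duration` seconds at sample rate fs.
    t = np.arange(int(duration * fs)) / fs
    return np.sin(2.0 * np.pi * (f0 * t + (f1 - f0) * t ** 2 / (2.0 * duration)))

sig_mls = mls_degree10()
sig_chirp = linear_chirp()
```

An m-sequence is balanced (its samples sum to ±1) and its circular autocorrelation at every nonzero lag equals −1, which is why a matched filter against it produces a single dominant peak.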
26. The machine readable medium of claim 20, further including:
estimating an emission start time of when the acoustic signal was emitted from the actuator.
27. The machine readable medium of claim 26, wherein the estimating of the emission start time includes the first computing device using a loopback device to estimate the emission start time; and
using the emission start time to determine the estimate of a time for the acoustic signal to travel from the actuator of the first computing device to the sensor of the second computing device.
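The loopback estimate in claims 26 and 27 can be sketched as a cross-correlation: the sound card's loopback channel records what was actually sent to the speaker, and the correlation peak against the reference signal locates the true emission start, independent of the time-varying software delay. The latency value and burst signal below are invented for illustration:

```python
import numpy as np

fs = 48000  # sample rate (assumed)
rng = np.random.default_rng(0)
ref = rng.choice([-1.0, 1.0], size=512)  # reference calibration burst

# Simulated loopback capture: playback begins 300 samples after the command
# was issued (the unknown, load-dependent emission delay).
latency = 300
loopback = np.zeros(2048)
loopback[latency:latency + ref.size] = ref

# The peak of the cross-correlation gives the emission start in samples.
corr = np.correlate(loopback, ref, mode="valid")
est_latency = int(np.argmax(corr))
emission_start_s = est_latency / fs  # offset after the issued command, seconds
```

Subtracting this measured offset from the receiver's arrival time removes the sender-side delay from the travel-time estimate.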
28. The non-transitory machine readable medium of claim 20, wherein the method further includes:
prior to the third computing device computing the physical locations, computing an initial estimation of the physical location of the sensor of the second computing device and the actuator of the first computing device via a video modality.
29. The non-transitory machine readable medium of claim 20, wherein the method further includes:
prior to the third computing device computing the physical locations, computing an initial estimation of the physical location of the sensor of the second computing device and the actuator of the first computing device via multidimensional scaling.
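The multidimensional-scaling initialization of claim 29 can be sketched with classical MDS: the matrix of pairwise distances (derived from travel-time estimates) is double-centered and eigendecomposed, yielding coordinates that match the true geometry up to rotation, translation, and reflection. The device positions below are invented test data:

```python
import numpy as np

def classical_mds(D, dim=3):
    # Double-center the squared distance matrix, then take the top `dim`
    # eigenpairs of the resulting Gram matrix as coordinates.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Hypothetical sensor/actuator positions and their pairwise distances.
pts = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.0, 2.0, 0.0], [1.0, 1.0, 1.5]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

X = classical_mds(D)  # recovered coordinates, up to a rigid transform
D_rec = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
```

Because the recovered configuration preserves all pairwise distances, it serves as a good starting point for the NLS refinement even though its absolute orientation is arbitrary.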
30. A non-transitory machine readable medium having stored thereon a set of instructions, which when executed, cause the machine to perform a method comprising:
a first computing device transmitting a wireless signal to a second computing device and a third computing device, the signal requesting an actuator of the second computing device generate an acoustic signal to be received by a sensor of the third computing device, wherein the actuator and sensor are unsynchronized at least due to a time-varying delay that occurs between when the second computing device issues a command to generate the acoustic signal and when the actuator generates the acoustic signal, the time-varying delay at least due to time-varying processor load of the second computing device; and
the first computing device computing, based on a time estimate for the acoustic signal to travel from the actuator of the second computing device to the sensor of the third computing device, a physical location of at least one of a set including the actuator of the second computing device and the sensor of the third computing device;
wherein computing the physical location of at least one of the set including the actuator of the second computing device and the sensor of the third computing device includes jointly estimating, using a nonlinear least squares (NLS) computation, (i) the at least one of the set including the actuator of the second computing device and the sensor of the third computing device and (ii) the time-varying delay between when the second computing device issues the command to generate the acoustic signal and when the actuator generates the acoustic signal.
31. The non-transitory machine readable medium of claim 30, wherein the method further includes:
the first computing device transmitting the wireless signal to the second computing device and a fourth computing device, the signal requesting the actuator of the second computing device generate the acoustic signal to be received by a sensor of the fourth computing device, wherein the actuator and the sensor of the fourth computing device are unsynchronized; and
the first computing device computing, based on a time estimate for the acoustic signal to travel from the actuator of the second computing device to the sensor of the fourth computing device, a physical location of at least one of a set including the actuator of the second computing device and the sensor of the fourth computing device;
wherein computing the physical location of at least one of the set including the actuator of the second computing device and the sensor of the fourth computing device includes jointly estimating, using the NLS computation, (i) the at least one of the set including the actuator of the second computing device and the sensor of the fourth computing device and (ii) the time-varying delay between when the second computing device issues the command to generate the acoustic signal and when the actuator generates the acoustic signal.
32. The non-transitory machine readable medium of claim 31, wherein the method further includes:
the first computing device transmitting a second wireless signal to the second computing device and the third computing device, the signal requesting an actuator of the third computing device generate a second acoustic signal to be received by a sensor of the second computing device, wherein the actuator of the third computing device and a sensor of the second computing device are unsynchronized at least due to a second time-varying delay that occurs between when the third computing device issues a command to generate the second acoustic signal and when the actuator of the third computing device generates the second acoustic signal, the second time-varying delay at least due to time-varying processor load of the third computing device; and
the first computing device computing, based on a time estimate for the second acoustic signal to travel from the actuator of the third computing device to the sensor of the second computing device, a physical location of at least one of a set including the actuator of the third computing device and the sensor of the second computing device;
wherein computing the physical location of at least one of the set including the actuator of the third computing device and the sensor of the second computing device includes jointly estimating, using the NLS computation, (i) the at least one of the set including the actuator of the third computing device and the sensor of the second computing device and (ii) the second time-varying delay between when the third computing device issues the command to generate the second acoustic signal and when the actuator of the third computing device generates the second acoustic signal.
33. The non-transitory machine readable medium of claim 32, wherein the actuator of the second computing device is a speaker.
34. The non-transitory machine readable medium of claim 33, wherein the sensor of the third computing device is a microphone.
35. The non-transitory machine readable medium of claim 34, wherein the acoustic signals to be generated by the second and third computing devices are selected from a group comprising a maximum length sequence signal and a chirp signal.
36. The machine readable medium of claim 30, further including:
estimating an emission start time of when the acoustic signal was emitted from the actuator of the second computing device.
37. The machine readable medium of claim 36, wherein estimating the emission start time includes the second computing device using a loopback device to compute the emission start time; and
using the emission start time to determine the estimate of a time for the acoustic signal to travel from the actuator of the second computing device to the sensor of the third computing device.
38. The non-transitory machine readable medium of claim 30, wherein the method further includes:
prior to the first computing device determining a physical location of at least one of the group including the actuator of the second computing device and the sensor of the third computing device, the first computing device computing an initial estimation of the physical location of the actuator of the second computing device and the sensor of the third computing device via a video modality.
39. The non-transitory machine readable medium of claim 31, wherein the method further includes:
prior to the first computing device determining a physical location of at least one of the group including the actuator of the second computing device and the sensor of the fourth computing device, the first computing device computing an initial estimation of the physical location of the actuator of the second computing device and the sensor of the fourth computing device via a video modality.
40. A method comprising:
causing an actuator of a first computing device to generate a first acoustic signal;
receiving a first timestamp indicating when the first computing device issued a command to generate the first acoustic signal, wherein a first time-varying delay occurs between when the first computing device issues the command to generate the first acoustic signal and when the actuator of the first computing device generates the first acoustic signal, the first time-varying delay at least due to time-varying processor load of the first computing device;
receiving a first time of arrival signal from a second computing device having a sensor, wherein the first time of arrival signal is indicative of a time for the first acoustic signal to travel from the actuator of the first computing device to the sensor of the second computing device, wherein the actuator of the first computing device and the sensor of the second computing device are unsynchronized at least due to the first time-varying delay;
receiving a second time of arrival signal from a third computing device having a sensor, wherein the second time of arrival signal is indicative of a time for the first acoustic signal to travel from the actuator of the first computing device to the sensor of the third computing device, wherein the actuator of the first computing device and the sensor of the third computing device are unsynchronized at least due to the first time-varying delay;
computing, based on i) the first time of arrival signal and ii) the first timestamp, at least one of a physical location of the sensor of the second computing device or a physical location of the actuator of the first computing device, wherein computing at least one of the physical location of the sensor of the second computing device or the physical location of the actuator of the first computing device includes jointly estimating, using a nonlinear least squares (NLS) computation, (i) the at least one of the physical location of the sensor of the second computing device or the physical location of the actuator of the first computing device and (ii) the first time-varying delay between when the first computing device issues the command to generate the first acoustic signal and when the actuator of the first computing device generates the first acoustic signal; and
computing, based on i) the second time of arrival signal and ii) the first timestamp, at least one of a physical location of the sensor of the third computing device or the physical location of the actuator of the first computing device, wherein computing at least one of the physical location of the sensor of the third computing device or the physical location of the actuator of the first computing device includes jointly estimating, using the NLS computation, (i) the at least one of the physical location of the sensor of the third computing device or the physical location of the actuator of the first computing device and (ii) the first time-varying delay between when the first computing device issues the command to generate the first acoustic signal and when the actuator of the first computing device generates the first acoustic signal.
41. The method of claim 40, wherein causing the actuator of the first computing device to generate the first acoustic signal comprises transmitting a wireless signal to the first computing device that requests that the actuator of the first computing device generate the first acoustic signal.
42. The method of claim 40, wherein the first time of arrival signal includes a time for the first acoustic signal to travel from the actuator of the first computing device to the sensor of the second computing device.
43. The method of claim 40, wherein the first time of arrival signal includes a time at which the first acoustic signal arrived at the sensor of the second computing device.
44. The method of claim 40, wherein the second time of arrival signal includes a time for the first acoustic signal to travel from the actuator of the first computing device to the sensor of the third computing device.
45. The method of claim 40, wherein the second time of arrival signal includes a time at which the first acoustic signal arrived at the sensor of the third computing device.
46. The method of claim 40, wherein the first computing device includes a sensor;
wherein the second computing device includes an actuator;
wherein the method further comprises:
causing the actuator of the second computing device to generate a second acoustic signal;
receiving a second timestamp indicating when the second computing device issued a command to generate the second acoustic signal, wherein a second time-varying delay occurs between when the second computing device issues the command to generate the second acoustic signal and when the actuator of the second computing device generates the second acoustic signal, the second time-varying delay at least due to time-varying processor load of the second computing device;
receiving a third time of arrival signal from the first computing device, wherein the third time of arrival signal is indicative of a time for the second acoustic signal to travel from the actuator of the second computing device to the sensor of the first computing device, wherein the actuator of the second computing device and the sensor of the first computing device are unsynchronized at least due to the second time-varying delay; and
computing, based on i) the third time of arrival signal and ii) the second timestamp, at least one of a physical location of the sensor of the first computing device or a physical location of the actuator of the second computing device, wherein computing at least one of the physical location of the sensor of the first computing device or the physical location of the actuator of the second computing device includes jointly estimating, using the nonlinear least squares (NLS) computation, (i) the at least one of the physical location of the sensor of the first computing device or the physical location of the actuator of the second computing device and (ii) the second time-varying delay between when the second computing device issues the command to generate the second acoustic signal and when the actuator of the second computing device generates the second acoustic signal.
47. The method of claim 40, wherein the actuator of the first computing device is a speaker.
48. The method of claim 47, wherein the sensor of the second computing device is a microphone.
49. The method of claim 40, wherein the first acoustic signal is a maximum length sequence signal or a chirp signal.
50. The method of claim 40, further comprising computing an initial estimate of the physical location of the sensor of the second computing device and an initial estimate of the physical location of the actuator of the first computing device based on information captured by a camera.
51. The method of claim 40, further comprising computing an initial estimate of the physical location of the sensor of the second computing device and an initial estimate of the physical location of the actuator of the first computing device via multidimensional scaling.
52. A method comprising:
causing an actuator of a first computing device to generate a first acoustic signal, the first acoustic signal to be received by a sensor of a second computing device, wherein the actuator of the first computing device and the sensor of the second computing device are unsynchronized at least due to a first time-varying delay that occurs between when the first computing device issues a command to generate the first acoustic signal and when the actuator of the first computing device generates the first acoustic signal, the first time-varying delay at least due to time-varying processor load of the first computing device;
computing, at a third computing device, based on a time estimate for the first acoustic signal to travel from the actuator of the first computing device to the sensor of the second computing device, at least one of a physical location of the actuator of the first computing device or a physical location of the sensor of the second computing device, wherein computing at least one of the physical location of the actuator of the first computing device or the physical location of the sensor of the second computing device includes jointly estimating, using a nonlinear least squares (NLS) computation, (i) the at least one of the physical location of the actuator of the first computing device or the physical location of the sensor of the second computing device and (ii) the first time-varying delay between when the first computing device issues the command to generate the first acoustic signal and when the actuator of the first computing device generates the first acoustic signal;
wherein the time estimate for the first acoustic signal to travel from the actuator of the first computing device to the sensor of the second computing device is based on (i) a first timestamp, transmitted by the first computing device, that indicates when the first computing device issues the command to generate the first acoustic signal, and (ii) a time when the second computing device receives the first acoustic signal.
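The time estimate described in claim 52 reduces, under an assumption of aligned device clocks (e.g., via NTP), to a difference of two timestamps; the speaker's emission delay remains folded into that difference, which is why the claims jointly estimate it in the NLS step. The numbers below are illustrative:

```python
SPEED_OF_SOUND = 343.0  # m/s at room temperature (assumed)

def travel_time_estimate(command_timestamp_s, arrival_time_s):
    # Over-estimates the true time of flight by the unknown, time-varying
    # command-to-emission delay on the sending device.
    return arrival_time_s - command_timestamp_s

tof = travel_time_estimate(10.000, 10.062)  # seconds
approx_distance_m = tof * SPEED_OF_SOUND    # biased by delay * SPEED_OF_SOUND
```

At 343 m/s, every millisecond of unmodeled emission delay inflates the apparent actuator-to-sensor distance by about 34 cm, which motivates estimating the delay jointly with the positions rather than ignoring it.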
53. The method of claim 52, wherein the first acoustic signal is to be received by a sensor of a fourth computing device, wherein the actuator of the first computing device and the sensor of the fourth computing device are unsynchronized at least due to the first time-varying delay that occurs between when the first computing device issues the command to generate the first acoustic signal and when the actuator of the first computing device generates the first acoustic signal;
wherein the method further comprises the third computing device computing, based on a time estimate for the first acoustic signal to travel from the actuator of the first computing device to the sensor of the fourth computing device, at least one of the physical location of the actuator of the first computing device or a physical location of the sensor of the fourth computing device, wherein computing at least one of the physical location of the actuator of the first computing device or the physical location of the sensor of the fourth computing device includes jointly estimating, using the nonlinear least squares (NLS) computation, (i) the at least one of the physical location of the actuator of the first computing device or the physical location of the sensor of the fourth computing device and (ii) the first time-varying delay between when the first computing device issues the command to generate the first acoustic signal and when the actuator of the first computing device generates the first acoustic signal;
wherein the time estimate for the first acoustic signal to travel from the actuator of the first computing device to the sensor of the fourth computing device is based on (i) the first timestamp that indicates when the first computing device issues the command to generate the first acoustic signal, and (ii) a time when the fourth computing device receives the first acoustic signal.
54. The method of claim 53, further comprising:
causing an actuator of the second computing device to generate a second acoustic signal, the second acoustic signal to be received by a sensor of the first computing device, wherein the actuator of the second computing device and the sensor of the first computing device are unsynchronized at least due to a second time-varying delay that occurs between when the second computing device issues a command to generate the second acoustic signal and when the actuator of the second computing device generates the second acoustic signal, the second time-varying delay at least due to time-varying processor load of the second computing device; and
computing at the third computing device, based on a time estimate for the second acoustic signal to travel from the actuator of the second computing device to the sensor of the first computing device, at least one of a physical location of the actuator of the second computing device or a physical location of the sensor of the first computing device, wherein computing at least one of the physical location of the actuator of the second computing device or the physical location of the sensor of the first computing device includes jointly estimating, using the nonlinear least squares (NLS) computation, (i) the at least one of the physical location of the actuator of the second computing device or the physical location of the sensor of the first computing device and (ii) the second time-varying delay between when the second computing device issues the command to generate the second acoustic signal and when the actuator of the second computing device generates the second acoustic signal;
wherein the time estimate for the second acoustic signal to travel from the actuator of the second computing device to the sensor of the first computing device is based on (i) a third timestamp, transmitted by the second computing device, that indicates when the second computing device issues the command to generate the second acoustic signal, and (ii) a time when the first computing device receives the second acoustic signal.
55. The method of claim 52, wherein the actuator of the first computing device is a speaker.
56. The method of claim 55, wherein the sensor of the second computing device is a microphone.
57. The method of claim 52, wherein the first acoustic signal is a maximum length sequence signal or a chirp signal.
58. The method of claim 52, further comprising computing at the third computing device an initial estimate of the physical location of the sensor of the second computing device and an initial estimate of the physical location of the actuator of the first computing device based on information captured by a camera.
59. The method of claim 52, further comprising computing at the third computing device an initial estimate of the physical location of the sensor of the second computing device and an initial estimate of the physical location of the actuator of the first computing device via multidimensional scaling.
60. A first computing device comprising:
a communication device;
a processor configured to:
cause an actuator of a second computing device to generate a first acoustic signal, wherein a first time-varying delay occurs between when the second computing device issues a command to generate the first acoustic signal and when the actuator of the second computing device generates the first acoustic signal, the first time-varying delay at least due to time-varying processor load of the second computing device,
receive a first time of arrival signal from a third computing device having a sensor, wherein the first time of arrival signal is indicative of a time for the first acoustic signal to travel from the actuator of the second computing device to the sensor of the third computing device, wherein the actuator of the second computing device and the sensor of the third computing device are unsynchronized at least due to the first time-varying delay that occurs between when the second computing device issues the command to generate the first acoustic signal and when the actuator of the second computing device generates the first acoustic signal,
receive a second time of arrival signal from a fourth computing device having a sensor, wherein the second time of arrival signal is indicative of a time for the first acoustic signal to travel from the actuator of the second computing device to the sensor of the fourth computing device, wherein the actuator of the second computing device and the sensor of the fourth computing device are unsynchronized at least due to the first time-varying delay that occurs between when the second computing device issues the command to generate the first acoustic signal and when the actuator of the second computing device generates the first acoustic signal,
compute, based on i) the first time of arrival signal and ii) an estimate of a time at which the first acoustic signal is emitted from the actuator of the second computing device, at least one of a physical location of the actuator of the second computing device or a physical location of the sensor of the third computing device, wherein computing at least one of the physical location of the actuator of the second computing device or the physical location of the sensor of the third computing device includes jointly estimating, using a nonlinear least squares (NLS) computation, (i) the at least one of the physical location of the actuator of the second computing device or the physical location of the sensor of the third computing device and (ii) the first time-varying delay between when the second computing device issues the command to generate the first acoustic signal and when the actuator of the second computing device generates the first acoustic signal, and
compute, based on i) the second time of arrival signal and ii) the estimate of the time at which the first acoustic signal is emitted from the actuator of the second computing device, at least one of the physical location of the actuator of the second computing device or a physical location of the sensor of the fourth computing device, wherein computing at least one of the physical location of the actuator of the second computing device or the physical location of the sensor of the fourth computing device includes jointly estimating, using the NLS computation, (i) the at least one of the physical location of the actuator of the second computing device or the physical location of the sensor of the fourth computing device and (ii) the first time-varying delay between when the second computing device issues the command to generate the first acoustic signal and when the actuator of the second computing device generates the first acoustic signal.
61. The first computing device of claim 60, wherein the processor is configured to cause the first computing device to transmit a request signal to the second computing device that requests that the actuator of the second computing device generate the first acoustic signal.
62. The first computing device of claim 61, wherein the communication device is a wireless communication device, and wherein the request signal is a wireless request signal.
63. The first computing device of claim 60, wherein the first time of arrival signal includes a time for the first acoustic signal to travel from the actuator of the second computing device to the sensor of the third computing device.
64. The first computing device of claim 60, wherein the first time of arrival signal includes a time at which the first acoustic signal arrived at the sensor of the third computing device.
65. The first computing device of claim 60, wherein the second time of arrival signal includes a time for the first acoustic signal to travel from the actuator of the second computing device to the sensor of the fourth computing device.
66. The first computing device of claim 60, wherein the second time of arrival signal includes a time at which the first acoustic signal arrived at the sensor of the fourth computing device.
67. The first computing device of claim 60, wherein the second computing device includes a sensor;
wherein the third computing device includes an actuator;
wherein the processor is configured to:
cause the actuator of the third computing device to generate a second acoustic signal, wherein a second time-varying delay occurs between when the third computing device issues a command to generate the second acoustic signal and when the actuator of the third computing device generates the second acoustic signal, the second time-varying delay at least due to time-varying processor load of the third computing device,
receive a third time of arrival signal from the second computing device, wherein the third time of arrival signal is indicative of a time for the second acoustic signal to travel from the actuator of the third computing device to the sensor of the second computing device, wherein the actuator of the third computing device and the sensor of the second computing device are unsynchronized at least due to the second time-varying delay that occurs between when the third computing device issues the command to generate the second acoustic signal and when the actuator of the third computing device generates the second acoustic signal; and
compute, based on i) the third time of arrival signal and ii) the estimate of the time at which the second acoustic signal is emitted from the actuator of the third computing device, at least one of a physical location of the actuator of the third computing device or a physical location of the sensor of the second computing device, wherein computing at least one of the physical location of the actuator of the third computing device or the physical location of the sensor of the second computing device includes jointly estimating, using the NLS computation, (i) the at least one of the physical location of the actuator of the third computing device or the physical location of the sensor of the second computing device and (ii) the second time-varying delay between when the third computing device issues the command to generate the second acoustic signal and when the actuator of the third computing device generates the second acoustic signal.
68. The first computing device of claim 60, wherein the first acoustic signal is a maximum length sequence signal or a chirp signal.
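As an illustrative aside (not part of the claims): a chirp probe such as the one recited in claim 68 is commonly detected by cross-correlating the captured audio against the known probe waveform, with the correlation peak giving the time of arrival. A minimal sketch in Python; the sample rate, chirp band, delay, and noise level below are all arbitrary assumptions:

```python
import numpy as np

fs = 48_000                        # assumed sample rate (Hz)
dur = 0.05                         # 50 ms probe
t = np.arange(int(fs * dur)) / fs
f0, f1 = 500.0, 8000.0             # illustrative linear chirp band (Hz)
probe = np.sin(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / dur * t**2))

# Simulate the sensor capture: the probe arrives 120 samples late, plus noise
rng = np.random.default_rng(0)
true_delay = 120                   # samples
rx = np.concatenate([np.zeros(true_delay), probe])
rx = rx + 0.05 * rng.standard_normal(rx.size)

# Cross-correlate the capture against the known probe; the peak lag is the
# estimated time of arrival in samples
corr = np.correlate(rx, probe, mode="full")
lag = int(np.argmax(corr)) - (probe.size - 1)
toa_seconds = lag / fs
```

The sharp autocorrelation of a chirp (or of a maximum length sequence) is what makes the peak easy to pick out even in noise.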
69. The first computing device of claim 60, wherein the processor is configured to cause the first computing device to compute an initial estimate of the physical location of the sensor of the third computing device and an initial estimate of the physical location of the actuator of the second computing device based on information captured by a camera.
70. The first computing device of claim 60, wherein the processor is configured to cause the first computing device to compute an initial estimate of the physical location of the sensor of the second computing device and an initial estimate of the physical location of the actuator of the first computing device via multidimensional scaling.
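For context on the multidimensional-scaling initialization recited in claims 70 and 76 (an illustrative sketch, not the patented procedure): classical MDS recovers relative coordinates, up to rotation, reflection, and translation, from a matrix of pairwise distances, which in this setting would come from acoustic range estimates. The device positions below are hypothetical:

```python
import numpy as np

# Hypothetical 2-D device positions (unknown to the algorithm); in practice
# the distance matrix would come from acoustic range measurements
X = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0], [3.0, 4.0]])
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

# Classical MDS: double-center the squared distances, then use the top
# eigenpairs as coordinates (recovered up to rotation/reflection/translation)
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
w, V = np.linalg.eigh(B)                  # eigenvalues in ascending order
coords = V[:, -2:] * np.sqrt(w[-2:])      # 2-D embedding

# The embedding reproduces the original pairwise distances
D_hat = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
```

Such an embedding gives a coarse initial layout that a nonlinear least squares refinement can then improve.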
71. A first computing device comprising:
a communication device;
a processor configured to:
cause an actuator of a second computing device to generate a first acoustic signal, the first acoustic signal to be received by a sensor of a third computing device, wherein the actuator of the second computing device and the sensor of the third computing device are unsynchronized at least due to a first time-varying delay that occurs between when the second computing device issues a command to generate the first acoustic signal and when the actuator of the second computing device generates the first acoustic signal, the first time-varying delay at least due to time-varying processor load of the second computing device, and
compute, based on a time estimate for the first acoustic signal to travel from the actuator of the second computing device to the sensor of the third computing device, at least one of a physical location of the actuator of the second computing device or a physical location of the sensor of the third computing device, wherein computing at least one of the physical location of the actuator of the second computing device or the physical location of the sensor of the third computing device includes jointly estimating, using a nonlinear least squares (NLS) computation, (i) the at least one of the physical location of the actuator of the second computing device or the physical location of the sensor of the third computing device and (ii) the first time-varying delay between when the second computing device issues the command to generate the first acoustic signal and when the actuator of the second computing device generates the first acoustic signal;
wherein the time estimate for the first acoustic signal to travel from the actuator of the second computing device to the sensor of the third computing device is based on (i) a first timestamp, transmitted by the second computing device, that indicates when the second computing device issues the command to generate the first acoustic signal, and (ii) a time when the third computing device receives the first acoustic signal.
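The joint NLS estimation recited above can be pictured with a small numeric sketch (an assumption-laden illustration, not the patented method itself): each measured command-to-reception time contains the acoustic propagation time plus one shared, unknown emission delay, and a Gauss-Newton iteration can solve for the source position and the delay together. The microphone positions, true source, and 5 ms delay below are all hypothetical:

```python
import numpy as np

c = 343.0  # speed of sound in air, m/s (approximate)

# Hypothetical known microphone positions (meters), an unknown speaker at
# (2, 3), and an unknown 5 ms gap between "command issued" and "sound emitted"
mics = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0], [2.5, 0.0]])
src_true = np.array([2.0, 3.0])
delay_true = 0.005

# Measured command-to-reception times: propagation plus the shared delay
toa = np.linalg.norm(mics - src_true, axis=1) / c + delay_true

# Gauss-Newton nonlinear least squares, jointly estimating theta = (x, y, delay)
theta = np.array([1.0, 1.0, 0.0])  # rough initial guess
for _ in range(50):
    diff = theta[:2] - mics                       # (n_mics, 2)
    dist = np.linalg.norm(diff, axis=1)
    resid = toa - (dist / c + theta[2])           # model mismatch
    jac = np.column_stack([-diff / (c * dist[:, None]), -np.ones(len(mics))])
    theta += np.linalg.lstsq(jac, -resid, rcond=None)[0]
```

Because the delay enters every measurement identically, estimating it as an extra unknown removes the bias it would otherwise put into the recovered positions.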
72. The first computing device of claim 71, wherein the first acoustic signal is to be received by a sensor of a fourth computing device, wherein the actuator of the second computing device and the sensor of the fourth computing device are unsynchronized at least due to the first time-varying delay that occurs between when the second computing device issues the command to generate the first acoustic signal and when the actuator of the second computing device generates the first acoustic signal;
wherein the processor is configured to compute, based on a time estimate for the first acoustic signal to travel from the actuator of the second computing device to the sensor of the fourth computing device, at least one of the physical location of the actuator of the second computing device or a physical location of the sensor of the fourth computing device, wherein computing at least one of the physical location of the actuator of the second computing device or the physical location of the sensor of the fourth computing device includes jointly estimating, using the nonlinear least squares (NLS) computation, (i) the at least one of the physical location of the actuator of the second computing device or the physical location of the sensor of the fourth computing device and (ii) the first time-varying delay between when the second computing device issues the command to generate the first acoustic signal and when the actuator of the second computing device generates the first acoustic signal;
wherein the time estimate for the first acoustic signal to travel from the actuator of the second computing device to the sensor of the fourth computing device is based on (i) the first timestamp that indicates when the second computing device issues the command to generate the first acoustic signal, and (ii) a time when the fourth computing device receives the first acoustic signal.

73. The first computing device of claim 72, wherein the processor is configured to:
cause an actuator of the third computing device to generate a second acoustic signal, the second acoustic signal to be received by a sensor of the second computing device, wherein the actuator of the third computing device and the sensor of the second computing device are unsynchronized at least due to a second time-varying delay that occurs between when the third computing device issues a command to generate the second acoustic signal and when the actuator of the third computing device generates the second acoustic signal, the second time-varying delay at least due to time-varying processor load of the third computing device; and
compute, based on a time estimate for the second acoustic signal to travel from the actuator of the third computing device to the sensor of the second computing device, at least one of a physical location of the actuator of the third computing device or a physical location of the sensor of the second computing device, wherein computing at least one of the physical location of the actuator of the third computing device or the physical location of the sensor of the second computing device includes jointly estimating, using the nonlinear least squares (NLS) computation, (i) the at least one of the physical location of the actuator of the third computing device or the physical location of the sensor of the second computing device and (ii) the second time-varying delay between when the third computing device issues the command to generate the second acoustic signal and when the actuator of the third computing device generates the second acoustic signal;
wherein the time estimate for the second acoustic signal to travel from the actuator of the third computing device to the sensor of the second computing device is based on (i) a fourth timestamp, transmitted by the third computing device, that indicates when the third computing device issues the command to generate the second acoustic signal, and (ii) a time when the second computing device receives the second acoustic signal.
74. The first computing device of claim 71, wherein the first acoustic signal is a maximum length sequence signal or a chirp signal.
75. The first computing device of claim 71, wherein the processor is configured to cause the first computing device to compute an initial estimate of the physical location of the sensor of the third computing device and an initial estimate of the physical location of the actuator of the second computing device based on information captured by a camera.
76. The first computing device of claim 71, wherein the processor is configured to compute an initial estimate of the physical location of the sensor of the third computing device and an initial estimate of the physical location of the actuator of the second computing device via multidimensional scaling.
77. The first computing device of claim 71, wherein the processor is configured to cause the first computing device to transmit a request signal to the second computing device that requests that the actuator of the second computing device generate the first acoustic signal.
78. The first computing device of claim 77, wherein the communication device is a wireless communication device, and wherein the request signal is a wireless request signal.
79. A system comprising:
a first computing device having an actuator, wherein the first computing device is configured to cause the actuator of the first computing device to generate a first acoustic signal, wherein a first time-varying delay occurs between when the first computing device issues a command to generate the first acoustic signal and when the actuator of the first computing device generates the first acoustic signal, the first time-varying delay at least due to time-varying processor load of the first computing device;
a second computing device having a sensor;
wherein the sensor of the second computing device and the actuator of the first computing device are unsynchronized at least due to the first time-varying delay that occurs between when the first computing device issues the command to generate the first acoustic signal and when the actuator of the first computing device generates the first acoustic signal;
a third computing device;
a fourth computing device having a sensor;
wherein the sensor of the fourth computing device and the actuator of the first computing device are unsynchronized at least due to the first time-varying delay that occurs between when the first computing device issues the command to generate the first acoustic signal and when the actuator of the first computing device generates the first acoustic signal;
wherein the third computing device is communicatively coupled to the first computing device, the second computing device and the fourth computing device;
wherein the first computing device is configured to transmit a first timestamp that indicates when the first computing device issued the command to generate the first acoustic signal;
wherein one of the second computing device or the third computing device is configured to generate, based on the first timestamp, a first time estimate for the first acoustic signal to travel from the actuator of the first computing device to the sensor of the second computing device;
wherein the third computing device is configured to compute, based on the first time estimate, at least one of a physical location of the sensor of the second computing device or a physical location of the actuator of the first computing device, wherein computing at least one of the physical location of the sensor of the second computing device or the physical location of the actuator of the first computing device includes jointly estimating, using a nonlinear least squares (NLS) computation, (i) the at least one of the physical location of the sensor of the second computing device or the physical location of the actuator of the first computing device, and (ii) the first time-varying delay between when the first computing device issues the command to generate the first acoustic signal and when the actuator of the first computing device generates the first acoustic signal;
wherein one of the fourth computing device or the third computing device is configured to generate, based on the first timestamp, a second time estimate for the first acoustic signal to travel from the actuator of the first computing device to the sensor of the fourth computing device; and
wherein the third computing device is configured to compute, based on the second time estimate, at least one of a physical location of the sensor of the fourth computing device or the physical location of the actuator of the first computing device, wherein computing at least one of the physical location of the sensor of the fourth computing device or the physical location of the actuator of the first computing device includes jointly estimating, using the nonlinear least squares (NLS) computation, (i) the at least one of the physical location of the sensor of the fourth computing device or the physical location of the actuator of the first computing device, and (ii) the first time-varying delay between when the first computing device issues the command to generate the first acoustic signal and when the actuator of the first computing device generates the first acoustic signal.
80. The system of claim 79, wherein the first computing device includes a sensor;
wherein the second computing device includes an actuator, wherein the second computing device is configured to cause the actuator of the second computing device to generate a second acoustic signal, wherein a second time-varying delay occurs between when the second computing device issues a command to generate the second acoustic signal and when the actuator of the second computing device generates the second acoustic signal, the second time-varying delay at least due to time-varying processor load of the second computing device;
wherein the second computing device is configured to transmit a second timestamp that indicates when the second computing device issued the command to generate the second acoustic signal;
wherein one of the first computing device or the third computing device is configured to generate, based on the second timestamp, a third time estimate for the second acoustic signal to travel from the actuator of the second computing device to the sensor of the first computing device; and
wherein the third computing device is configured to compute, based on the third time estimate, at least one of a physical location of the sensor of the first computing device or a physical location of the actuator of the second computing device, wherein computing at least one of the physical location of the sensor of the first computing device or the physical location of the actuator of the second computing device includes jointly estimating, using the nonlinear least squares (NLS) computation, (i) the at least one of the physical location of the sensor of the first computing device or the physical location of the actuator of the second computing device, and (ii) the second time-varying delay between when the second computing device issues the command to generate the second acoustic signal and when the actuator of the second computing device generates the second acoustic signal.
81. The system of claim 80, wherein the actuator of the first computing device is a speaker.
82. The system of claim 81, wherein the sensor of the second computing device is a microphone.
83. The system of claim 82, wherein each of the first acoustic signal and the second acoustic signal is one of a maximum length sequence signal or a chirp signal.
84. The system of claim 79, wherein the third computing device is configured to compute an initial estimation of the physical location of the sensor of the second computing device and an initial estimation of the physical location of the actuator of the first computing device via a video modality prior to the third computing device computing at least one of the physical location of the sensor of the second computing device or the physical location of the actuator of the first computing device.
85. The system of claim 79, wherein the third computing device is configured to compute an initial estimation of the physical location of the sensor of the second computing device and an initial estimation of the physical location of the actuator of the first computing device via multidimensional scaling prior to the third computing device computing at least one of the physical location of the sensor of the second computing device or the physical location of the actuator of the first computing device.
86. A first computing device comprising:
a wireless communication device;
a processor configured to:
cause the wireless communication device to transmit a wireless signal to a second computing device and a third computing device, the wireless signal requesting an actuator of the second computing device to generate a first acoustic signal to be received by a sensor of the third computing device, wherein the actuator of the second computing device and the sensor of the third computing device are unsynchronized at least due to a first time-varying delay that occurs between when the second computing device issues a command to generate the first acoustic signal and when the actuator of the second computing device generates the first acoustic signal, the first time-varying delay at least due to time-varying processor load of the second computing device, and
compute, based on a time estimate for the first acoustic signal to travel from the actuator of the second computing device to the sensor of the third computing device, at least one of a physical location of the actuator of the second computing device or a physical location of the sensor of the third computing device, wherein computing at least one of the physical location of the actuator of the second computing device or the physical location of the sensor of the third computing device includes jointly estimating, using a nonlinear least squares (NLS) computation, (i) the at least one of the physical location of the actuator of the second computing device or the physical location of the sensor of the third computing device, and (ii) the first time-varying delay between when the second computing device issues the command to generate the first acoustic signal and when the actuator of the second computing device generates the first acoustic signal;
wherein the time estimate for the first acoustic signal to travel from the actuator of the second computing device to the sensor of the third computing device is based on (i) a first timestamp that indicates when the second computing device issues the command to generate the first acoustic signal, and (ii) a time when the third computing device receives the first acoustic signal.
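A small numeric illustration of the time estimate described in claim 86 (all numbers hypothetical): subtracting the command-issue timestamp from the reception time yields the propagation time plus the unmodeled emission delay, which is why a naive range computed from that difference is biased upward, and why the delay is estimated jointly with the locations.

```python
c = 343.0                 # speed of sound in air, m/s (approximate)

issue_ts = 10.0000        # when the emitting device issued the command (s)
receive_ts = 10.0124      # when the sensing device detected the signal (s)

time_estimate = receive_ts - issue_ts     # propagation + emission delay
naive_range = c * time_estimate           # biased upward by the delay

emission_delay = 0.005                    # hypothetical 5 ms pipeline delay
true_range = c * (time_estimate - emission_delay)
bias = naive_range - true_range           # equals c * emission_delay
```

At 343 m/s, even a few milliseconds of unmodeled delay corresponds to more than a meter of range error, so the bias is far from negligible for room-scale calibration.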
87. The first computing device of claim 86, wherein the processor is configured to:
cause the wireless communication device to transmit the wireless signal to the second computing device and a fourth computing device, the wireless signal requesting the actuator of the second computing device to generate the first acoustic signal to be received by a sensor of the fourth computing device, wherein the actuator of the second computing device and the sensor of the fourth computing device are unsynchronized at least due to the first time-varying delay that occurs between when the second computing device issues the command to generate the first acoustic signal and when the actuator of the second computing device generates the first acoustic signal; and
compute, based on a time estimate for the first acoustic signal to travel from the actuator of the second computing device to the sensor of the fourth computing device, at least one of a physical location of the actuator of the second computing device or a physical location of the sensor of the fourth computing device, wherein computing at least one of the physical location of the actuator of the second computing device or the physical location of the sensor of the fourth computing device includes jointly estimating, using the nonlinear least squares (NLS) computation, (i) the at least one of the physical location of the actuator of the second computing device or the physical location of the sensor of the fourth computing device and (ii) the first time-varying delay between when the second computing device issues the command to generate the first acoustic signal and when the actuator of the second computing device generates the first acoustic signal;
wherein the time estimate for the first acoustic signal to travel from the actuator of the second computing device to the sensor of the fourth computing device is based on (i) the first timestamp that indicates when the second computing device issues the command to generate the first acoustic signal, and (ii) a time when the fourth computing device receives the first acoustic signal.
88. The first computing device of claim 87, wherein the processor is configured to:
cause the wireless communication device to transmit a second wireless signal to the second computing device and the third computing device, the second wireless signal requesting an actuator of the third computing device to generate a second acoustic signal to be received by a sensor of the second computing device, wherein the actuator of the third computing device and the sensor of the second computing device are unsynchronized at least due to a second time-varying delay that occurs between when the third computing device issues a command to generate the second acoustic signal and when the actuator of the third computing device generates the second acoustic signal, the second time-varying delay at least due to time-varying processor load of the third computing device; and
compute, based on a time estimate for the second acoustic signal to travel from the actuator of the third computing device to the sensor of the second computing device, at least one of a physical location of the actuator of the third computing device or a physical location of the sensor of the second computing device, wherein computing at least one of the physical location of the actuator of the third computing device or the physical location of the sensor of the second computing device includes jointly estimating, using the nonlinear least squares (NLS) computation, (i) the at least one of the physical location of the actuator of the third computing device or the physical location of the sensor of the second computing device, and (ii) the second time-varying delay between when the third computing device issues the command to generate the second acoustic signal and when the actuator of the third computing device generates the second acoustic signal;
wherein the time estimate for the second acoustic signal to travel from the actuator of the third computing device to the sensor of the second computing device is based on (i) a fourth timestamp that indicates when the third computing device issues the command to generate the second acoustic signal, and (ii) a time when the second computing device receives the second acoustic signal.
89. The first computing device of claim 86, wherein the processor is configured to compute an initial estimation of the physical location of the actuator of the second computing device and an initial estimation of the physical location of the sensor of the third computing device via information captured by a camera, prior to computing at least one of the physical location of the actuator of the second computing device or the physical location of the sensor of the third computing device.
90. The first computing device of claim 86, wherein the processor is configured to compute an initial estimation of the physical location of the actuator of the second computing device and an initial estimation of the physical location of the sensor of the third computing device via multidimensional scaling, prior to computing at least one of the physical location of the actuator of the second computing device or the physical location of the sensor of the third computing device.
US12/110,216 | 2003-05-09 (priority) | 2008-04-25 (filed) | Three-dimensional position calibration of audio sensors and actuators on a distributed computing platform | Expired - Lifetime | USRE44737E1 (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
US12/110,216 (USRE44737E1) | 2003-05-09 | 2008-04-25 | Three-dimensional position calibration of audio sensors and actuators on a distributed computing platform

Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
US10/435,231 (US7035757B2) | 2003-05-09 | 2003-05-09 | Three-dimensional position calibration of audio sensors and actuators on a distributed computing platform
US12/110,216 (USRE44737E1) | 2003-05-09 | 2008-04-25 | Three-dimensional position calibration of audio sensors and actuators on a distributed computing platform

Related Parent Applications (1)

Application Number | Priority Date | Filing Date | Title
US10/435,231 (Reissue; US7035757B2) | 2003-05-09 | 2003-05-09 | Three-dimensional position calibration of audio sensors and actuators on a distributed computing platform

Publications (1)

Publication Number | Publication Date
USRE44737E1 (en) | 2014-01-28

Family

ID=33416901

Family Applications (2)

Application Number | Priority Date | Filing Date | Title
US10/435,231 (Ceased; US7035757B2) | 2003-05-09 | 2003-05-09 | Three-dimensional position calibration of audio sensors and actuators on a distributed computing platform
US12/110,216 (Expired - Lifetime; USRE44737E1) | 2003-05-09 | 2008-04-25 | Three-dimensional position calibration of audio sensors and actuators on a distributed computing platform

Family Applications Before (1)

Application Number | Priority Date | Filing Date | Title
US10/435,231 (Ceased; US7035757B2) | 2003-05-09 | 2003-05-09 | Three-dimensional position calibration of audio sensors and actuators on a distributed computing platform

Country Status (5)

Country | Link
US (2) | US7035757B2 (en)
EP (2) | EP1623314B1 (en)
CN (1) | CN100538607C (en)
AT (1) | ATE557335T1 (en)
WO (1) | WO2004102372A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US9635515B1 | 2015-06-22 | 2017-04-25 | Marvell International Ltd. | Apparatus and methods for generating an accurate estimate of a time of receipt of a packet

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US7035757B2 (en) | 2003-05-09 | 2006-04-25 | Intel Corporation | Three-dimensional position calibration of audio sensors and actuators on a distributed computing platform
KR100562900B1 (en)* | 2003-06-19 | 2006-03-21 | Samsung Electronics Co., Ltd. | Device and IP address duplication detection method for detecting duplicate IP addresses in mobile ad hoc network environment
JP2005179026A (en)* | 2003-12-22 | 2005-07-07 | Toshiba Corp | Equipment management system
US7580774B2 (en)* | 2006-05-10 | 2009-08-25 | Honda Motor Co., Ltd. | Characterization and classification of pose in low dimension
CN101118280B (en)* | 2007-08-31 | 2011-06-01 | Xidian University | Node self-localization method in distributed wireless sensor network
US8447329B2 (en)* | 2011-02-08 | 2013-05-21 | Longsand Limited | Method for spatially-accurate location of a device using audio-visual information
JP5699749B2 (en)* | 2011-03-31 | 2015-04-15 | Fujitsu Limited | Mobile terminal device position determination system and mobile terminal device
US20160321917A1 (en)* | 2015-04-30 | 2016-11-03 | Board of Regents, The University of Texas System | Utilizing a mobile device as a motion-based controller
CN107328401B (en)* | 2017-07-26 | 2021-02-19 | TCL Mobile Communication Technology (Ningbo) Co., Ltd. | Mobile terminal, data correction processing method of geomagnetic sensor of mobile terminal, and storage medium
JP6916130B2 (en)* | 2018-03-02 | 2021-08-11 | Hitachi, Ltd. | Speaker estimation method and speaker estimation device
CN108387873A (en)* | 2018-03-29 | 2018-08-10 | Guangzhou Shiyuan Electronics Co., Ltd. | Sound source positioning method and system, sound box system positioning method and sound box system

Citations (30)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US3559161A (en) | 1967-07-24 | 1971-01-26 | Honeywell Inc | Acoustic position reference system
US4555779A (en) | 1980-12-10 | 1985-11-26 | Chevron Research Company | Submerged marine streamer locator
US5691922A (en)* | 1995-11-14 | 1997-11-25 | Airwave Technology, Inc. | Golf autoranging system
US5778082A (en) | 1996-06-14 | 1998-07-07 | Picturetel Corporation | Method and apparatus for localization of an acoustic source
US5859595A (en)* | 1996-10-31 | 1999-01-12 | Spectracom Corporation | System for providing paging receivers with accurate time of day information
GB2329778A (en) | 1997-09-24 | 1999-03-31 | Roke Manor Research | Locating system
US5959568A (en) | 1996-06-26 | 1999-09-28 | Par Goverment Systems Corporation | Measuring distance
US5970413A (en) | 1996-06-06 | 1999-10-19 | Qualcomm Incorporated | Using a frequency that is unavailable for carrying telephone voice information traffic for determining the position of a mobile subscriber in a CDMA cellular telephone system
US5986971A (en)* | 1996-11-22 | 1999-11-16 | Korea Advanced Institute Science And Technology | Image processing system and method for estimating acoustic property of a movable sound source
US6201499B1 (en) | 1998-02-03 | 2001-03-13 | Consair Communications | Time difference of arrival measurement system
WO2001026335A2 (en) | 1999-10-06 | 2001-04-12 | Sensoria Corporation | Distributed signal processing in a network
US6243471B1 (en) | 1995-03-07 | 2001-06-05 | Brown University Research Foundation | Methods and apparatus for source location estimation from microphone-array time-delay estimates
US20020077772A1 (en)* | 2000-12-19 | 2002-06-20 | Hewlett-Packard Company | Device location discovery by sound
US20020097885A1 (en) | 2000-11-10 | 2002-07-25 | Birchfield Stanley T. | Acoustic source localization system and method
US20020150263A1 (en) | 2001-02-07 | 2002-10-17 | Canon Kabushiki Kaisha | Signal processing system
US20020155845A1 (en) | 2001-04-23 | 2002-10-24 | Martorana Marc J. | Method and apparatus for high-accuracy position location using search mode ranging techniques
US20020168989A1 (en) | 2001-03-30 | 2002-11-14 | Koninklijke Philips Electronics N.V. | Method of determining position in a cellular communications network
US20030014486A1 (en) | 2001-07-16 | 2003-01-16 | May Gregory J. | Distributed audio network using networked computing devices
US20030012168A1 (en) | 2001-07-03 | 2003-01-16 | Jeremy Elson | Low-latency multi-hop ad hoc wireless network
US20030114170A1 (en) | 2001-12-14 | 2003-06-19 | Rick Roland R. | Position determination system that uses a cellular communication system
US20030129996A1 (en) | 1996-05-13 | 2003-07-10 | Ksi Inc. | Robust, efficient, localization system
US20030174086A1 (en) | 2001-12-21 | 2003-09-18 | International Business Machines Corporation | Determining a time of arrival of a sent signal
US6643516B1 (en) | 1997-07-29 | 2003-11-04 | Gordon M. Stewart | Telephone system and method with background location response capability
US6661342B2 (en) | 2001-06-04 | 2003-12-09 | Time Domain Corporation | System and method for using impulse radio technology to track the movement of athletes and to enable secure communications between the athletes and their teammates, fans or coaches
US20030236866A1 (en) | 2002-06-24 | 2003-12-25 | Intel Corporation | Self-surveying wireless network
US6677895B1 (en) | 1999-11-16 | 2004-01-13 | Harris Corporation | System and method for determining the location of a transmitting mobile unit
US20040109417A1 (en)* | 2002-12-06 | 2004-06-10 | Microsoft Corporation | Practical network node coordinate estimation
US20040170289A1 (en) | 2003-02-27 | 2004-09-02 | Whan Wen Jea | Audio conference system with quality-improving features by compensating sensitivities microphones and the method thereof
WO2004102372A2 (en) | 2003-05-09 | 2004-11-25 | Intel Corporation (A Delaware Corporation) | Three-dimentional position calibration of audio sensors and actuators on a distributed computing platform
US6941246B2 (en) | 2003-09-18 | 2005-09-06 | Intel Corporation | Method for three-dimensional position calibration of audio sensors and actuators on a distributed computing platform

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US7046779B2 (en)* | 2002-02-15 | 2006-05-16 | Multimedia Telesys, Inc. | Video conference system and methods for use at multi-station sites

Patent Citations (31)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US3559161A (en) | 1967-07-24 | 1971-01-26 | Honeywell Inc | Acoustic position reference system
US4555779A (en) | 1980-12-10 | 1985-11-26 | Chevron Research Company | Submerged marine streamer locator
US6243471B1 (en) | 1995-03-07 | 2001-06-05 | Brown University Research Foundation | Methods and apparatus for source location estimation from microphone-array time-delay estimates
US5691922A (en) * | 1995-11-14 | 1997-11-25 | Airwave Technology, Inc. | Golf autoranging system
US20030129996A1 (en) | 1996-05-13 | 2003-07-10 | Ksi Inc. | Robust, efficient, localization system
US5970413A (en) | 1996-06-06 | 1999-10-19 | Qualcomm Incorporated | Using a frequency that is unavailable for carrying telephone voice information traffic for determining the position of a mobile subscriber in a CDMA cellular telephone system
US5778082A (en) | 1996-06-14 | 1998-07-07 | Picturetel Corporation | Method and apparatus for localization of an acoustic source
US5959568A (en) | 1996-06-26 | 1999-09-28 | Par Government Systems Corporation | Measuring distance
US5859595A (en) * | 1996-10-31 | 1999-01-12 | Spectracom Corporation | System for providing paging receivers with accurate time of day information
US5986971A (en) * | 1996-11-22 | 1999-11-16 | Korea Advanced Institute of Science and Technology | Image processing system and method for estimating acoustic property of a movable sound source
US6643516B1 (en) | 1997-07-29 | 2003-11-04 | Gordon M. Stewart | Telephone system and method with background location response capability
GB2329778A (en) | 1997-09-24 | 1999-03-31 | Roke Manor Research | Locating system
US6201499B1 (en) | 1998-02-03 | 2001-03-13 | Consair Communications | Time difference of arrival measurement system
WO2001026335A2 (en) | 1999-10-06 | 2001-04-12 | Sensoria Corporation | Distributed signal processing in a network
US6677895B1 (en) | 1999-11-16 | 2004-01-13 | Harris Corporation | System and method for determining the location of a transmitting mobile unit
US20020097885A1 (en) | 2000-11-10 | 2002-07-25 | Birchfield Stanley T. | Acoustic source localization system and method
US20020077772A1 (en) * | 2000-12-19 | 2002-06-20 | Hewlett-Packard Company | Device location discovery by sound
US6662137B2 (en) | 2000-12-19 | 2003-12-09 | Hewlett-Packard Development Company, L.P. | Device location discovery by sound
US20020150263A1 (en) | 2001-02-07 | 2002-10-17 | Canon Kabushiki Kaisha | Signal processing system
US20020168989A1 (en) | 2001-03-30 | 2002-11-14 | Koninklijke Philips Electronics N.V. | Method of determining position in a cellular communications network
US20020155845A1 (en) | 2001-04-23 | 2002-10-24 | Martorana Marc J. | Method and apparatus for high-accuracy position location using search mode ranging techniques
US6661342B2 (en) | 2001-06-04 | 2003-12-09 | Time Domain Corporation | System and method for using impulse radio technology to track the movement of athletes and to enable secure communications between the athletes and their teammates, fans or coaches
US20030012168A1 (en) | 2001-07-03 | 2003-01-16 | Jeremy Elson | Low-latency multi-hop ad hoc wireless network
US20030014486A1 (en) | 2001-07-16 | 2003-01-16 | May Gregory J. | Distributed audio network using networked computing devices
US20030114170A1 (en) | 2001-12-14 | 2003-06-19 | Rick Roland R. | Position determination system that uses a cellular communication system
US20030174086A1 (en) | 2001-12-21 | 2003-09-18 | International Business Machines Corporation | Determining a time of arrival of a sent signal
US20030236866A1 (en) | 2002-06-24 | 2003-12-25 | Intel Corporation | Self-surveying wireless network
US20040109417A1 (en) * | 2002-12-06 | 2004-06-10 | Microsoft Corporation | Practical network node coordinate estimation
US20040170289A1 (en) | 2003-02-27 | 2004-09-02 | Whan Wen Jea | Audio conference system with quality-improving features by compensating sensitivities microphones and the method thereof
WO2004102372A2 (en) | 2003-05-09 | 2004-11-25 | Intel Corporation (A Delaware Corporation) | Three-dimensional position calibration of audio sensors and actuators on a distributed computing platform
US6941246B2 (en) | 2003-09-18 | 2005-09-06 | Intel Corporation | Method for three-dimensional position calibration of audio sensors and actuators on a distributed computing platform

Non-Patent Citations (22)

* Cited by examiner, † Cited by third party
Title
Bulusu, Nirupama, et al., "Scalable Coordination for Wireless Sensor Networks: Self-Configuring Localization Systems," Proceedings of the 6th International Symposium on Communication Theory and Applications (ISCTA '01), Ambleside, Lake District, UK, Jul. 2001, 6 pages.
Communication Under Rule 73(3) EPC for corresponding European Application No. 04 760 779.1, dated Dec. 19, 2011.
European Search Report for corresponding European Application No. 11 009 918.1, dated Apr. 23, 2012.
Examination Report in corresponding Singapore application No. 200506757-4, issued by the Austrian Patent Office on Jul. 23, 2009.
First Office Action in corresponding Chinese application No. 2004100347403 issued by the State Intellectual Property Office of the People's Republic of China, Oct. 27, 2006.
Girod, Lewis, et al., "Locating tiny sensors in time and space: A case study," Proceedings of the 2002 IEEE International Conference on Computer Design: VLSI in Computers and Processors (ICCD '02), Freiburg, Germany, Sep. 16-18, 2002, 6 pages.
International Search Report for PCT US2004/008587, dated Jan. 7, 2005.
Morset, "How to measure the initial time delay (distance between the loudspeaker and the microphone)?", Morset Sound Development, Jul. 25, 2002, pp. 1-3, available at http://www.nvo.com/winmls/nss-folder/discussion/How%20to%20measure%20distance%20loudspeaker1miscrophone.doc.
Moses, Randolph L., et al., "A Self-Localization Method for Wireless Sensor Networks," EURASIP Journal on Applied Signal Processing 2003-4, © 2003 Hindawi Publishing Corporation, pp. 348-358.
Office Action for corresponding European Application No. 04 760 779.1-2212, dated Apr. 11, 2011.
Office Action for corresponding European Application No. 04 760 779.1-2212, dated Dec. 21, 2009.
Office Action for corresponding European Application No. 11 009 918.1, dated Sep. 27, 2013.
Office Action for corresponding European Application No. 11 009 918.1-2212, dated Dec. 20, 2012.
Raykar, Vikas C., et al., "Self Localization of acoustic sensors and actuators on Distributed platforms," Proceedings of the 2003 International Workshop on Multimedia Technologies in E-Learning and Collaboration, Nice, France, Oct. 2003, 8 pages.
Result of Consultation for corresponding European Application No. 04 760 779.1, dated Nov. 21, 2011.
Sachar, Joshua M., et al., "Position Calibration of Large-Aperture Microphone Arrays," 2002 IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 2, 2002, pp. 1797-1800.
Savvides, Andreas, et al., "Dynamic Fine-Grained Localization in Ad-Hoc Networks of Sensors," in the proceedings of the International Conference on Mobile Computing and Networking (MobiCom) 2001, Rome, Italy, Jul. 2001, pp. 166-179.
Second Office Action in corresponding Chinese application No. 2004100347403 issued by the State Intellectual Property Office of the People's Republic of China, Jun. 6, 2008.
Walker, "The Truth About Latency," Sound on Sound, Sep. 2002 pp. 1-7, available at http://www.soundonsound.com/sos/Sep02/articles/pcmusician0902.asp?print=yes.
Written Opinion for PCT US2004/008587, dated Jan. 7, 2005.
Written Opinion in corresponding Singapore application No. 200506757-4, issued by the Austrian Patent Office on Jun. 29, 2007.
Written Opinion in corresponding Singapore application No. 200506757-4, issued by the Austrian Patent Office on May 30, 2008.

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US9635515B1 (en) | 2015-06-22 | 2017-04-25 | Marvell International Ltd. | Apparatus and methods for generating an accurate estimate of a time of receipt of a packet
US10073169B1 (en) | 2015-06-22 | 2018-09-11 | Marvell International Ltd. | Apparatus and methods for generating an accurate estimate of a time of receipt of a packet

Also Published As

Publication number | Publication date
CN1550790A (en) | 2004-12-01
WO2004102372A2 (en) | 2004-11-25
ATE557335T1 (en) | 2012-05-15
EP2455775A1 (en) | 2012-05-23
US20040225470A1 (en) | 2004-11-11
WO2004102372A3 (en) | 2005-02-24
US7035757B2 (en) | 2006-04-25
EP1623314B1 (en) | 2012-05-09
CN100538607C (en) | 2009-09-09
EP1623314A2 (en) | 2006-02-08

Similar Documents

Publication | Title
USRE44737E1 (en) | Three-dimensional position calibration of audio sensors and actuators on a distributed computing platform
EP3549354B1 (en) | Distributed audio capture and mixing controlling
US10200788B2 (en) | Spatial audio apparatus
JP5990345B1 (en) | Surround sound field generation
US20180310114A1 (en) | Distributed Audio Capture and Mixing
CN100370830C (en) | Method and apparatus for audio-image speaker detection and location
EP3777235B1 (en) | Spatial audio capture
RU2012102700A (en) | Elimination of positional uncertainty in the formation of spatial sound
TW201120469A (en) | Method, computer readable storage medium and system for localizing acoustic source
US6941246B2 (en) | Method for three-dimensional position calibration of audio sensors and actuators on a distributed computing platform
Wehr et al. | Synchronization of acoustic sensors for distributed ad-hoc audio networks and its use for blind source separation
KR20090128221A (en) | Sound source location estimation method and system according to the method
US12212950B2 (en) | Wireless microphone with local storage
Jia et al. | Distributed microphone arrays for digital home and office
US10362397B2 (en) | Voice enhancement method for distributed system
CN111492668B (en) | Method and system for locating the point of origin of an audio signal within a defined space
CN113661538B (en) | Apparatus and method for obtaining first order ambisonic signal
CN114220454A (en) | Audio noise reduction method, medium and electronic equipment
JP2011071683A (en) | Video object detection apparatus, video object detection method and program
KR100831936B1 (en) | Sound source position measuring device for humanoid
Yang et al. | Efficient and Microphone-Fault-Tolerant 3D Sound Source Localization
Lienhart et al. | Providing common time and space in distributed AV-sensor networks by self-calibration

Legal Events

Date | Code | Title | Description
MAFP | Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553)

Year of fee payment: 12

AS | Assignment

Owner name: MARVELL INTERNATIONAL LTD., BERMUDA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MARVELL WORLD TRADE LTD.;REEL/FRAME:051778/0537

Effective date: 20191231

AS | Assignment

Owner name: CAVIUM INTERNATIONAL, CAYMAN ISLANDS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MARVELL INTERNATIONAL LTD.;REEL/FRAME:052918/0001

Effective date: 20191231

AS | Assignment

Owner name: MARVELL ASIA PTE, LTD., SINGAPORE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CAVIUM INTERNATIONAL;REEL/FRAME:053475/0001

Effective date: 20191231

