US20210021687A1 - Method and system for providing predictions via artificial intelligence (ai) models using a distributed system - Google Patents

Method and system for providing predictions via artificial intelligence (ai) models using a distributed system

Info

Publication number
US20210021687A1
Authority
US
United States
Prior art keywords
subsystem
data
recognizer
instruction
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US17/061,364
Inventor
Richard Chia Tsing Tong
Robert Victor Welland
John Hayes Ludwig
John Palmer Cordell
Samuel James McKelvie
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xevo Inc
Original Assignee
Xevo Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xevo Inc
Priority to US17/061,364
Publication of US20210021687A1
Status: Abandoned (current)


Abstract

One embodiment discloses an automobile having multiple distributed subsystems configured to provide communication via a network system. The automobile includes an outward facing camera (“OFC”) subsystem and a vehicle onboard computer (“VOC”). The OFC subsystem, having at least one OFC, OFC processor, and OFC database, is configured to recognize a predefined exterior object from a set of exterior images captured by the OFC based on an OFC query. The VOC includes a VOC central processing unit, VOC database, and network manager, wherein the network manager includes an internal network circuit and an external network circuit. The internal network circuit is used for communicating with the OFC subsystem while the external network circuit is used to interface with a cloud system. In one aspect, the VOC provides a data stream representing a recognized event in accordance with a query retrieved from the VOC database.
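The architecture in the abstract can be pictured as a small object model: the VOC holds queries in its database, pushes them to the OFC subsystem over the internal network circuit, and forwards the resulting data stream to the cloud over the external circuit. The Python sketch below is only illustrative; every class, method, and field name (OFCSubsystem, VehicleOnboardComputer, run_query, and so on) is a hypothetical stand-in, since the patent does not prescribe an implementation.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Query:
    """A query retrieved from the VOC database, e.g. 'report stop signs'."""
    query_id: str
    target_object: str  # the predefined exterior object to recognize


@dataclass
class DataStream:
    """Data stream representing a recognized event, returned to the VOC."""
    query_id: str
    events: List[str]


class OFCSubsystem:
    """Outward facing camera subsystem: camera(s), OFC processor, OFC database."""

    def capture_exterior_images(self) -> List[str]:
        # Stand-in for frames captured by the outward facing camera(s).
        return ["frame-0", "frame-1"]

    def recognize(self, query: Query) -> DataStream:
        frames = self.capture_exterior_images()
        # The OFC processor would run an AI recognizer over the frames here;
        # this toy version just tags every frame with the queried object.
        events = [f"{query.target_object} in {frame}" for frame in frames]
        return DataStream(query.query_id, events)


class VehicleOnboardComputer:
    """VOC: central processing unit, VOC database, and a network manager with an
    internal circuit (to subsystems) and an external circuit (to the cloud)."""

    def __init__(self, ofc: OFCSubsystem):
        self.ofc = ofc
        self.voc_database: Dict[str, Query] = {"q1": Query("q1", "stop sign")}

    def run_query(self, query_id: str) -> DataStream:
        query = self.voc_database[query_id]   # query retrieved from the VOC database
        stream = self.ofc.recognize(query)    # internal network circuit
        self.upload_to_cloud(stream)          # external network circuit
        return stream

    def upload_to_cloud(self, stream: DataStream) -> None:
        # Placeholder for the external interface to a cloud system.
        pass


if __name__ == "__main__":
    voc = VehicleOnboardComputer(OFCSubsystem())
    print(voc.run_query("q1").events)
```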

Description

Claims (18)

What is claimed is:
1. A method for managing network uploading between a vehicle and a remote computer, the vehicle including a vehicle onboard computer and multiple sensory subsystems, the method comprising:
forwarding a first instruction and a first recognizer from the vehicle onboard computer to a first sensory subsystem in accordance with an inquiry stored in a memory of the vehicle onboard computer;
storing the first instruction and the first recognizer from the vehicle onboard computer in a first current storage of the first sensory subsystem;
activating a set of first sensory elements in the first sensory subsystem based on the first instruction and the first recognizer;
obtaining first sensed information by the first sensory elements;
processing the first sensed information in response to the first recognizer;
generating a first data stream representing a first answer in response to the first instruction and the first recognizer;
forwarding the first data stream to the vehicle onboard computer; and
uploading a result from the first data stream from the vehicle onboard computer to the remote computer.
2. The method of claim 1, further comprising:
receiving the inquiry from the cloud system via a wireless communication network; and
generating the first instruction and the first recognizer in response to the inquiry.
3. The method of claim 1, further comprising generating a data stream representing a result to the inquiry in response to the first data stream and uploading the data stream from the vehicle onboard computer to the cloud system via a wireless communication network.
4. The method of claim 1, further comprising:
forwarding a second instruction and a second recognizer from the vehicle onboard computer to a second sensory subsystem in accordance with the inquiry stored in the vehicle onboard computer memory;
storing the second instruction and second recognizer from the vehicle onboard computer in a second current storage of the second sensory subsystem and activating a set of second sensory elements based on the second instruction and second recognizer.
5. The method of claim 4, further comprising:
obtaining second sensed information by the set of second sensory elements and processing the second sensed information in response to the second recognizer;
generating a second data stream representing a second answer in response to the second instruction and the second recognizer; and
forwarding the second data stream to the vehicle onboard computer; and
uploading a second result from the second data stream from the vehicle onboard computer to the remote computer.
6. The method of claim 1, wherein forwarding the first instruction and the first recognizer from the vehicle onboard computer to the first sensory subsystem includes transmitting a visual instruction to a video subsystem for searching for a symbol via captured video data from at least one camera or transmitting an audio instruction to an audio subsystem for searching for a sound via observed audio data from at least one microphone.
7. A method for managing network uploading between a vehicle and a remote computer, comprising:
receiving a request by a vehicle onboard computer of the vehicle through a wireless communication network;
generating an outward facing camera inquiry to capture exterior images outside of the vehicle, wherein the outward facing camera inquiry includes an identified recognizer based on the request;
generating an inward facing camera inquiry to capture interior images inside of the vehicle, wherein the inward facing camera inquiry includes an identity of an operator;
forwarding the outward facing camera inquiry from the vehicle onboard computer to an outward facing camera subsystem;
forwarding the inward facing camera inquiry from the vehicle onboard computer to an inward facing camera subsystem;
receiving a result at the vehicle onboard computer from the outward facing camera subsystem and from the inward facing camera subsystem; and
uploading the result to the remote computer.
8. The method of claim 7, further comprising:
receiving an outward facing camera result formatted in a data stream sent from the outward facing camera subsystem to the vehicle onboard computer in response to the outward facing camera inquiry; and
receiving an inward facing camera result formatted in a data stream sent from the inward facing camera subsystem to the vehicle onboard computer in response to the inward facing camera inquiry.
9. A system of a vehicle, comprising:
a computing device, including:
a memory that stores computer instructions; and
a processor that executes the computer instructions to:
receive a first instruction and a first recognizer; and
a first sensory subsystem including:
a first current memory that stores the first instruction and the first recognizer; and
a first sensory element to:
obtain first sensed information based on the first instruction;
process the first sensed information based on the first recognizer;
generate a first data stream based on the processed first sensed information; and
forward the first data stream to the computing device.
10. The system of claim 9, wherein the processor of the computing device receives the first instruction and the first recognizer by executing the computer instructions further to:
receive an inquiry from a remote computing system; and
generate the first instruction and the first recognizer based on the inquiry.
11. The system of claim 9, wherein the processor of the computing device executes the computer instructions further to:
forward the first instruction and the first recognizer to the first sensory subsystem in response to a request for processing of sensed information based on the first recognizer.
12. The system of claim 9, wherein the processor of the computing device executes the computer instructions further to:
forward the first data stream to a remote computing system.
13. The system of claim 9, wherein the processor of the computing device executes the computer instructions further to:
generate an output data stream representing a result to an inquiry based on the first data stream; and
upload the output data stream to a remote computing system via a wireless communication network.
14. The system of claim 9, further comprising:
a second sensory subsystem including:
a second current memory that stores a second instruction and a second recognizer; and
a second sensory element to:
obtain second sensed information based on the second instruction;
process the second sensed information based on the second recognizer;
generate a second data stream based on the processed second sensed information; and
forward the second data stream to the computing device.
15. The system of claim 9, wherein, to process the first sensed information based on the first recognizer, the first sensory element of the first sensory subsystem is further to:
analyze the first sensed information for a commercial symbol of a business.
16. The system of claim 9, wherein, to process the first sensed information based on the first recognizer, the first sensory element of the first sensory subsystem is further to:
analyze the first sensed information for a predefined object.
17. The system of claim 9, wherein, to process the first sensed information based on the first recognizer, the first sensory element of the first sensory subsystem is further to:
filter the first sensed information to remove a background item from the first sensed information.
18. The system of claim 9, wherein, to process the first sensed information based on the first recognizer, the first sensory element of the first sensory subsystem is further to:
remove static items from the first sensed information.
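The method of claim 1 above describes a round trip: the vehicle onboard computer forwards an instruction and a recognizer to a sensory subsystem, the subsystem stores them in its current storage, activates its sensory elements, processes what it senses, and returns a data stream whose result is uploaded to the remote computer. The Python sketch below traces that flow under assumed names (Instruction, Recognizer, SensorySubsystem, handle_inquiry, and so on); it is an illustration, not the patented implementation, and a simple substring match stands in for whatever AI model a real recognizer would apply.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Instruction:
    """First instruction: which sensory elements to activate."""
    sensor_ids: List[str]


@dataclass
class Recognizer:
    """First recognizer: what to look for in the sensed information."""
    target: str

    def matches(self, sample: str) -> bool:
        return self.target in sample


class SensorySubsystem:
    """A sensory subsystem with its own current storage and sensory elements."""

    def __init__(self):
        self.current_storage = {}

    def store(self, instruction: Instruction, recognizer: Recognizer) -> None:
        # Claim step 2: store the forwarded instruction and recognizer in current storage.
        self.current_storage = {"instruction": instruction, "recognizer": recognizer}

    def sense_and_process(self) -> List[str]:
        instruction = self.current_storage["instruction"]
        recognizer = self.current_storage["recognizer"]
        # Claim steps 3-4: activate the named sensory elements and obtain sensed information.
        sensed = []
        for sid in instruction.sensor_ids:
            sensed.extend([f"{sid}: stop sign ahead", f"{sid}: empty road"])
        # Claim steps 5-7: process with the recognizer and return the data stream
        # (the return value is the answer forwarded back to the onboard computer).
        return [sample for sample in sensed if recognizer.matches(sample)]


class VehicleOnboardComputer:
    def __init__(self, subsystem: SensorySubsystem):
        self.subsystem = subsystem
        self.memory = {"inquiry": "report stop signs"}  # inquiry stored in VOC memory

    def handle_inquiry(self) -> List[str]:
        # Claim step 1: forward an instruction and a recognizer derived from the inquiry.
        instruction = Instruction(sensor_ids=["front_camera"])
        recognizer = Recognizer(target="stop sign")
        self.subsystem.store(instruction, recognizer)
        data_stream = self.subsystem.sense_and_process()
        # Claim step 8: upload a result from the data stream to the remote computer.
        self.upload_to_remote(data_stream)
        return data_stream

    def upload_to_remote(self, result: List[str]) -> None:
        # Placeholder for the wireless link to the remote computer / cloud system.
        pass


if __name__ == "__main__":
    voc = VehicleOnboardComputer(SensorySubsystem())
    print(voc.handle_inquiry())
```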
US17/061,364 | 2017-03-30 | 2020-10-01 | Method and system for providing predictions via artificial intelligence (ai) models using a distributed system | Abandoned | US20210021687A1 (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
US17/061,364 (US20210021687A1 (en)) | 2017-03-30 | 2020-10-01 | Method and system for providing predictions via artificial intelligence (ai) models using a distributed system

Applications Claiming Priority (3)

Application Number | Priority Date | Filing Date | Title
US201762479229P | 2017-03-30 | 2017-03-30
US15/941,540 (US10834221B2 (en)) | 2017-03-30 | 2018-03-30 | Method and system for providing predictions via artificial intelligence (AI) models using a distributed system
US17/061,364 (US20210021687A1 (en)) | 2017-03-30 | 2020-10-01 | Method and system for providing predictions via artificial intelligence (ai) models using a distributed system

Related Parent Applications (1)

Application Number | Title | Priority Date | Filing Date
US15/941,540 (Division; US10834221B2 (en)) | Method and system for providing predictions via artificial intelligence (AI) models using a distributed system | 2017-03-30 | 2018-03-30

Publications (1)

Publication Number | Publication Date
US20210021687A1 (en) | 2021-01-21

Family

ID=63672614

Family Applications (2)

Application Number | Priority Date | Filing Date | Title
US15/941,540 (Expired - Fee Related; US10834221B2 (en)) | 2017-03-30 | 2018-03-30 | Method and system for providing predictions via artificial intelligence (AI) models using a distributed system
US17/061,364 (Abandoned; US20210021687A1 (en)) | 2017-03-30 | 2020-10-01 | Method and system for providing predictions via artificial intelligence (ai) models using a distributed system

Family Applications Before (1)

Application Number | Priority Date | Filing Date | Title
US15/941,540 (Expired - Fee Related; US10834221B2 (en)) | 2017-03-30 | 2018-03-30 | Method and system for providing predictions via artificial intelligence (AI) models using a distributed system

Country Status (2)

Country | Link
US (2) | US10834221B2 (en)
WO (1) | WO2018183870A1 (en)

Families Citing this family (36)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN108883777B (en)* | 2016-03-28 | 2022-02-01 | 日本精機株式会社 | Driving support system and display device
DE102016213495A1 (en)* | 2016-07-22 | 2018-01-25 | Robert Bosch Gmbh | Driver assistance system, driver assistance system and vehicle
US11573573B2 (en) | 2017-06-06 | 2023-02-07 | Plusai, Inc. | Method and system for distributed learning and adaptation in autonomous driving vehicles
US11392133B2 (en) | 2017-06-06 | 2022-07-19 | Plusai, Inc. | Method and system for object centric stereo in autonomous driving vehicles
US11042155B2 (en) | 2017-06-06 | 2021-06-22 | Plusai Limited | Method and system for closed loop perception in autonomous driving vehicles
US10388085B2 (en)* | 2017-07-14 | 2019-08-20 | Allstate Insurance Company | Distributed data processing system for processing remotely captured sensor data
US11295235B2 (en)* | 2017-12-28 | 2022-04-05 | Intel Corporation | Filtering training data for models in a data center
US10915101B2 (en)* | 2018-02-02 | 2021-02-09 | Uatc, Llc | Context-dependent alertness monitor in an autonomous vehicle
US10586546B2 (en) | 2018-04-26 | 2020-03-10 | Qualcomm Incorporated | Inversely enumerated pyramid vector quantizers for efficient rate adaptation in audio coding
US10580424B2 (en)* | 2018-06-01 | 2020-03-03 | Qualcomm Incorporated | Perceptual audio coding as sequential decision-making problems
US10734006B2 (en) | 2018-06-01 | 2020-08-04 | Qualcomm Incorporated | Audio coding based on audio pattern recognition
US11630817B2 (en)* | 2018-12-28 | 2023-04-18 | Yahoo Assets Llc | Method and system for data indexing and reporting
CN110008880B (en)* | 2019-03-27 | 2023-09-29 | 深圳前海微众银行股份有限公司 | Model compression method and device
KR102739206B1 (en)* | 2019-04-15 | 2024-12-05 | 현대자동차주식회사 | System and method for making engine sound with AI based on driver's condition
US11138433B2 (en) | 2019-06-07 | 2021-10-05 | The Boeing Company | Cabin experience network with a sensor processing unit
IT201900011403A1 (en)* | 2019-07-10 | 2021-01-10 | Ambarella Int Lp | DETECTING ILLEGAL USE OF PHONE TO PREVENT THE DRIVER FROM GETTING A FINE
CN110782029B (en)* | 2019-10-25 | 2022-11-22 | 阿波罗智能技术(北京)有限公司 | Neural network prediction method and device, electronic equipment and automatic driving system
CN111143577B (en)* | 2019-12-27 | 2023-06-16 | 北京百度网讯科技有限公司 | Data labeling method, device and system
US11687778B2 (en) | 2020-01-06 | 2023-06-27 | The Research Foundation For The State University Of New York | Fakecatcher: detection of synthetic portrait videos using biological signals
US11373447B2 (en)* | 2020-02-19 | 2022-06-28 | Toyota Motor Engineering & Manufacturing North America, Inc. | Systems including image detection to inhibit vehicle operation
JP7415686B2 (en)* | 2020-03-10 | 2024-01-17 | 株式会社デンソー | Accident pattern determination device, accident pattern determination method, and accident pattern determination program
US10733463B1 (en)* | 2020-03-31 | 2020-08-04 | Lyft, Inc. | Systems and methods for augmenting perception data with supplemental information
US11115482B1 (en)* | 2020-03-31 | 2021-09-07 | Xevo Inc. | System and method for correlating keep-alive connection communications with unary connection communications
US11908132B2 (en) | 2020-05-02 | 2024-02-20 | Blaize, Inc. | Method and systems for predicting medical conditions and forecasting rate of infection of medical conditions via artificial intelligence models using graph stream processors
US12001781B2 (en) | 2020-09-23 | 2024-06-04 | Evernorth Strategic Development, Inc. | Query selection system
US11352013B1 (en)* | 2020-11-13 | 2022-06-07 | Samsara Inc. | Refining event triggers using machine learning model feedback
US11341786B1 (en) | 2020-11-13 | 2022-05-24 | Samsara Inc. | Dynamic delivery of vehicle event data
CN112419722B (en)* | 2020-11-18 | 2022-08-30 | 百度(中国)有限公司 | Traffic abnormal event detection method, traffic control method, device and medium
US11643102B1 (en) | 2020-11-23 | 2023-05-09 | Samsara Inc. | Dash cam with artificial intelligence safety event detection
US11461300B2 (en)* | 2021-01-06 | 2022-10-04 | Sap Se | Dynamic model server for multi-model machine learning inference services
US11657345B2 (en)* | 2021-03-24 | 2023-05-23 | International Business Machines Corporation | Implementing machine learning to identify, monitor and safely allocate resources to perform a current activity
US11386325B1 (en) | 2021-11-12 | 2022-07-12 | Samsara Inc. | Ensemble neural network state machine for detecting distractions
US11352014B1 (en)* | 2021-11-12 | 2022-06-07 | Samsara Inc. | Tuning layers of a modular neural network
CN114358244B (en)* | 2021-12-20 | 2023-02-07 | 淮阴工学院 | Big data intelligent detection system of pressure based on thing networking
US12315604B2 (en) | 2022-06-02 | 2025-05-27 | Evernorth Stragic Development, Inc. | Recurring remote monitoring with real-time exchange to analyze health data and generate action plans
US12096119B2 (en) | 2022-08-04 | 2024-09-17 | Ford Global Technologies, Llc | Local compute camera calibration

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US8868288B2 (en)* | 2006-11-09 | 2014-10-21 | Smartdrive Systems, Inc. | Vehicle exception event management systems
US9227568B1 (en)* | 2012-01-04 | 2016-01-05 | Spirited Eagle Enterprises LLC | System and method for managing driver sensory communication devices in a transportation vehicle
US20130201316A1 (en) | 2012-01-09 | 2013-08-08 | May Patents Ltd. | System and method for server based control
US20160086391A1 (en)* | 2012-03-14 | 2016-03-24 | Autoconnect Holdings Llc | Fleetwide vehicle telematics systems and methods
US8886399B2 (en)* | 2013-03-15 | 2014-11-11 | Honda Motor Co., Ltd. | System and method for controlling a vehicle user interface based on gesture angle
GB2516698B (en)* | 2013-07-30 | 2017-03-22 | Jaguar Land Rover Ltd | Vehicle distributed network providing feedback to a user
US9361650B2 (en)* | 2013-10-18 | 2016-06-07 | State Farm Mutual Automobile Insurance Company | Synchronization of vehicle sensor information
US9282110B2 (en) | 2013-11-27 | 2016-03-08 | Cisco Technology, Inc. | Cloud-assisted threat defense for connected vehicles
EP2892020A1 (en)* | 2014-01-06 | 2015-07-08 | Harman International Industries, Incorporated | Continuous identity monitoring for classifying driving data for driving performance analysis
US9493157B2 (en)* | 2015-01-29 | 2016-11-15 | Toyota Motor Engineering & Manufacturing North America, Inc. | Autonomous vehicle operation in obstructed occupant view and sensor detection environments
US9844981B2 (en)* | 2015-06-02 | 2017-12-19 | Karma Automotive Llc | Systems and methods for use in a vehicle for detecting external events
US10031523B2 (en)* | 2016-11-07 | 2018-07-24 | Nio Usa, Inc. | Method and system for behavioral sharing in autonomous vehicles

Also Published As

Publication number | Publication date
US20180288182A1 (en) | 2018-10-04
WO2018183870A1 (en) | 2018-10-04
US10834221B2 (en) | 2020-11-10

Similar Documents

Publication | Title
US10834221B2 (en) | Method and system for providing predictions via artificial intelligence (AI) models using a distributed system
US11335200B2 (en) | Method and system for providing artificial intelligence analytic (AIA) services using operator fingerprints and cloud data
US20210334558A1 (en) | Method and system for providing behavior of vehicle operator using virtuous cycle
US10540557B2 (en) | Method and apparatus for providing driver information via audio and video metadata extraction
US11853741B2 (en) | Federated learning for connected camera
US12354420B2 (en) | Systems and methods for an automobile status recorder

Legal Events

Code | Title | Description
STPP | Information on status: patent application and granting procedure in general | Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED
STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED
STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

