US20180217932A1 - Data processing apparatus with snoop request address alignment and snoop response time alignment - Google Patents

Data processing apparatus with snoop request address alignment and snoop response time alignment

Info

Publication number
US20180217932A1
US20180217932A1 (application US15/422,691 / US201715422691A; also published as US 20180217932 A1)
Authority
US
United States
Prior art keywords
data
bus
address
beat
bus width
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US15/422,691
Other versions
US10042766B1 (en)
Inventor
Tushar P. Ringe
Jamshed Jalal
Klas Magnus Bruce
Phanindra Kumar Mannava
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ARM Ltd
Original Assignee
ARM Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ARM Ltd
Priority to US15/422,691 (granted as US10042766B1)
Assigned to ARM Ltd. Assignment of assignors' interest (see document for details). Assignors: BRUCE, KLAS MAGNUS; JALAL, JAMSHED; RINGE, TUSHAR P.; MANNAVA, PHANINDRA KUMAR
Publication of US20180217932A1
Application granted
Publication of US10042766B1
Legal status: Active
Anticipated expiration


Abstract

A home node of a data processing apparatus that includes a number of devices coupled via an interconnect system is configured to provide efficient transfer of data to a first device from a second device. The home node is configured dependent upon data bus widths of the first and second devices and the data bus width of the interconnect system. Data is transferred as a cache line serialized into a number of data beats. The home node may be configured to minimize the number of data transfers on the third data bus or to minimize latency in the transfer of the critical beat of the cache line.
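As context for the claims below, the cache-line serialization described in the abstract can be modeled in a few lines. This is an illustrative Python sketch, not the patent's implementation; the function name, the 64-byte line, and the 16-byte (128-bit) bus are assumed example values:

```python
def serialize_cache_line(line: bytes, bus_width_bytes: int) -> list:
    """Split a cache line into data beats, one beat per bus transfer."""
    assert len(line) % bus_width_bytes == 0, "line must be a whole number of beats"
    return [line[i:i + bus_width_bytes]
            for i in range(0, len(line), bus_width_bytes)]

# Example: a 64-byte cache line carried on a 128-bit (16-byte) data bus
beats = serialize_cache_line(bytes(range(64)), 16)  # 4 beats of 16 bytes each
```

A narrower bus simply yields more, shorter beats for the same line, which is what makes the home node's width-dependent configuration below worthwhile.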

Description

Claims (26)

What is claimed is:
1. A data processing apparatus comprising:
a first device comprising a first data bus having a first bus width;
a second device comprising:
a cache operable to store data in one or more cache lines, the data in each cache line comprising one or more data beats; and
a second data bus having a second bus width, the second data bus operable to transmit the data in a cache line as a series of data beats; and
an interconnect system comprising a home node and a third data bus having a third bus width, the interconnect system configured to transfer data between the first device and the second device;
where the home node is configured to:
receive a read request, from the first device, for data associated with a first address;
when a critical data beat containing the data associated with the first address is present in a cache line at the second device:
send a snoop message to the second device for data at a second address in the cache line; and
when the second bus width is less than the third bus width and either the first bus width is the same as the third bus width or the data processing apparatus is configured to minimize the time taken to transfer the critical data beat from the second device to the first device:
align the first address to a boundary of the third data bus to provide the second address;
otherwise:
provide the first address as the second address.
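The address-selection rule recited in claim 1 can be modeled compactly. A minimal Python sketch, assuming bus widths are given in bytes and are powers of two (all names are illustrative, not from the patent):

```python
def snoop_address(first_addr: int, first_w: int, second_w: int,
                  third_w: int, minimize_latency: bool) -> int:
    """Choose the snoop (second) address for a read of first_addr.

    The first address is aligned down to a third-data-bus boundary only
    when the snooped device's bus is narrower than the interconnect bus
    and either the requester's bus matches the interconnect bus or the
    apparatus is configured to minimize critical-beat latency; otherwise
    the first address is used unchanged.
    """
    if second_w < third_w and (first_w == third_w or minimize_latency):
        return first_addr & ~(third_w - 1)  # align down to third-bus boundary
    return first_addr

# 256-bit (32-byte) interconnect, 128-bit snooped device, matching requester:
snoop_address(0x1234, 32, 16, 32, False)  # aligned to a 32-byte boundary
```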
2. The data processing apparatus of claim 1, where, when the second bus width is less than the third bus width, the second data bus and the third data bus are coupled together via a data combiner of the data processing apparatus.
3. The data processing apparatus of claim 1, where, responsive to the snoop message, the second device is configured to send, to the home node, a data beat containing data associated with the second address first, followed by the remaining data beats of the cache line containing data associated with the second address.
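The critical-beat-first ordering recited in claim 3 is a wrap-around reordering of the cache line's beats. A hypothetical Python sketch:

```python
def critical_beat_first(beats: list, critical_index: int) -> list:
    """Reorder a cache line's beats so the critical beat is sent first,
    with the remaining beats following in wrap-around order."""
    return beats[critical_index:] + beats[:critical_index]

critical_beat_first(["b0", "b1", "b2", "b3"], 2)  # ["b2", "b3", "b0", "b1"]
```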
4. The data processing apparatus of claim 1, where, when the first bus width is less than the third bus width and the data processing apparatus is configured to minimize the time taken to transfer the critical data beat from the second device to the first device, the home node is configured to forward a critical data beat, received from the second device, to the first device without waiting to determine if it can be combined with a subsequent data beat received from the second device.
5. The data processing apparatus of claim 1, where, when the data processing apparatus is configured to minimize the time taken to transfer the critical data beat from the second device to the first device, the home node is configured to forward a critical data beat, received from the second device, to the first device without waiting to determine if it can be combined with a subsequent data beat received from the second device.
6. The data processing apparatus of claim 1, where data transferred on the third data bus is divided into a plurality of chunks and where the data is transferred together with a plurality of validity-bits, each validity-bit associated with a chunk of the plurality of chunks and being indicative of the validity of that chunk.
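The per-chunk validity-bits of claim 6 can be modeled as a simple bit vector, one bit per chunk. A sketch under the assumption that chunk 0 maps to the least significant bit:

```python
def validity_bits(chunk_valid: list) -> int:
    """Pack per-chunk validity flags into a bit vector; bit i indicates
    whether chunk i of the data beat carries valid data."""
    bits = 0
    for i, valid in enumerate(chunk_valid):
        if valid:
            bits |= 1 << i
    return bits

validity_bits([True, False, True, True])  # 0b1101
```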
7. The data processing apparatus of claim 1, where, when the first bus width is less than the third bus width, the first data bus and the third data bus are coupled together via a data splitter of the data processing apparatus.
8. The data processing apparatus of claim 1, further comprising one or more third devices operable to send requests to the home node, each third device having a data bus width.
9. The data processing apparatus of claim 1, further comprising one or more fourth devices operable to receive snoop requests from the home node, each fourth device having a data bus width.
10. The data processing apparatus of claim 1, where the first bus width is selected from a group of bus-widths consisting of 32-bits, 64-bits, 128-bits and 256-bits.
11. The data processing apparatus of claim 1, where the second bus width is selected from a group of bus-widths consisting of 32-bits, 64-bits, 128-bits and 256-bits.
12. A System-on-a-Chip (SoC) comprising the data processing apparatus of claim 1.
13. A method of data transfer in a data processing apparatus, the method comprising:
receiving, by a home node of an interconnect system that couples at least a first device and a second device, a request from the first device to access data associated with a first address, where the data associated with the first address is present in a critical beat of a cache line stored in a local cache of the second device; and
sending a snoop message from the home node to the second device for data in the cache line associated with a second address,
where:
the first device comprises a first data bus having a first bus width;
the second device comprises a second data bus having a second bus width; and
the interconnect system comprises a third data bus having a third bus width,
the method further comprising:
aligning the first address to a boundary of the third data bus to provide the second address when the second bus width is less than the third bus width and either the first bus width is the same as the third bus width or the data processing apparatus is configured to minimize the time taken to transfer the critical data beat from the second device to the first device, and providing the first address as the second address otherwise.
14. The method of claim 13, where the second data bus and the third data bus are coupled via a data combiner, the method further comprising:
combining one or more data beats from the second data bus to provide data beats for the third data bus, when the second bus width is less than the third bus width.
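The combiner of claim 14 merges consecutive narrow (second-bus) beats into one wide (third-bus) beat. A minimal sketch, assuming the third bus width is an integer multiple of the second (names are illustrative):

```python
def combine_beats(narrow_beats: list, second_w: int, third_w: int) -> list:
    """Merge groups of consecutive narrow beats into wide interconnect
    beats; second_w and third_w are bus widths in bytes."""
    ratio = third_w // second_w
    return [b"".join(narrow_beats[i:i + ratio])
            for i in range(0, len(narrow_beats), ratio)]

# Four 32-bit (4-byte) beats combined for a 64-bit (8-byte) interconnect bus:
combine_beats([b"AAAA", b"BBBB", b"CCCC", b"DDDD"], 4, 8)
# [b"AAAABBBB", b"CCCCDDDD"]
```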
15. The method of claim 13, where, responsive to the snoop message, the second device is configured to send, to the home node, a data beat containing data associated with the second address first, followed by the remaining data beats of the cache line containing data associated with the second address.
16. The method of claim 13, further comprising, when the first bus width is less than the third bus width and the data processing apparatus is configured to minimize the time taken to transfer the critical data beat from the second device to the first device:
forwarding, by the home node, a critical data beat, received from the second device, to the first device without waiting to determine if the critical data beat can be combined with a subsequent data beat received from the second device.
17. The method of claim 13, where the first data bus and the third data bus are coupled via a data splitter, the method further comprising:
splitting a data beat received from the third data bus to provide a plurality of narrower data beats for the first data bus when the first bus width is less than the third bus width; and
when a narrower beat of the plurality of narrower beats is a critical beat, sending the critical beat first, followed by remaining narrower beats.
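The splitter of claim 17, including the critical-beat-first ordering of the narrower beats, might be modeled as follows (the function name and the byte-offset interface are assumptions for illustration):

```python
def split_beat(wide_beat: bytes, first_w: int, critical_offset=None) -> list:
    """Split one wide (third-bus) beat into narrower (first-bus) beats.

    If critical_offset (a byte offset within the wide beat) is given, the
    narrow beat containing it is sent first, then the rest in wrap-around
    order.
    """
    narrow = [wide_beat[i:i + first_w]
              for i in range(0, len(wide_beat), first_w)]
    if critical_offset is not None:
        k = critical_offset // first_w
        narrow = narrow[k:] + narrow[:k]
    return narrow
```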
18. A method of data transfer in a data processing apparatus, the method comprising:
receiving, by a home node of an interconnect system that couples at least a first device and a second device, a request from the first device to access data associated with a first address, where the data associated with the first address is present in a critical beat of a cache line stored in a local cache of the second device; and
sending a snoop message from the home node to the second device for data in the cache line associated with a second address,
where:
the first device comprises a first data bus having a first bus width;
the second device comprises a second data bus having a second bus width; and
the interconnect system comprises a third data bus having a third bus width,
the method further comprising:
receiving, by the home node, responsive to the snoop request, a sequence of data beats corresponding to the cache line associated with the second address, the sequence of data beats including the critical data beat; and
when the first bus width is less than the third bus width and the data processing apparatus is configured to minimize the time taken to transfer the critical data beat from the second device to the first device:
forwarding, by the home node, the critical data beat, received from the second device, to the first device without waiting to determine if the critical data beat can be combined with a subsequent data beat of the sequence of data beats received from the second device.
19. The method of claim 18, further comprising:
aligning the first address to a boundary of the third data bus to provide the second address when the second bus width is less than the third bus width and either the first bus width is the same as the third bus width or the data processing apparatus is configured to minimize the time taken to transfer the critical data beat from the second device to the first device; otherwise,
providing the first address as the second address.
20. A method of data transfer in a data processing apparatus, the method comprising:
receiving, by a home node of an interconnect system that couples at least first and second devices, a request from the first device to access data associated with a first address;
determining if data associated with a first address is present in a cache line stored in a local cache of the second device;
sending a snoop request from the home node to the second device for data associated with a second address in the cache line when data associated with the first address is present in the cache line stored in the local cache of the second device;
receiving, by the home node, a response to the snoop request, the response comprising a first sequence of data beats corresponding to the cache line; and
forwarding the response to the first device as a second sequence of data beats;
where:
the first device comprises a first data bus having a first bus width;
the second device comprises a second data bus having a second bus width; and
the interconnect system comprises a third data bus having a third bus width,
the method further comprising:
determining the second address by aligning the first address to a boundary of the third data bus when the second bus width is less than the third bus width and either the first bus width is the same as the third bus width or the data processing apparatus is configured to minimize the time taken to transfer the critical data beat from the second device to the first device; otherwise,
determining the second address to be the first address.
21. The method of claim 20, where, when the first bus width is less than the third bus width and the data processing apparatus is configured to minimize a time taken to transfer the critical data beat from the second device to the first device, forwarding the response to the first device as the second sequence of data beats comprises:
forwarding a critical data beat of the first sequence of data beats to the first device without waiting to determine if the critical data beat can be combined with a subsequent data beat of the first sequence of data beats.
22. A method of data transfer in a data processing apparatus, the method comprising:
receiving, by a home node of an interconnect system that couples at least a first device and a second device, a request from the first device to access data associated with a first address;
determining if data associated with a first address is present in a cache line stored in a local cache of the second device;
sending a snoop request from the home node to the second device for data in the cache line associated with a second address when data associated with the first address is present in the cache line stored in the local cache of the second device;
receiving, by the home node, a response to the snoop request, the response comprising a first sequence of data beats corresponding to the cache line; and
forwarding the response to the first device as a second sequence of data beats;
where:
the first device comprises a first data bus having a first bus width;
the second device comprises a second data bus having a second bus width; and
the interconnect system comprises a third data bus having a third bus width;
the method further comprising configuring operation of the home node dependent upon the first, second and third bus widths.
23. The method of claim 22, where operation of the home node is configured to minimize the number of data transfers on the third data bus.
24. The method of claim 22, where operation of the home node is configured to minimize a time between sending, from the first device, a request for data associated with the first address and receiving, by the first device, the data associated with the first address.
25. A data processing apparatus comprising:
a first device comprising a first data bus having a first bus width;
a second device comprising:
a cache operable to store data in one or more cache lines, the data in each cache line comprising one or more data beats; and
a second data bus having a second bus width, the second data bus operable to transmit the data in a cache line as a series of data beats; and
an interconnect system comprising a home node and a third data bus having a third bus width, the interconnect system configured to transfer data between the first device and the second device;
where the home node comprises a cache line buffer and is configured to:
receive a request from the first device to access data associated with a first address;
determine if data associated with a first address is present in a cache line stored in a local cache of the second device;
send a snoop request to the second device for data in the cache line associated with a second address when data associated with the first address is present in the cache line stored in the local cache of the second device;
receive a data beat of the cache line in response to the snoop request, where the data beat is one beat of a sequence of data beats corresponding to the cache line;
merge the received data beat with any data existing in the cache line buffer;
when the data processing system is optimized for minimum latency and when all of the data associated with the first address is available in the cache line buffer:
send the data associated with the first address to the first device in a data beat on the third bus;
otherwise:
when the cache line buffer contains sufficient data to fully populate a data beat on the third data bus, send the data to the first device in a data beat on the third bus.
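The cache line buffer of claim 25 merges incoming snoop-response beats and then decides when to forward: early, when the apparatus is optimized for minimum latency and the requested bytes are present, or otherwise once a full third-bus beat can be populated. A hypothetical Python model (class and method names are illustrative):

```python
class CacheLineBuffer:
    """Accumulates snoop-response data beats for one cache line and
    decides when data may be forwarded to the requesting device."""

    def __init__(self, line_size: int, third_w: int, min_latency: bool):
        self.data = bytearray(line_size)
        self.valid = [False] * line_size   # per-byte validity
        self.third_w = third_w             # third (interconnect) bus width
        self.min_latency = min_latency     # latency- vs bandwidth-optimized

    def merge(self, offset: int, beat: bytes) -> None:
        """Merge a received data beat with any data already buffered."""
        self.data[offset:offset + len(beat)] = beat
        for i in range(offset, offset + len(beat)):
            self.valid[i] = True

    def can_forward(self, offset: int, nbytes: int) -> bool:
        """Latency-optimized: forward as soon as the requested bytes are
        present. Otherwise: wait until the whole third-bus beat covering
        the request is populated."""
        if self.min_latency:
            return all(self.valid[offset:offset + nbytes])
        base = (offset // self.third_w) * self.third_w
        return all(self.valid[base:base + self.third_w])
```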
26. The data processing apparatus of claim 25, where a physical channel of the interconnect comprises the third data bus and a plurality of additional lines, where data on the third data bus comprises a plurality of data chunks and where the additional lines are configured to transfer validity-bits, each validity-bit associated with a data chunk of the plurality of data chunks and being indicative of the validity of the associated data chunk.
US15/422,691, filed 2017-02-02 (priority 2017-02-02): Data processing apparatus with snoop request address alignment and snoop response time alignment. Active; granted as US10042766B1 (en).

Priority Applications (1)

Application number: US15/422,691. Priority date: 2017-02-02. Filing date: 2017-02-02. Title: Data processing apparatus with snoop request address alignment and snoop response time alignment (US10042766B1).

Applications Claiming Priority (1)

Application number: US15/422,691. Priority date: 2017-02-02. Filing date: 2017-02-02. Title: Data processing apparatus with snoop request address alignment and snoop response time alignment (US10042766B1).

Publications (2)

US20180217932A1 (en): published 2018-08-02
US10042766B1 (en): published 2018-08-07

Family

ID=62979865

Family Applications (1)

US15/422,691 (Active; US10042766B1 (en)): priority date 2017-02-02, filing date 2017-02-02

Country Status (1)

Country: US. Publication: US10042766B1 (en).

Cited By (1)

* Cited by examiner, † Cited by third party
CN113316772A (en)*, priority 2019-02-08, published 2021-08-27, Arm Limited: System, method and apparatus for enabling partial data transmission with indicator

Families Citing this family (2)

US11550720B2 (en), priority 2020-11-24, published 2023-01-10, Arm Limited: Configurable cache coherency controller
US11520722B2 (en)*, priority 2021-04-12, published 2022-12-06, Microsoft Technology Licensing, LLC: On-chip non-power of two data transactions



Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US6128707A (en) * | 1996-12-23 | 2000-10-03 | International Business Machines Corporation | Adaptive writeback of cache line data in a computer operated with burst mode transfer cycles
US6298424B1 (en) * | 1997-12-02 | 2001-10-02 | Advanced Micro Devices, Inc. | Computer system including priorities for memory operations and allowing a higher priority memory operation to interrupt a lower priority memory operation
US20030105933A1 (en) * | 1998-09-17 | 2003-06-05 | Sun Microsystems, Inc. | Programmable memory controller
US20020184460A1 (en) * | 1999-06-04 | 2002-12-05 | Marc Tremblay | Methods and apparatus for combining a plurality of memory access transactions
US20030115385A1 (en) * | 2001-12-13 | 2003-06-19 | International Business Machines Corporation | I/O stress test
US20060136680A1 (en) * | 2004-12-17 | 2006-06-22 | International Business Machines Corporation | Capacity on demand using signaling bus control
US20080120466A1 (en) * | 2006-11-20 | 2008-05-22 | Klaus Oberlaender | Dual access for single port cache
US20160216912A1 (en) * | 2010-01-28 | 2016-07-28 | Hewlett Packard Enterprise Development Lp | Memory Access Methods And Apparatus
US20120198156A1 (en) * | 2011-01-28 | 2012-08-02 | Freescale Semiconductor, Inc. | Selective cache access control apparatus and method thereof
US20140317357A1 (en) * | 2013-04-17 | 2014-10-23 | Advanced Micro Devices, Inc. | Promoting transactions hitting critical beat of cache line load requests

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN113316772A (en) * | 2019-02-08 | 2021-08-27 | Arm有限公司 | System, method and apparatus for enabling partial data transmission with indicator

Also Published As

Publication number | Publication date
US10042766B1 (en) | 2018-08-07

Similar Documents

Publication | Publication Date | Title
US9294403B2 (en) | Mechanism to control resource utilization with adaptive routing
US7761631B2 (en) | Data processing system, method and interconnect fabric supporting destination data tagging
US7627738B2 (en) | Request and combined response broadcasting to processors coupled to other processors within node and coupled to respective processors in another node
US11483260B2 (en) | Data processing network with flow compaction for streaming data transfer
US9208110B2 (en) | Raw memory transaction support
US8169850B2 (en) | Forming multiprocessor systems using dual processors
US10042766B1 (en) | Data processing apparatus with snoop request address alignment and snoop response time alignment
EP3234783B1 (en) | Pointer chasing across distributed memory
US8103791B2 (en) | Synchronized communication in a data processing system
US20080175272A1 (en) | Data processing system, method and interconnect fabric for selective link information allocation in a data processing system
US10489315B2 (en) | Dynamic adaptation of direct memory transfer in a data processing system with mismatched data-bus widths
US7944932B2 (en) | Interconnect fabric for a data processing system
US7809004B2 (en) | Data processing system and processing unit having an address-based launch governor
US20060179253A1 (en) | Data processing system, method and interconnect fabric that protect ownership transfer with a protection window extension
KR102839435B1 (en) | Coherent block read implementation
US20060179197A1 (en) | Data processing system, method and interconnect fabric having a partial response rebroadcast
US12242753B2 (en) | Reduced network load with combined put or get and receiver-managed offset
US7483428B2 (en) | Data processing system, method and interconnect fabric supporting a node-only broadcast
US8254411B2 (en) | Data processing system, method and interconnect fabric having a flow governor
US7254694B2 (en) | Processors interconnect fabric with relay broadcasting and accumulation of partial responses
US11487695B1 (en) | Scalable peer to peer data routing for servers

Legal Events

Date | Code | Title | Description

AS | Assignment

Owner name: ARM LTD, UNITED KINGDOM

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:RINGE, TUSHAR P.;JALAL, JAMSHED;BRUCE, KLAS MAGNUS;AND OTHERS;SIGNING DATES FROM 20170126 TO 20170201;REEL/FRAME:041592/0736

STCF | Information on status: patent grant

Free format text: PATENTED CASE

MAFP | Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4
