Accepted Extended Abstracts

ID | Authors | Title
3 | Kenjiro Takazawa | Pure Nash Equilibria in Weighted Congestion Games with Complementarities and Beyond
8 | David Milec, Ondřej Kubíček and Viliam Lisy | Continual Depth-limited Responses for Computing Counter-strategies in Sequential Games
12 | Haipeng Zhang, Zhiwen Wang and Na Li | MATLight: Traffic Signal Coordinated Control Algorithm Based on Heterogeneous-Agent Mirror Learning With Transformer
13 | Ben Armstrong and Kate Larson | Liquid Democracy for Low-Cost Ensemble Pruning
14 | Abhijat Biswas, Badal Arun Pardhi, Caleb Chuck, Jarrett Holtz, Scott Niekum, Henny Admoni and Alessandro Allievi | Gaze Supervision for Mitigating Causal Confusion in Driving Agents
21 | Ram Rachum, Yonatan Nakar, Bill Tomlinson, Nitay Alon and Reuth Mirsky | Emergent Dominance Hierarchies in Reinforcement Learning Agents
32 | Mathieu Mari, Michał Pawłowski, Runtian Ren and Piotr Sankowski | Multi-level aggregation with delays and stochastic arrivals
41 | Alexander W. Goodall and Francesco Belardinelli | Leveraging Approximate Model-based Shielding for Probabilistic Safety Guarantees in Continuous Environments
50 | Kazi Ashik Islam, Da Qi Chen, Madhav Marathe, Henning Mortveit, Samarth Swarup and Anil Vullikanti | Strategic Routing and Scheduling for Evacuations
67 | Daniel Melcer, Christopher Amato and Stavros Tripakis | Shield Decentralization for Safe Reinforcement Learning in General Partially Observable Multi-Agent Environments
77 | Thayne T. Walker, Nathan Sturtevant and Ariel Felner | Clique Analysis and Bypassing in Continuous-Time Conflict-Based Search
85 | Jiafei Lyu, Le Wan, Xiu Li and Zongqing Lu | Towards Understanding How to Reduce Generalization Gap in Visual Reinforcement Learning
94 | Dave de Jonge and Laura Rodriguez Cima | Attila: a Negotiating Agent for the Game of Diplomacy, Based on Purely Symbolic A.I.
107 | Dapeng Li, Zhiwei Xu, Bin Zhang, Guangchong Zhou, Zeren Zhang and Guoliang Fan | From Explicit Communication to Tacit Cooperation: A Novel Paradigm for Cooperative MARL
108 | Pankaj Deoli, Rohit Kumar, Axel Vierling and Karsten Berns | Evaluation of Robustness of Off-Road Autonomous Driving Segmentation against Adversarial Attacks: A Dataset-Centric Study
113 | Glareh Mir and Michael Beetz | Simulated Robotic Soft Body Manipulation
114 | Yu Niu, Hengxu Zhao and Lei Yu | MA-MIX: Value Function Decomposition for Cooperative Multiagent Reinforcement Learning Based on Multi-Head Attention Mechanism
115 | Ayşe Mutlu Derya | A Comparison of the Myerson Value and the Position Value
117 | Tamara C.P. Florijn, Pinar Yolum and Tim Baarslag | A Negotiator’s Backup Plan: Optimal Concessions with a Reservation Value
140 | Jérôme Botoko Ekila, Jens Nevens, Lara Verheyen, Katrien Beuls and Paul Van Eecke | Decentralised Emergence of Robust and Adaptive Linguistic Conventions in Populations of Autonomous Agents Grounded in Continuous Worlds
142 | Minghong Geng, Shubham Pateria, Budhitama Subagdja and Ah-Hwee Tan | Benchmarking MARL on Long Horizon Sequential Multi-Objective Tasks
143 | Yatharth Kumar, Sarfaraz Equbal, Rohit Gurjar, Swaprava Nath and Rohit Vaish | Fair Scheduling of Indivisible Chores
152 | Qitong Kang, Fuyong Wang, Zhongxin Liu and Zengqiang Chen | TIMAT: Temporal Information Multi-Agent Transformer
154 | Huihui Zhang | Bellman Momentum on Deep Reinforcement Learning
158 | Yasushi Kawase, Bodhayan Roy and Mohammad Azharuddin Sanpui | Contiguous Allocation of Binary Valued Indivisible Items on a Path
166 | Nicolas Bessone, Payam Zahadat and Kasper Stoy | Decentralized Control of Distributed Manipulators: An Information Diffusion Approach
175 | Ramsundar Anandanarayanan, Swaprava Nath and Rohit Vaish | Charging Electric Vehicles Fairly and Efficiently
180 | Kaifeng Zhang, Rui Zhao, Ziming Zhang and Yang Gao | Auto-Encoding Adversarial Imitation Learning
190 | Mihail Stojanovski, Nadjet Bourdache, Grégory Bonnet and Mouaddib Abdel-Illah | Ethical Markov Decision Processes with Moral Worth as Rewards
197 | Emanuel Tewolde and Vincent Conitzer | Game Transformations That Preserve Nash Equilibria or Best Response Sets
213 | Yihong Chen, Cong Wang, Tianpei Yang, Meng Wang, Yingfeng Chen, Jifei Zhou, Chaoyi Zhao, Xinfeng Zhang, Zeng Zhao, Changjie Fan, Zhipeng Hu, Rong Xiong and Long Zeng | Mastering Robot Control through Point-based Reinforcement Learning with Pre-training
215 | Chenxu Wang, Zilong Chen and Huaping Liu | On the Utility of External Agent Intention Predictor for Human-AI Coordination
219 | Jean Marie Lagniez, Emmanuel Lonca and Jean-Guy Mailly | A SAT-based Approach for Argumentation Dynamics
221 | Yao Zhang, Shanshan Zheng and Dengji Zhao | Optimal Diffusion Auctions
222 | Erwan Escudie, Laetitia Matignon and Jacques Saraydaryan | Attention Graph for Multi-Robot Social Navigation with Deep Reinforcement Learning
225 | Pranavi Pathakota, Hardik Meisheri and Harshad Khadilkar | DCT: Dual Channel Training of Action Embeddings for Reinforcement Learning with Large Discrete Action Spaces
226 | Wenlong Wang and Thomas Pfeiffer | Decision Market Based Learning For Multi-agent Contextual Bandit Problems
235 | Kai Zhao, Jianye Hao, Yi Ma, Jinyi Liu, Yan Zheng and Zhaopeng Meng | ENOTO: Improving Offline-to-Online Reinforcement Learning with Q-Ensembles
238 | Emre Erdogan, Rineke Verbrugge and Pinar Yolum | Computational Theory of Mind with Abstractions for Effective Human-Agent Collaboration
239 | Pedro P. Santos, Diogo Carvalho, Miguel Vasco, Alberto Sardinha, Pedro A. Santos, Ana Paiva and Francisco Melo | Centralized Training with Hybrid Execution in Multi-Agent Reinforcement Learning
241 | Márton Benedek, Péter Biró, Gergely Csáji, Matthew Johnson, Daniël Paulusma and Xin Ye | Computing Balanced Solutions for Large International Kidney Exchange Schemes When Cycle Length Is Unbounded
249 | Yongsheng Mei, Hanhan Zhou and Tian Lan | Projection-Optimal Monotonic Value Function Factorization in Multi-Agent Reinforcement Learning
255 | Maxime Toquebiau, Nicolas Bredeche, Faïz Ben Amar and Jae-Yun Jun | Joint Intrinsic Motivation for Coordinated Exploration in Multi-Agent Deep Reinforcement Learning
256 | Saad Khan, Mayank Baranwal and Srikant Sukumar | Decentralized Safe Control for Multi-Robot Navigation in Dynamic Environments with Limited Sensing
257 | Rustam Galimullin and Louwe B. Kuijer | Synthesizing social laws with ATL conditions
261 | Sai Srivatsa Ravindranath, Zhe Feng, Shira Li, Jonathan Ma, Scott Kominers and David Parkes | Deep Learning for Two-Sided Matching Markets
262 | Tesfay Zemuy Gebrekidan, Sebastian Stein and Timothy Norman | Combinatorial Client-Master Multiagent Deep Reinforcement Learning for Task Offloading in Mobile Edge Computing
268 | Alexander Mendelsohn, Donald Sofge and Michael Otte | Enhancing Search and Rescue Capabilities in Hazardous Communication-Denied Environments through Path-Based Sensors with Backtracking
272 | Ayhan Alp Aydeniz, Enrico Marchesini, Christopher Amato and Kagan Tumer | Entropy Seeking Constrained Multiagent Reinforcement Learning
283 | Everardo Gonzalez, Siddarth Viswanathan and Kagan Tumer | Indirect Credit Assignment in a Multiagent System
290 | Andrew Festa, Gaurav Dixit and Kagan Tumer | Influence-Focused Asymmetric Island Model
298 | Hao Zhang, Tianpei Yang, Yan Zheng, Jianye Hao and Matthew E. Taylor | PADDLE: Logic Program Guided Policy Reuse in Deep Reinforcement Learning
301 | Joanna Kaczmarek and Jörg Rothe | NP^PP-Completeness of Control by Adding Players to Change the Penrose–Banzhaf Power Index in Weighted Voting Games
305 | Archit Sood, Shweta Jain and Sujit Gujar | Fairness of Exposure in Online Restless Multi-armed Bandits
307 | Sankarshan Damle and Sujit Gujar | Analyzing Crowdfunding of Public Projects Under Dynamic Beliefs
312 | Sankarshan Damle, Varul Srivastava and Sujit Gujar | No Transaction Fees? No Problem! Achieving Fairness in Transaction Fee Mechanism Design
317 | Varul Srivastava and Sujit Gujar | Decent-BRM: Decentralization through Block Reward Mechanisms
321 | Sambhav Solanki, Sujit Gujar and Shweta Jain | Fairness and Privacy Guarantees in Federated Contextual Bandits
323 | Zixuan Chen, Ze Ji, Shuyang Liu, Jing Huo, Yiyu Chen and Yang Gao | Cognizing and Imitating Robotic Skills via a Dual Cognition-Action Architecture
335 | Ashish Rana, Michael Oesterle and Jannik Brinkmann | GOV-REK: Governed Reward Engineering Kernels for Designing Robust Multi-Agent Reinforcement Learning Systems
337 | Jhih-Ching Yeh and Von-Wun Soo | Toward Socially Friendly Autonomous Driving Using Multi-agent Deep Reinforcement Learning
343 | Kazunori Terada, Yasuo Noma and Masanori Hattori | Persuasion by Shaping Beliefs about Multidimensional Features of a Thing
348 | Binghan Wu, Wei Bao and Bing Zhou | Competitive Analysis of Online Facility Open Problem
351 | Igor Kuznetsov | Guided Exploration in Reinforcement Learning via Monte Carlo Critic Optimization
355 | Xin Zhao, Jiaxin Li, Zhiwei Fang, Yuchen Guo, Jinyuan Zhao, Jie He, Wenlong Chen, Changping Peng and Guiguang Ding | JDRec: Practical Actor-Critic Framework for Online Combinatorial Recommender System
356 | Xinrun Wang, Chang Yang, Shuxin Li, Pengdeng Li, Xiao Huang, Hau Chan and Bo An | Reinforcement Nash Equilibrium Solver
357 | Hao Yin, Fan Chen and Hongjie He | Solving Offline 3D Bin Packing Problem with Large-sized Bin via Two-stage Deep Reinforcement Learning
362 | Chen Wang, Sarah Erfani, Tansu Alpcan and Christopher Leckie | Detecting Anomalous Agent Decision Sequences Based on Offline Imitation Learning
366 | Khaing Phyo Wai, Minghong Geng, Shubham Pateria, Budhitama Subagdja and Ah-Hwee Tan | Explaining Sequences of Actions in Multi-agent Deep Reinforcement Learning Models
372 | Junning Shao, Siwei Wang and Zhixuan Fang | Balanced and Incentivized Learning with Limited Shared Information in Multi-agent Multi-armed Bandit
417 | Zifan Gong, Minming Li and Houyu Zhou | Facility location games with task allocation
418 | Jiarui Gan, Rupak Majumdar, Debmalya Mandal and Goran Radanovic | Sequential principal-agent problems with communication: efficient computation and learning
428 | Yael Sabato, Amos Azaria and Noam Hazon | Source Detection in Networks using the Stationary Distribution of a Markov Chain
433 | Saar Cohen and Noa Agmon | Near-Optimal Online Resource Allocation in the Random-Order Model
447 | Michael Tarlton, Gustavo Mello and Anis Yazidi | Neurological Based Timing Mechanism for Reinforcement Learning
462 | Stephen Cranefield, Sriashalya Srivathsan and Jeremy Pitt | Inferring Lewisian common knowledge using theory of mind reasoning in a forward-chaining rule engine
463 | Lukasz Pelcner, Matheus Do Carmo Alves, Leandro Soriano Marcolino, Paula Harrison and Peter Atkinson | Incentive-based MARL Approach for Commons Dilemmas in Property-based Environments
474 | Moumita Choudhury, Sandhya Saisubramanian, Hao Zhang and Shlomo Zilberstein | Minimizing Negative Side Effects in Cooperative Multi-Agent Systems using Distributed Coordination
478 | Yu Quan Chong, Jiaoyang Li and Katia Sycara | Optimal Task Assignment and Path Planning using Conflict-Based Search with Precedence and Temporal Constraints
479 | Zida Wu, Mathieu Lauriere, Samuel Jia Cong Chua, Matthieu Geist, Olivier Pietquin and Ankur Mehta | Population-aware Online Mirror Descent for Mean-Field Games by Deep Reinforcement Learning
480 | Weibo Jiang, Shaohui Li, Zhi Li, Yuxin Ke, Zhizhuo Jiang, Yaowen Li and Yu Liu | Dual-Policy-Guided Offline Reinforcement Learning with Optimal Stopping
492 | Ridhima Bector, Abhay Aradhya, Chai Quek and Zinovi Rabinovich | Adaptive Discounting of Training Time Attacks
497 | Yiwen Zhu, Jinyi Liu, Wenya Wei, Qianyi Fu, Yujing Hu, Zhou Fang, Bo An, Jianye Hao, Tangjie Lv and Changjie Fan | vMFER: von Mises-Fisher Experience Resampling Based on Uncertainty of Gradient Directions for Policy Improvement of Actor-Critic Algorithms
510 | Haochen Shi, Zhiyuan Sun, Xingdi Yuan, Marc-Alexandre Côté and Bang Liu | OPEx: A Large Language Model-Powered Framework for Embodied Instruction Following
528 | Shiqi Lei, Kanghoon Lee, Linjing Li, Jinkyoo Park and Jiachen Li | ELA: Exploited Level Augmentation for Offline Learning in Zero-Sum Games
543 | Huijie Tang, Federico Berto, Zihan Ma, Chuanbo Hua, Kyuree Ahn and Jinkyoo Park | HiMAP: Learning Heuristics-Informed Policies for Large-Scale Multi-Agent Pathfinding
548 | Shiyao Zhang, Yuji Dong, Yichuan Zhang, Terry Payne and Jie Zhang | Large Language Model Assisted Multi-Agent Dialogue for Ontology Alignment
552 | Iosif Apostolakis, Zeynep G. Saribatur and Johannes P. Wallner | Abstracting Assumptions in Structured Argumentation
554 | Yirui Zhang and Zhixuan Fang | Decentralized Competing Bandits in Many-to-One Matching Markets
555 | Mauricio Diaz-Ortiz Jr, Benjamin Kempinski, Daphne Cornelisse, Yoram Bachrach and Tal Kachman | Pruning Neural Networks Using Cooperative Game Theory
559 | Pascal van der Vaart, Neil Yorke-Smith and Matthijs Spaan | Bayesian Ensembles for Exploration in Deep Q-Learning
561 | Jarod Vanderlynden, Philippe Mathieu and Romain Warlop | Understanding the impact of promotions on consumer behavior
563 | Daisuke Kikuta, Hiroki Ikeuchi, Kengo Tajiri, Yuta Toyama, Masaki Nakamura and Yuusuke Nakano | Electric Vehicle Routing for Emergency Power Supply with Deep Reinforcement Learning
567 | Amirreza Bagheridelouee, Marzie Nilipour, Masoud Seddighin and Maziar Shamsipour | Metric Distortion Under Public-Spirited Voting
569 | Megha Bose, Praveen Paruchuri and Akshat Kumar | Factored MDP based Moving Target Defense with Dynamic Threat Modeling
573 | Wentao Ye, Bo Liu, Yuan Luo and Jianwei Huang | Dual Role AoI-based Incentive Mechanism for HD map Crowdsourcing
574 | Karl Jochen Micheel and Anaëlle Wilczynski | Fairness in Repeated House Allocation
579 | Yangyang Zhao, Mehdi Dastani and Shihan Wang | Bootstrapped Policy Learning: Goal Shaping for Efficient Task-oriented Dialogue Policy Learning
583 | Jayden Teoh, Wenjun Li and Pradeep Varakantham | Unifying Regret and State-Action Space Coverage for Effective Unsupervised Environment Design
593 | Matthew Sheldon, Dario Paccagnan and Giuliano Casale | Cournot Games for Closed Cournot Queueing Games with Applications to Mobility Systems Networks
595 | Edith Elkind, Svetlana Obraztsova and Nicholas Teh | Verifying Proportionality in Temporal Voting
599 | Daniele Orner, Elizabeth Ondula, Nick Mumero and Richa Goyal | Sentimental Agents: Combining Sentiment Analysis and Non-Bayesian Updating for Cooperative Decision-Making
616 | Jayakrishnan Madathil, Neeldhara Misra and Yash More | Opinion Diffusion on Society Graphs Based on Approval Ballots
621 | Yongjie Yang | On the Complexity of Candidates-Embedded Multiwinner Voting under the Hausdorff Function
630 | Bin Chen and Zehong Cao | HLG: Bridging Human Heuristic Knowledge and Deep Reinforcement Learning for Optimal Agent Performance
637 | Jinyun Tong, Bart De Keijzer and Carmine Ventre | Reducing Systemic Risk in Financial Networks through Donations
658 | Piotr Faliszewski, Łukasz Janeczko, Andrzej Kaczmarczyk, Grzegorz Lisowski, Piotr Skowron and Stanisław Szufa | Strategic Cost Selection in Participatory Budgeting
659 | Alexandra Cimpean, Catholijn Jonker, Pieter Libin and Ann Nowé | A Reinforcement Learning Framework For Studying Group And Individual Fairness
672 | Timo Speith | Unlocking the Potential of Machine Ethics with Explainability
677 | Maxime Reynouard, Olga Gorelkina and Rida Laraki | BAR Nash Equilibrium and Application to Blockchain Design
696 | Somnath Hazra, Pallab Dasgupta and Soumyajit Dey | Addressing Permutation Challenges in Multi-Agent Reinforcement Learning
702 | Yunfan Zhao, Nikhil Behari, Edward Hughes, Edwin Zhang, Dheeraj Nagaraj, Karl Tuyls, Aparna Taneja and Milind Tambe | Towards Zero Shot Learning in Restless Multi-armed Bandits
708 | Mohammad Irfan, Hau Chan and Jared Soundy | Computing Nash Equilibria in Multidimensional Congestion Games
709 | Alberto Olivares-Alarcos, Sergi Foix, Júlia Borràs, Gerard Canal and Guillem Alenyà | Ontological modeling and reasoning for comparison and contrastive narration of robot plans
712 | Yi Mao and Andrew Perrault | Time-Constrained Restless Multi-Armed Bandits with Applications to City Service Scheduling
719 | Peng Tang, Lifan Wang, Weidong Qiu, Zheng Huang and Qiangmin Wang | Fuzzy Clustered Federated Learning Under Mixed Data Distributions
720 | Hadi Hosseini, Joshua Kavner, Tomasz Wąs and Lirong Xia | Distribution of Chores with Information Asymmetry
724 | Siqi Chen, Jianing Zhao, Kai Zhao, Gerhard Weiss, Fengyun Zhang, Ran Su, Yang Dong, Daqian Li and Kaiyou Lei | ANOTO: Improving Automated Negotiation via Offline-to-Online Reinforcement Learning
725 | Stefan Roesch, Stefanos Leonardos and Yali Du | The Selfishness Level of Social Dilemmas
727 | Sheng Tian, Hong Shen, Yuan Tian and Hui Tian | Consensus of Nonlinear Multi-Agent Systems with Semi-Markov Switching Under DoS Attacks
729 | Sharlin Utke, Jeremie Houssineau and Giovanni Montana | Embracing Relational Reasoning in Multi-Agent Actor-Critic
738 | Gokce Dayanikli, Mathieu Lauriere and Jiacheng Zhang | Deep Learning for Population-Dependent Controls in Mean Field Control Problems with Common Noise
744 | Erin Richardson, Savannah Buchner, Jacob Kintz, Torin Clark and Allison Anderson | Psychophysiological Models of Cognitive States Can Be Operator-Agnostic
749 | Xianjie Zhang, Jiahao Sun, Chen Gong, Kai Wang, Yifei Cao, Hao Chen and Yu Liu | Mutual Information as Intrinsic Reward of Reinforcement Learning Agents for On-demand Ride Pooling
757 | Daji Landis and Nikolaj Ignatieff Schwartzbach | Which Games are Unaffected by Absolute Commitments?
766 | Viviana Arrigoni, Giulio Attenni, Novella Bartolini, Matteo Finelli and Gaia Maselli | MiKe: Task Scheduling for UAV-based Parcel Delivery
768 | Alexander Rutherford, Benjamin Ellis, Matteo Gallici, Jonathan Cook, Andrei Lupu, Garðar Ingvarsson, Timon Willi, Akbir Khan, Christian Schroeder de Witt, Alexandra Souly, Saptarashmi Bandyopadhyay, Mikayel Samvelyan, Minqi Jiang, Robert Lange, Shimon Whiteson, Bruno Lacerda, Nick Hawes, Tim Rocktäschel, Chris Lu and Jakob Foerster | JaxMARL: Multi-Agent RL Environments in JAX
769 | Jacobus Smit and Fernando Santos | Fairness and Cooperation between Independent Reinforcement Learners through Indirect Reciprocity
773 | Alessandro Aloisio, Vittorio Bilo, Antonio Mario Caruso, Michele Flammini and Cosimo Vinci | Approximately Fair Allocation of Indivisible Items with Random Valuations
775 | Michael Akintunde, Vahid Yazdanpanah, Asieh Salehi Fathabadi, Corina Cirstea, Mehdi Dastani and Luc Moreau | Actual Trust in Multiagent Systems
776 | Gianvincenzo Alfano, Sergio Greco, Francesco Parisi and Irina Trubitsyna | General Epistemic Abstract Argumentation Framework: Semantics and Complexity
790 | Rafael Pina, Varuna De Silva, Corentin Artaud and Xiaolan Liu | Fully Independent Communication in Multi-Agent Reinforcement Learning
796 | Chaitanya Kharyal, Sai Krishna Gottipati, Tanmay Sinha, Srijita Das and Matthew E. Taylor | GLIDE-RL: Grounded Language Instruction through DEmonstration in RL
805 | Youssef Hamadi and Gauthier Picard | Towards Socially-Acceptable Multi-Criteria Resolution of the 4D-Contracts Repair Problem
810 | Sam Williams and Jyotirmoy Deshmukh | Potential Games on Cubic Splines for Multi-Agent Motion Planning of Autonomous Agents
814 | Michael Y Fatemi, Wesley A Suttle and Brian M Sadler | Deceptive Path Planning via Reinforcement Learning with Graph Neural Networks
818 | Jiehua Chen and William Zwicker | Cutsets and EF1 Fair Division of Graphs
824 | Hau Chan, Xinliang Fu, Minming Li and Chenhao Wang | Mechanism Design for Reducing Agent Distances to Prelocated Facilities
840 | Matt Hare, Douglas Salt, Ric Colasanti, Richard Milton, Mike Batty, Alison Heppenstall and Gary Polhill | Taking Agent-Based Social Simulation to the Next Level Using Exascale Computing: Potential Use-Cases, Capacity Requirements and Threats
842 | Binyu Zhao, Wei Zhang and Zhaonian Zou | Distance-Aware Attentive Framework for Multi-Agent Collaborative Perception in Presence of Pose Error
845 | Calarina Muslimani and Matthew Taylor | Leveraging Sub-Optimal Data for Human-in-the-Loop Reinforcement Learning
868 | Gaël Gendron, Yang Chen, Mitchell Rogers, Yiping Liu, Mihailo Azhar, Shahrokh Heidari, David Arturo Soriano Valdez, Kobe Knowles, Padriac O’Leary, Simon Eyre, Michael Witbrock, Gillian Dobbie, Jiamou Liu and Patrice Delmas | Behaviour Modelling of Social Animals via Causal Structure Discovery and Graph Neural Networks
876 | Anindya Sarkar, Alex DiChristofano, Sanmay Das, Patrick Fowler, Nathan Jacobs and Yevgeniy Vorobeychik | Geospatial Active Search for Preventing Evictions
880 | Redha Taguelmimt, Samir Aknine, Djamila Boukredera, Narayan Changder and Tuomas Sandholm | Efficient Size-based Hybrid Algorithm for Optimal Coalition Structure Generation
894 | William Yue, Bo Liu and Peter Stone | Overview of t-DGR: A Trajectory-Based Deep Generative Replay Method for Continual Learning in Decision Making
901 | Ava Pettet, Yunuo Zhang, Baiting Luo, Kyle Wray, Hendrik Baier, Aron Laszka, Abhishek Dubey and Ayan Mukhopadhyay | Decision Making in Non-Stationary Environments with Policy-Augmented Search
904 | Shao-Hung Chan, Zhe Chen, Dian-Lun Lin, Yue Zhang, Daniel Harabor, Sven Koenig, Tsung-Wei Huang and Thomy Phan | Anytime Multi-Agent Path Finding using Operator Parallelism in Large Neighborhood Search
905 | Maya Viswanathan and Ruta Mehta | On the existence of EFX under picky or non-differentiative agents
914 | Arpita Biswas, Yiduo Ke, Samir Khuller and Quanquan Liu | Fair Allocation of Conflicting Courses under Additive Utilities
947 | Nilson Mori Lazarin, Carlos Pantoja and Jose Viterbo | A Specific-Purpose Linux Distribution for Embedded BDI-based Multi-agent Systems
974 | Marwa Abdulhai, Micah Carroll, Justin Svegliato, Anca Dragan and Sergey Levine | Defining Deception in Decision Making
976 | Yuxin Chen, Chen Tang, Ran Tian, Chenran Li, Jinning Li, Masayoshi Tomizuka and Wei Zhan | Quantifying Agent Interaction in Multi-agent Reinforcement Learning for Cost-efficient Generalization
982 | Federico Berto, Chuanbo Hua, Junyoung Park and Jinkyoo Park | Efficiently Solving Min-Max Routing Problems via Parallel Autoregressive Policies
983 | Ben Aoki-Sherwood, Catherine Bregou, David Liben-Nowell, Kiran Tomlinson and Thomas Zeng | Bounding consideration probabilities in consider-then-choose ranking models
997 | Xuan Kien Phung and Sylvie Hamel | Optimal majority rules and quantitative Condorcet properties of setwise Kemeny voting schemes
999 | John Randolph, Amy Greenwald and Denizalp Goktas | Banzhaf Power in Hierarchical Games
1010 | Yansong Li and Shuo Han | Efficient Collaboration with Unknown Agents: Ignoring Similar Agents without Checking Similarity
1015 | Akshat Kumar | Difference of Convex Functions Programming for Policy Optimization in Reinforcement Learning
1023 | Redha Taguelmimt, Samir Aknine, Djamila Boukredera, Narayan Changder and Tuomas Sandholm | A Multiagent Path Search Algorithm for Large-Scale Coalition Structure Generation
1027 | Bruce M. Kapron and Koosha Samieefar | On the Computational Complexity of Quasi-Variational Inequalities and Multi-Leader-Follower Games
1029 | Titas Chakraborty and Parth Shettiwar | Non Stationary Bandits with Periodic Variation
1051 | Martina Baiardi, Samuele Burattini, Giovanni Ciatto, Danilo Pianini, Andrea Omicini and Alessandro Ricci | Concurrency model of BDI programming frameworks: why should we control it?
1067 | Alexey Gorbatovski and Sergey Kovalchuk | Reinforcement learning for question answering in programming domain using public community scoring as a human feedback
1068 | Ganesh Ramanathan, Simon Mayer, Simon Hess and Andres Gomez | Improving Utilization and Sustainability of Low-power Wireless Sensors through Decentralized Role Allocation in a Multi-agent System
1073 | Karthik Sama, Jayati Deshmukh and Srinath Srinivasa | Social Identities and Responsible Agency
1074 | Janvi Chhabra, Jayati Deshmukh and Srinath Srinivasa | Modelling the Dynamics of Subjective Identity in Allocation Games
1075 | Ganesh Ramanathan, Simon Mayer and Andrei Ciortea | Semantic Bridges in Engineering: Integrating Knowledge to Enable Autonomous Systems for Automation
1077 | Berk Buzcu, Emre Kuru and Reyhan Aydogan | User-centric Explanation Strategies for Interactive Recommenders
1083 | Georgios Chionas, Pedro Braga, Stefanos Leonardos, Carmine Ventre, Georgios Piliouras and Piotr Krysta | Who gets the Maximal Extractable Value? A Dynamic Sharing Blockchain Mechanism
1095 | Philipp Altmann, Adelina Bärligea, Jonas Stein, Michael Kölle, Thomas Gabor, Thomy Phan and Claudia Linnhoff-Popien | Challenges for Reinforcement Learning in Quantum Computing
1097 | Ruixi Luo, Kai Jin and Zelin Ye | Simple $k$-crashing Plan with a Good Approximation Ratio
1100 | Hafez Ghaemi, Hamed Kebriaei, Alireza Ramezani Moghaddam and Majid Nili Ahmadabadi | Risk-Sensitive Multi-Agent Reinforcement Learning in Network Aggregative Markov Games
1107 | Tianyi Yang, Yuxiang Zhai, Dengji Zhao, Xinwei Song and Miao Li | Truthful and Stable One-sided Matching on Networks
1115 | Pankaj Kumar | Deep Hawkes Process for High-Frequency Market Making
1116 | Prabhat Kumar Chand, Apurba Das and Anisur Rahaman Molla | Agent-Based Triangle Counting and its Applications in Anonymous Graphs
1124 | Tim French | Aleatoric Predicates: Reasoning about Marbles
1128 | Gogulapati Sreedurga | Hybrid Participatory Budgeting: Divisible, Indivisible, and Beyond