AMD Schola

AMD Schola is a library for developing reinforcement learning (RL) agents in Unreal® Engine, and training them with your favorite Python-based RL frameworks: Gym, RLlib, and Stable Baselines 3.

Download the latest version - v2.0.1

What’s new in AMD Schola v2

Flexible inference architecture with agent/policy/stepper system

AMD Schola v2 introduces a powerful and flexible architecture that decouples the inference process into components for maximum flexibility and reusability. This modular design allows you to mix and match different policies, stepping strategies, and agent implementations to suit your specific needs.

Key Components:

  • Agent Interface - Define an agent that takes actions and makes observations.

    • UInferenceComponent - Add inference to any actor.
    • AInferencePawn - Standalone pawn-based agents.
    • AInferenceController - AI controller pattern for complex behaviors.
  • Policy Interface - Plug in different inference backends, to turn observations into actions.

    • UNNEPolicy - Native ONNX inference with Unreal Engine’s Neural Network Engine.
    • UBlueprintPolicy - Custom Blueprint-based decision making.
    • Extensible interface for custom policy implementations, and new inference providers.
  • Stepper Objects - Control inference execution patterns by coordinating agents and policies.

    • SimpleStepper - Synchronous, straightforward inference.
    • PipelinedStepper - Overlap inference with simulation for better throughput.
    • Build custom steppers for specialized performance requirements.

This architecture means you can easily switch between inference backends, optimize performance characteristics, and compose behaviors without rewriting your agent logic. Whether you’re prototyping with Blueprints or deploying production-ready neural networks, the same agent interface works seamlessly with your chosen policy and execution strategy.
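
To make the decoupling concrete, here is a minimal conceptual sketch in plain Python (not Schola’s actual C++ API; every name here is an illustrative stand-in for the agent/policy/stepper split described above):

```python
# Conceptual sketch only: Schola's real interfaces are Unreal Engine C++;
# these Python names are stand-ins for the agent/policy/stepper split.
from typing import Any, Protocol

class Policy(Protocol):
    def decide(self, observation: Any) -> Any: ...

class Agent(Protocol):
    def observe(self) -> Any: ...
    def act(self, action: Any) -> None: ...

class SimpleStepper:
    """Synchronous stepping: observe -> infer -> act, once per tick."""
    def __init__(self, agent: Agent, policy: Policy):
        self.agent, self.policy = agent, policy

    def step(self) -> None:
        obs = self.agent.observe()
        action = self.policy.decide(obs)   # swapping `policy` swaps the
        self.agent.act(action)             # backend without touching the agent
```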

Minari dataset support

AMD Schola v2 introduces native support for the Minari dataset format, the standard for offline RL and imitation learning datasets. Minari provides a unified interface for storing and loading trajectory data, making it easier to share demonstrations and datasets across different projects and research communities.
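
For orientation, loading and iterating a Minari dataset from Python looks like the following; the dataset ID below is a hypothetical placeholder, not a published Schola dataset:

```python
# Minimal sketch of reading a Minari dataset with the `minari` package.
import minari

dataset = minari.load_dataset("schola/demos-v0")  # hypothetical dataset ID
print(dataset.total_episodes, dataset.total_steps)

for episode in dataset.iterate_episodes():
    # Each EpisodeData bundles aligned trajectory arrays.
    print(episode.actions.shape, episode.rewards.sum(), episode.terminations[-1])
```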

Dynamic agent management

One of the most powerful improvements in AMD Schola v2 is robust support for agents being spawned and deleted mid-episode. Previous versions required either a static set of agents throughout an episode or a predefined spawning function, but v2 now handles dynamic populations seamlessly.

This enables realistic scenarios like:

  • Battle Royale / Survival Games - Agents can be eliminated and removed from training without breaking the episode.
  • Population Simulations - Spawn new agents based on game events or environmental triggers.
  • Dynamic Team Composition - Add or remove team members on the fly.
  • Procedural Scenarios - Dynamically create agents as players progress through procedurally generated content.

The system lets you manage lifecycles the way you want: simply mark agents as terminated when they die, or start reporting observations when they spawn. This makes it much easier to build realistic, dynamic environments that mirror actual game scenarios.
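
The sketch below illustrates the underlying convention with a toy, entirely hypothetical multi-agent environment using RLlib-style dict-keyed data: an agent ID appears in the dicts while it is alive, reports terminated exactly once when it dies, and simply starts appearing when it spawns.

```python
# Toy, hypothetical multi-agent environment showing the dict-keyed
# convention that supports mid-episode spawning and removal.
import random

class DynamicPopulationEnv:
    def __init__(self):
        self.alive = {"agent_0", "agent_1"}
        self.next_id = 2

    def step(self, actions):
        obs, rewards, terminateds = {}, {}, {}
        for agent_id in list(self.alive):
            if random.random() < 0.1:          # stand-in for "eliminated in-game"
                self.alive.discard(agent_id)
                obs[agent_id] = 0.0
                terminateds[agent_id] = True   # final report; omitted afterwards
            else:
                obs[agent_id] = random.random()
                rewards[agent_id] = 1.0
                terminateds[agent_id] = False
        if random.random() < 0.2:              # stand-in for "spawned by gameplay"
            new_id = f"agent_{self.next_id}"
            self.next_id += 1
            self.alive.add(new_id)
            obs[new_id] = random.random()      # first observation after spawning
        terminateds["__all__"] = not self.alive
        return obs, rewards, terminateds
```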

Enhanced command-line interface

Training from the command line is now more intuitive than ever:

Terminal window
# Stable Baselines 3
schola sb3 train ppo ...
# Ray RLlib
schola rllib train ppo ...
# Utilities
schola compile-proto
schola build-docs

The new CLI, built with cyclopts, provides better error messages, auto-completion support, and a more consistent interface across different RL frameworks.
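
For readers unfamiliar with cyclopts, a nested `schola sb3 train`-style interface is built roughly like this; a generic sketch, not Schola’s actual CLI source:

```python
# Generic sketch of a nested CLI with cyclopts (not Schola's real CLI code).
from cyclopts import App

app = App(name="schola")
sb3 = App(name="sb3")   # sub-command group: `schola sb3 ...`
app.command(sb3)

@sb3.command
def train(algorithm: str, timesteps: int = 100_000):
    """Train with Stable Baselines 3 (illustrative body)."""
    print(f"training {algorithm} for {timesteps} timesteps")

if __name__ == "__main__":
    app()   # cyclopts parses argv, generates --help, and validates types
```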

Unreal Blueprints improvements

Working in Unreal Engine Blueprints is smoother than ever:

  • Instanced struct-based objects with full Blueprint support for all points and spaces.
  • Enhanced Blueprint Utilities for spaces and points.

Updated framework support

AMD Schola v2 has been updated to support the latest versions of all major RL frameworks and libraries:

  • Gymnasium - Full support for the latest Gymnasium API (1.1+).
  • Ray RLlib New API Stack - Compatible with the latest Ray RLlib features and algorithms.
  • Stable-Baselines3 2.x - Updated to work with the newest SB3 release.

These updates ensure you can leverage the latest features, bug fixes, and performance improvements from the RL ecosystem while training your agents in Unreal Engine.
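
For reference, the Gymnasium 1.x reset/step contract that compatible code targets looks like this; "CartPole-v1" stands in for any Schola-backed environment ID:

```python
# The Gymnasium 1.x API: reset returns (obs, info); step returns a 5-tuple.
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)
terminated = truncated = False
while not (terminated or truncated):
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
env.close()
```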

Features

Inference in C++

Schola provides tools for connecting and controlling agents with ONNX models inside Unreal Engine, allowing for inference with or without Python.

Simple Unreal interfaces

Schola exposes simple interfaces in Unreal Engine for the user to implement, allowing you to quickly build and develop reinforcement learning environments.

Modular components

Environments in Schola are modular so you can quickly design new agents from existing components, such as sensors and actuators.

Multi-agent training

Train multiple agents to compete against each other at the same time using RLlib and multi-agent environments built with Schola.
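
As a hedged sketch, multi-agent training with RLlib’s config API typically looks like the following; "MyScholaTagEnv" and the policy names are hypothetical stand-ins:

```python
# Sketch of multi-agent PPO with RLlib's config API (names are hypothetical).
from ray.rllib.algorithms.ppo import PPOConfig

config = (
    PPOConfig()
    .environment("MyScholaTagEnv")  # a registered multi-agent env (hypothetical)
    .multi_agent(
        policies={"runner", "tagger"},
        # Route each agent ID to the policy that controls it.
        policy_mapping_fn=lambda agent_id, *a, **kw: (
            "runner" if agent_id.startswith("runner") else "tagger"
        ),
    )
)
algo = config.build()
results = algo.train()
```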

Vectorized training

Run multiple copies of your environment within the same Unreal Engine process to accelerate training.
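
Schola runs the environment copies inside a single Unreal Engine process; from the Python side this behaves like any vectorized setup, sketched here with Stable-Baselines3 ("CartPole-v1" is a stand-in ID):

```python
# Generic vectorized-training sketch with Stable-Baselines3.
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env

vec_env = make_vec_env("CartPole-v1", n_envs=8)  # 8 parallel environment copies
model = PPO("MlpPolicy", vec_env, verbose=1)
model.learn(total_timesteps=50_000)
```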

Headless training

Run training without rendering to significantly improve training throughput.

AMD Schola v1.3 sample environments

Basic

The Basic environment features an agent that can move in the X-dimension and receives a small reward for going five steps in one direction and a bigger reward for going in the opposite direction.

MazeSolver: Using raycasts

The MazeSolver environment features a static maze that the agent learns to solve as fast as possible. The agent observes the environment using raycasts, moves by teleporting in two dimensions, and is given a reward for getting closer to the goal.

3DBall: Physics based environments

The 3DBall environment features an agent that is trying to balance a ball on top of itself. The agent can rotate itself and receives a reward every step until the ball falls.

BallShooter: Building your own actuator

The BallShooter environment features a rotating turret that learns to aim and shoot at randomly moving targets. The agent can rotate in either direction, and detects the targets using a cone-shaped raycast.

Pong: Collaborative training

The Pong environment features two agents playing a collaborative game of Pong. The agents receive a reward every step as long as the ball has not hit the wall behind either agent. The game ends when the ball hits the wall behind either agent.

Tag: Competitive multi-agent training

The Tag environment features a 3v1 game of tag, where one agent (the runner) must run away from the other agents, which try to collide with it. The agents move using forward, left, and right movement input, and observe the environment with a combination of raycasts and global position data.

RaceTrack: Controlling chaos vehicles with Schola

The RaceTrack environment features a car, implemented with Chaos Vehicles, that learns to follow a race track. The agent controls the throttle, brake, and steering of the car, and can see its velocity and position relative to the center of the track.

Endnotes

Unreal® is a trademark or registered trademark of Epic Games, Inc. in the United States of America and elsewhere.

“Python” is a trademark or registered trademark of the Python Software Foundation.

Version history

  • Updated python dependencies to prevent new releases of dependencies from breaking install.
  • Fixed an issue with Camera Sensors using incorrect space dimensions.
  • Fixed several occasional compilation errors on Microsoft® Windows®.
  • Removed debug files included in v2.0.0 and updated tests to not use these files.

Related software

AMD Radeon™ Anti-Lag 2
AMD Radeon™ Anti-Lag 2 reduces the system latency by applying frame alignment between the CPU and GPU jobs.

Related news and technical articles

Training an X-ARM 5 robotic arm with AMD Schola and Unreal Engine
Train a robot arm with reinforcement learning in AMD Schola using Unreal® Engine, progressively increasing task complexity to adapt to changing conditions.

AMD FidelityFX Super Resolution 4 plugin updated for Unreal Engine 5.7
Our AMD FSR 4 plugin has been updated to support Unreal® Engine 5.7, empowering you to build expansive, lifelike, and high-performance worlds.

Sim-to-real in AMD Schola
Replicating a physical line-following device in Unreal® Engine and training it with reinforcement learning using AMD Schola.

AMD FSR Upscaling Unreal Engine plugin guide
Download the AMD FSR plugin for Unreal Engine, and learn how to install and use it.

AMD FidelityFX Super Resolution 4 now available on GPUOpen
Discover AMD FSR 4, our cutting-edge ML-based upscaler (with UE5 plugin) which delivers significant image quality improvements over FSR 3.1.

Get ready for FSR Redstone with AMD FSR 3.1.4 and our new UE 5.6 plugin
Integrate AMD FSR 3.1.4 to get ready for FSR Redstone, updated Unreal Engine plugin now available.

AMD TressFX 5.0 for Unreal Engine 5 is now available
AMD TressFX 5.0 has been updated for high-quality simulation and rendering of realistic hair and fur in Unreal Engine 5.

Advancing AI in Video Games with AMD Schola
By connecting popular open-source RL libraries (written in Python) with the visual and physics capabilities of Unreal Engine, Schola empowers AI researchers and game developers alike to push the boundaries of intelligent gameplay.

Related videos

Advancing AI in video games with AMD Schola | HTEC Days 2025 - YouTube link
Join Alexander Cann, Lead Developer at Schola, and Mehdi Saeedi, AI Lead at Schola, as they take you through the fascinating world of reinforcement learning (RL) and its transformative impact on gaming. They'll be joined by Gabor Sines, Sr. Fellow Engineer at AMD, as moderator.

Unreal Engine 4 TressFX 5.0
Watch our video explaining what UE4 TressFX 5.0 is, and how to use it. TressFX is designed to simulate and render realistic hair and fur.

Subsurface Scattering in Unreal Forward Renderer (2017) - YouTube link
This talk discusses how subsurface scattering is implemented in Unreal Engine’s forward renderer. Because UE4 implements subsurface scattering as a screen space effect, it wasn’t available on the forward path by default, so a new technique had to be implemented, one that still works with the UE4 material system and editor.

A Sampling of UE4 Rendering Improvements (2017) - YouTube link
Arne Schober at Epic gives a rundown of a number of recent (2017) interesting improvements to UE4’s renderers, from MSAA support in the forward renderer for VR, to compositing a usable UI on top of a HDR image.
