PhasicFlow
Parallel, highly efficient code (CPU and GPU) for DEM and CFD-DEM simulations.
PhasicFlow is a robust, open-source C++ framework designed for the efficient simulation of granular materials using the Discrete Element Method (DEM). Leveraging parallel computing paradigms, PhasicFlow is capable of executing simulations on shared-memory multi-core architectures, including CPUs and NVIDIA GPUs (CUDA-enabled). The core parallelization strategy focuses on loop-level parallelism, enabling significant performance gains on modern hardware. Users can seamlessly transition between serial execution on standard PCs, parallel execution on multi-core CPUs (OpenMP), and accelerated simulations on GPUs. Currently, PhasicFlow supports simulations involving up to 80 million particles on a single desktop workstation. Detailed performance benchmarks are available on the PhasicFlow Wiki.
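The loop-level parallelism described above is built on Kokkos (listed under dependencies below), which allows the same particle loop to run serially, across OpenMP threads, or on a CUDA device, depending on the backend chosen at compile time. The following is a minimal, stand-alone sketch of that idea, not code taken from PhasicFlow itself; the view names, particle count, and the simple explicit position update are illustrative assumptions.

```cpp
// Minimal Kokkos sketch of backend-portable loop-level parallelism
// (illustrative only; not taken from the PhasicFlow source).
#include <Kokkos_Core.hpp>

int main(int argc, char* argv[])
{
    Kokkos::initialize(argc, argv);
    {
        const int numParticles = 1000;   // hypothetical particle count
        const double dt = 1.0e-5;        // hypothetical time-step size

        // 1D views of particle positions and velocities; they live in the memory
        // space of the chosen backend (host for Serial/OpenMP, device for CUDA).
        Kokkos::View<double*> x("x", numParticles);
        Kokkos::View<double*> v("v", numParticles);

        // One loop body, executed serially, with OpenMP threads, or on the GPU,
        // depending on how Kokkos was configured at compile time.
        Kokkos::parallel_for("integratePositions", numParticles,
            KOKKOS_LAMBDA(const int i) {
                x(i) += dt * v(i);   // simple explicit position update (illustrative)
            });
        Kokkos::fence();
    }
    Kokkos::finalize();
    return 0;
}
```

With this approach, switching between the serial, OpenMP, and GPU execution modes listed in the next section is a matter of which backend the code is compiled against, not of rewriting the loops themselves.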
Scalable Parallelism: MPI Integration
Ongoing development includes the integration of MPI-based parallelization with dynamic load balancing. This enhancement will extend PhasicFlow's capabilities to distributed memory environments, such as multi-GPU workstations and high-performance computing clusters. Upon completion, PhasicFlow will offer six distinct execution modes:
- Serial Execution: Single-core CPU.
- Shared-Memory Parallelism: Multi-core CPU (OpenMP).
- GPU Acceleration: NVIDIA GPU (CUDA).
- Distributed-Memory Parallelism: MPI.
- Hybrid Parallelism: MPI + OpenMP.
- Multi-GPU Parallelism: MPI + CUDA.
PhasicFlow can be compiled for both CPU and GPU execution.
- Current Development (v-1.0): Comprehensive build instructions are available here.
- Latest Release (v-0.1): Detailed build instructions are available here.
In-depth documentation, including code structure, features, and usage guidelines, is accessible via the online documentation portal.
Practical examples and simulation setups are provided in the tutorials directory. For detailed explanations and step-by-step guides, please refer to the tutorial section on the PhasicFlow Wiki.
We welcome contributions to PhasicFlow! Whether you're a developer or a new user, there are many ways to get involved. Here's how you can help:
- Bug reports
- Suggestions for a better user experience
- Feature requests and algorithm improvements
- Tutorials, simulation case setups, and documentation
- Direct code contributions
For more details on how you can contribute to PhasicFlow, see this page.
PhasicFlowPlus is an extension of PhasicFlow that facilitates the simulation of particle-fluid systems using resolved and unresolved CFD-DEM methods. The repository for PhasicFlowPlus can be found here.
If you are using PhasicFlow in your research or industrial work, please cite the following article:
@article{NOROUZI2023108821,
  title    = {PhasicFlow: A parallel, multi-architecture open-source code for DEM simulations},
  journal  = {Computer Physics Communications},
  volume   = {291},
  pages    = {108821},
  year     = {2023},
  issn     = {0010-4655},
  doi      = {https://doi.org/10.1016/j.cpc.2023.108821},
  url      = {https://www.sciencedirect.com/science/article/pii/S0010465523001662},
  author   = {H.R. Norouzi},
  keywords = {Discrete element method, Parallel computing, CUDA, GPU, OpenMP, Granular flow}
}

PhasicFlow relies on the following external libraries:
- Kokkos: A community-led performance portability ecosystem within the Linux Foundation's High-Performance Software Foundation (HPSF). (https://github.com/kokkos/kokkos)
- CLI11 1.8: A command-line interface parser developed by the University of Cincinnati. (https://github.com/CLIUtils/CLI11)