
Introducing HPC with a Raspberry Pi Cluster

The document discusses the implementation of High-Performance Computing (HPC) using a Raspberry Pi cluster in educational settings, highlighting the author's background, inspirations, and teaching materials. It details the author's personal project, 'Tweety Pi', and various demonstrations utilizing Raspberry Pi for HPC training, alongside challenges and feedback from teaching experiences. Future work includes enhancements in configuration management and public engagement initiatives using HPC for research applications.

Introducing HPC with a Raspberry Pi Cluster
Colin Sauzé <cos@aber.ac.uk>
Research Software Engineer, Supercomputing Wales Project, Aberystwyth University
A practical use of, and a good excuse to build, Raspberry Pi clusters
Overview
● About Me
● Inspirations
● Why teach HPC with a Raspberry Pi?
● My Raspberry Pi cluster
● Experiences from teaching
● Future Work
About Me
● Research Software Engineer with the Supercomputing Wales project
– A four-university partnership supplying HPC systems
– Two physical HPCs
● PhD in Robotics
– Experience with Linux on single-board computers
– Lots of Raspberry Pi projects
Inspiration #1: Los Alamos National Laboratory
● 750-node cluster
● Test system for software development
● Avoids tying up the real cluster
Inspiration #2: Wee Archie/Archlet
● EPCC’s Raspberry Pi cluster
● Archie: 18x Raspberry Pi 2s (4 cores each)
● Archlet: smaller 4- or 5-node clusters
● Used for outreach demos
● Setup instructions: https://github.com/EPCCed/wee_archlet
(Image from https://raw.githubusercontent.com/EPCCed/wee_archlet/master/images/IMG_20170210_132818620.jpg)
Inspiration #3: Swansea’s Raspberry Pi Cluster
● 16x Raspberry Pi 3s
● CFD demo using a Kinect sensor
● Demoed at the Swansea Festival of Science 2018
Why Teach with a Raspberry Pi Cluster?
● Avoids loading real clusters that are doing actual research
– Less fear from learners that they might break something
● Resource limits are more apparent
● More control over the environment
● Hardware is less abstract
● No need for accounts on a real HPC
My Cluster
● “Tweety Pi”
– 10x Raspberry Pi Model B version 1
– 1x Raspberry Pi 3 as head/login node
– Raspbian Stretch
● Head node acts as a WiFi access point
– Internet via phone or laptop
Demo Software
● British Science Week 2019
– Simple demo estimating Pi with Monte Carlo methods
– MPI based
– GUI to control how many jobs launch and to show queuing
● Swansea CFD demo
– Needs more compute power (16x Raspberry Pi 3 vs 10x Raspberry Pi 1)
● Wee Archie/Archlet demos
– Many demos available (I only found this recently)
– https://github.com/EPCCed/wee_archie
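The British Science Week demo estimates Pi with Monte Carlo methods over MPI; the real demo code (MPI plus a GUI) lives in the repositories linked at the end. As a minimal serial sketch of just the estimator idea (function name and sample count are mine, not from the demo):

```python
import random

def estimate_pi(samples, seed=42):
    """Monte Carlo estimate of pi: sample random points in the unit
    square and count the fraction landing inside the quarter circle.
    That fraction approximates pi/4, so multiply by 4."""
    rng = random.Random(seed)  # seeded for reproducibility
    hits = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return 4.0 * hits / samples

# More samples -> better estimate; in the MPI version each rank would
# run its own batch and the results would be averaged.
print(estimate_pi(100_000))
</imports>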
Making a Realistic HPC Environment
● MPICH
● Slurm
● Quotas on home directories
● NFS-mounted home directories
● Software modules
● Network booting compute nodes
Network booting hack
● No PXE boot support on the original Raspberry Pi (or the Raspberry Pi B+ and 2)
● Kernel + bootloader on SD card
● Root filesystem on NFS
– cmdline.txt contains:
console=tty1 root=/dev/nfs nfsroot=10.0.0.10:/nfs/node_rootfs,vers=3 ro ip=dhcp elevator=deadline rootwait
● SD cards can be identical: a small 50 MB image, easy to replace
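The head-node side of this setup would export the compute-node root filesystem read-only over NFS. A sketch of what its /etc/exports might contain, assuming the 10.0.0.0/24 network implied by the nfsroot address above (the exact export options are an assumption, not taken from the talk):

```
# /etc/exports on the head node (illustrative sketch)
# Shared read-only root filesystem for all compute nodes:
/nfs/node_rootfs  10.0.0.0/24(ro,no_root_squash,no_subtree_check)
# Writable home directories, matching the NFS-mounted homes above:
/home             10.0.0.0/24(rw,no_root_squash,no_subtree_check)
```

After editing, `exportfs -ra` reloads the export table without restarting the NFS server.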
Teaching Materials
● Based on the Introduction to HPC with Supercomputing Wales carpentry-style lesson:
– What is an HPC?
– Logging in
– Filesystems and transferring data
– Submitting/monitoring jobs with Slurm
– Profiling
– Parallelising code, Amdahl’s law
– MPI
– HPC best practice
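The Amdahl's-law part of the lesson comes down to one formula: if a fraction p of the work parallelises and the rest is serial, the speedup on N cores is 1 / ((1 - p) + p/N). A minimal sketch (function name is mine):

```python
def amdahl_speedup(p, n):
    """Amdahl's law: speedup on n cores when fraction p of the
    runtime is parallelisable and (1 - p) stays serial."""
    return 1.0 / ((1.0 - p) + p / n)

# Even on the 10-node cluster the ceiling is obvious: with 90% of the
# work parallel, 10 nodes give only about a 5.3x speedup, because the
# serial 10% dominates as n grows.
print(amdahl_speedup(0.9, 10))
```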
Experiences from Teaching: STFC Summer School
● New PhD students in solar physics
– Not yet registered at universities, so no academic accounts
● 15 people each time
– First time using HPC for many
– Most had some Unix experience
● Subset of the Supercomputing Wales introduction to HPC carpentry lesson
Feedback
● Very positive
● Many seemed to enjoy playing around with SSH/SCP
– First time using a remote shell for some
– Others were more adventurous than they might have been on a real HPC
● Main complaint was lack of time (only 1.5 hours)
– Only got as far as covering basic job submission
– Quick theoretical run-through of MPI and Amdahl’s law
– Probably 3-4 hours of material available
● Queuing became very apparent
– 10 nodes, 15 users
– “watch squeue” running on screen during practical parts
Problems
● Slurm issues on day 1
– Accidentally overwrote a system user when creating accounts
● WiFi via laptop/phone was slow
– When users connect to the cluster it is also their internet connection
– Relied on this for access to the course notes
Experiences from Teaching: Supercomputing Wales Training
● Approximately 10 people
– Mix of staff and research students
– Mixed experience levels
– All intending to use a real HPC
● Used the Raspberry Pi cluster and a real HPC simultaneously
– Same commands run on both
● Useful backup system for those with locked accounts
● Feedback good
– Helped make HPC more tangible
Future Work
● Configuration management tool (Ansible/Chef/Puppet/Salt etc.) instead of a script for configuration
● CentOS/OpenHPC stack instead of Raspbian
● Public engagement demo which focuses on our research
– Analysing satellite imagery
– Simulating the monsters from Monster Lab (https://monster-lab.org/)
More Information
● Setup instructions and scripts: https://github.com/colinsauze/pi_cluster
● Teaching material: https://github.com/SCW-Aberystwyth/Introduction-to-HPC-with-RaspberryPi
● Email me: cos@aber.ac.uk

