
rgl/tinkerbell-vagrant


This is a Vagrant Environment for playing with Tinkerbell for provisioning AMD64 and ARM64 (e.g. Raspberry Pi) machines.

Usage

This provisioner environment essentially runs all the Tinkerbell components inside a single virtual machine.

For it to work, you need to connect the provisioner virtual network to a physical network that reaches the physical machines.

I'm using Ubuntu 20.04 as the host, qemu/kvm/libvirt as the hypervisor, and a tp-link tl-sg108e switch.

NB You can also use this vagrant environment without the switch (see the Vagrantfile).

The network is connected as:

The tp-link tl-sg108e switch is configured with rgl/ansible-collection-tp-link-easy-smart-switch as:

NB this line of switches is somewhat insecure: at the very least, its configuration protocol (UDP port 29808 and TCP port 80) uses clear-text messages. For more information see How I can gain control of your TP-LINK home switch and Information disclosure vulnerability in TP-Link Easy Smart switches.

The host network is configured by netplan with /etc/netplan/config.yaml as:

```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    enp3s0:
      link-local: []
      addresses:
        - 10.1.0.1/24
        - 192.168.0.254/24
  bridges:
    # NB this is equivalent of executing:
    #       ip link add name br-rpi type bridge
    #       ip addr flush dev br-rpi
    #       ip addr add dev br-rpi 10.3.0.1/24
    #       ip link set dev br-rpi up
    #       ip addr ls dev br-rpi
    #       ip -d link show dev br-rpi
    #       ip route
    # NB later, you can remove with:
    #       ip link set dev br-rpi down
    #       ip link delete dev br-rpi
    br-rpi:
      link-local: []
      addresses:
        - 10.3.0.1/24
      interfaces:
        - vlan.rpi
  vlans:
    vlan.wan:
      id: 2
      link: enp3s0
      link-local: []
      addresses:
        - 192.168.1.1/24
      gateway4: 192.168.1.254
      nameservers:
        addresses:
          # cloudflare+apnic public dns resolvers.
          # see https://en.wikipedia.org/wiki/1.1.1.1
          - "1.1.1.1"
          - "1.0.0.1"
          # google public dns resolvers.
          # see https://en.wikipedia.org/wiki/8.8.8.8
          #- "8.8.8.8"
          #- "8.8.4.4"
    # NB this is equivalent of executing:
    #       ip link add link enp3s0 vlan.rpi type vlan proto 802.1q id 2
    #       ip link set dev vlan.rpi up
    #       ip -d link show dev vlan.rpi
    # NB later, you can remove with:
    #       ip link set dev vlan.rpi down
    #       ip link delete dev vlan.rpi
    vlan.rpi:
      id: 3
      link: enp3s0
      link-local: []
```

NB For more information about VLANs see the IEEE 802.1Q VLAN Tutorial.
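For illustration only, the 802.1Q tagging that the switch and the vlan.wan/vlan.rpi interfaces perform can be sketched in Python (a hypothetical helper, not part of this environment): a 4-byte tag, the TPID 0x8100 followed by a 16-bit TCI carrying the VLAN id, is inserted between the source MAC address and the EtherType.

```python
import struct

def add_vlan_tag(frame: bytes, vid: int, pcp: int = 0, dei: int = 0) -> bytes:
    # 802.1Q inserts a 4-byte tag between the source MAC and the EtherType:
    # TPID 0x8100, then TCI = PCP (3 bits) | DEI (1 bit) | VID (12 bits).
    tci = (pcp << 13) | (dei << 12) | (vid & 0x0FFF)
    return frame[:12] + struct.pack('!HH', 0x8100, tci) + frame[12:]

# dst MAC + src MAC (12 dummy bytes), IPv4 EtherType, dummy payload.
frame = bytes(12) + b'\x08\x00' + b'payload'
tagged = add_vlan_tag(frame, vid=3)  # vid 3 is the vlan.rpi VLAN above
print(tagged[12:16].hex())  # → 81000003
```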

Build and install the Ubuntu Linux vagrant box.

Build Debian OSIE in ../tinkerbell-debian-osie.

Optionally, build and install the following vagrant boxes (which must be using the UEFI variant):

Log in to docker hub to get higher rate limits.

Launch the provisioner with:

```bash
# NB this takes about 30m in my machine. YMMV.
vagrant up --no-destroy-on-error --no-tty provisioner
```

Enter the provisioner machine, and tail the relevant logs with:

```bash
vagrant ssh provisioner
sudo -i
cd ~/tinkerbell-sandbox/deploy/compose
docker compose logs --follow tink-server boots nginx
```

In another terminal, launch the uefi worker machine with:

```bash
vagrant up --no-destroy-on-error --no-tty uefi
```

In another terminal, watch the workflow progress with:

```bash
vagrant ssh provisioner
sudo -i
watch-hardware-workflows uefi
```

You should eventually see something like:

```
+----------------------+--------------------------------------+
| FIELD NAME           | VALUES                               |
+----------------------+--------------------------------------+
| Workflow ID          | dc2ff4c3-13b1-11ec-a4c5-0242ac1a0004 |
| Workflow Progress    | 100%                                 |
| Current Task         | hello-world                          |
| Current Action       | info                                 |
| Current Worker       | 00000000-0000-4000-8000-080027000001 |
| Current Action State | STATE_SUCCESS                        |
+----------------------+--------------------------------------+
+--------------------------------------+-------------+-------------+----------------+---------------------------------+---------------+
| WORKER ID                            | TASK NAME   | ACTION NAME | EXECUTION TIME | MESSAGE                         | ACTION STATUS |
+--------------------------------------+-------------+-------------+----------------+---------------------------------+---------------+
| 00000000-0000-4000-8000-080027000001 | hello-world | hello-world |              0 | Started execution               | STATE_RUNNING |
| 00000000-0000-4000-8000-080027000001 | hello-world | hello-world |              3 | finished execution successfully | STATE_SUCCESS |
| 00000000-0000-4000-8000-080027000001 | hello-world | info        |              0 | Started execution               | STATE_RUNNING |
| 00000000-0000-4000-8000-080027000001 | hello-world | info        |              0 | finished execution successfully | STATE_SUCCESS |
+--------------------------------------+-------------+-------------+----------------+---------------------------------+---------------+
```

NB After a workflow action is executed, tink-worker will not re-execute it, even if you reboot the worker. You must create a new workflow, e.g. `provision-workflow hello-world uefi && watch-hardware-workflows uefi`.

You can see the worker and action logs from Grafana Explore (its address is displayed at the end of the provisioning).

From within the worker machine, you can query the metadata endpoint:

NB this endpoint returns the data set in the TODO field of the particular worker hardware document.

```bash
metadata_url="$(cat /proc/cmdline | tr ' ' '\n' | awk '/^tinkerbell=/{sub(/^tinkerbell=/, ""); print $0 ":50061/metadata"}')"
wget -qO- "$metadata_url"
```

Then repeat the process with the uefi worker machine.

To execute a more realistic workflow, you can install one of the following:

```bash
provision-workflow debian        uefi && watch-hardware-workflows uefi
provision-workflow flatcar-linux uefi && watch-hardware-workflows uefi
provision-workflow proxmox-ve    uefi && watch-hardware-workflows uefi
provision-workflow ubuntu        uefi && watch-hardware-workflows uefi
provision-workflow windows-2022  uefi && watch-hardware-workflows uefi
```

See which containers are running in the provisioner machine:

```bash
vagrant ssh provisioner
sudo -i
# see https://docs.docker.com/engine/reference/commandline/ps/#formatting
python3 <<'EOF'
import io
import json
import subprocess
from tabulate import tabulate

def info():
  p = subprocess.Popen(
    ('docker', 'ps', '-a', '--no-trunc', '--format', '{{.ID}}'),
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT)
  for id in (l.rstrip("\r\n") for l in io.TextIOWrapper(p.stdout)):
    p = subprocess.Popen(
      ('docker', 'inspect', id),
      stdout=subprocess.PIPE,
      stderr=subprocess.STDOUT)
    for c in json.load(p.stdout):
      yield (c['Name'], c['Config']['Image'], c['Image'])

print(tabulate(sorted(info()), headers=('ContainerName', 'ImageName', 'ImageId')))
EOF
```

At the time of writing these were the containers running by default:

```
ContainerName                        ImageName                                 ImageId
-----------------------------------  ----------------------------------------  ------------------------------------------------------------------------
/compose-boots-1                     10.3.0.2/debian-boots                     sha256:397e3206222130ada624953220e8cb38c66365a4e31df7ce808f639c9a141599
/compose-db-1                        postgres:14-alpine                        sha256:eb82a397daaf176f244e990aa6f550422a764a88759f43e641c3a1323953deb7
/compose-hegel-1                     quay.io/tinkerbell/hegel:sha-89cb9dc8     sha256:23c22f0bb8779fb4b0fdab8384937c54afbbed6b45aefb3554f2d54cb2c7cffa
/compose-images-to-local-registry-1  quay.io/containers/skopeo:latest          sha256:9f5c670462ec0dc756fe52ec6c4d080f62c01a0003b982d48bb8218f877a456a
/compose-osie-bootloader-1           nginx:alpine                              sha256:b46db85084b80a87b94cc930a74105b74763d0175e14f5913ea5b07c312870f8
/compose-osie-work-1                 bash:4.4                                  sha256:bc8b0716d7386a05b5b3d04276cc7d8d608138be723fbefd834b5e75db6a6aeb
/compose-registry-1                  registry:2.7.1                            sha256:b8604a3fe8543c9e6afc29550de05b36cd162a97aa9b2833864ea8a5be11f3e2
/compose-registry-auth-1             httpd:2                                   sha256:ad17c88403e2cedd27963b98be7f04bd3f903dfa7490586de397d0404424936d
/compose-tink-cli-1                  quay.io/tinkerbell/tink-cli:sha-3743d31e  sha256:8c90de15e97362a708cde2c59d3a261f73e3a4242583a54222b5e18d4070acaf
/compose-tink-server-1               quay.io/tinkerbell/tink:sha-3743d31e      sha256:fb21c42c067588223b87a5c1f1d9b2892f863bfef29ce5fcd8ba755cfa0a990b
/compose-tink-server-migration-1     quay.io/tinkerbell/tink:sha-3743d31e      sha256:fb21c42c067588223b87a5c1f1d9b2892f863bfef29ce5fcd8ba755cfa0a990b
/compose-tls-gen-1                   cfssl/cfssl                               sha256:655abf144edde793a3ff1bc883cc82ca61411efb35d0d403a52f202c9c3cd377
/compose_tls-gen_run_67135735bbb3    cfssl/cfssl                               sha256:655abf144edde793a3ff1bc883cc82ca61411efb35d0d403a52f202c9c3cd377
/grafana                             grafana/grafana:8.2.5                     sha256:ddfae340d0681fe1a10582b06a2e8ae402196df9d429f0c1cefbe8dedca73cf0
/loki                                grafana/loki:2.4.1                        sha256:e3e722f23de3fdbb8608dcf1f8824dec62cba65bbfd5ab5ad095eed2d7c5872a
/meshcommander                       meshcommander                             sha256:aff2fc5004fb7f77b1a14a82c35af72e941fa33715e66c2eab5a5d253820d4bb
/portainer                           portainer/portainer-ce:2.9.2              sha256:a1c22f3d250fda6b357aa7d2148dd333a698805dd2878a08eb8f055ca8fb4e99
```

Those containers were started with docker compose and you can use it to inspect the tinkerbell containers:

```bash
vagrant ssh provisioner
sudo -i
cd ~/tinkerbell-sandbox/deploy/compose
docker compose ps
docker compose logs -f
```

You can also use the Portainer application at the address that is displayed after the vagrant environment is launched (e.g. at http://10.3.0.2:9000).

Tinkerbell Debian OSIE

This vagrant environment uses the Debian based OSIE instead of the LinuxKit (aka Hook) based OSIE.

You can log in to it using the osie username and password.

Raspberry Pi

Install the RPI4-UEFI-IPXE firmware into an SD card as described at https://github.com/rgl/rpi4-uefi-ipxe.

Insert an external disk (e.g. a USB flash drive or USB SSD) to use as the target of your Tinkerbell Action.

Intel NUC

You can use the Intel Integrator Toolkit ITK6.efi EFI application to set the SMBIOS properties.

Troubleshooting

Network Packet Capture

You can see all the network traffic from within the provisioner by running:

```bash
vagrant ssh-config provisioner >tmp/provisioner-ssh-config.conf
# NB this ignores the following ports:
#          22: SSH
#       16992: AMT HTTP
#       16994: AMT Redirection/TCP
#        4000: MeshCommander
wireshark -k -i <(ssh -F tmp/provisioner-ssh-config.conf provisioner 'sudo tcpdump -s 0 -U -n -i eth1 -w - not tcp port 22 and not port 16992 and not port 16994 and not port 4000')
```
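The capture filter passed to tcpdump above is just a conjunction of port exclusions; a small hypothetical Python helper makes that structure explicit (the port list mirrors the comments above):

```python
def capture_filter(ssh_port: int = 22, ignored_ports=(16992, 16994, 4000)) -> str:
    # Exclude our own SSH capture transport, plus the AMT HTTP/Redirection
    # and MeshCommander ports, so the capture is not flooded by them.
    terms = [f'not tcp port {ssh_port}']
    terms += [f'not port {p}' for p in ignored_ports]
    return ' and '.join(terms)

print(capture_filter())
# → not tcp port 22 and not port 16992 and not port 16994 and not port 4000
```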

You can also do it from the host by capturing traffic from the br-rpi or vlan.rpi interface.

Database

Tinkerbell uses the tinkerbell PostgreSQL database; you can access its console with, e.g.:

```bash
vagrant ssh provisioner
sudo -i
docker exec -i compose-db-1 psql -U tinkerbell -c '\dt'
docker exec -i compose-db-1 psql -U tinkerbell -c '\d hardware'
docker exec -i compose-db-1 psql -U tinkerbell -c 'select * from template'
docker exec -i compose-db-1 psql -U tinkerbell -c 'select * from workflow'
docker exec -i compose-db-1 psql -U tinkerbell -c 'select * from workflow_event order by created_at desc'
```
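If you want to script such queries, the docker exec invocations can be built programmatically. A minimal sketch (the container and user names are the ones used above; the helper itself is hypothetical):

```python
import shlex

def psql_exec_argv(query: str, container: str = 'compose-db-1',
                   user: str = 'tinkerbell') -> list[str]:
    # argv for running a single psql command inside the db container.
    return ['docker', 'exec', '-i', container, 'psql', '-U', user, '-c', query]

# shlex.join shows the equivalent shell command line, with safe quoting.
print(shlex.join(psql_exec_argv('select * from workflow')))
```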

Notes

  • All workflow actions run as --privileged containers.
