Not able to use custom docker image, healthcheck failing #17821

Unanswered

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

I am using a custom Dockerfile to integrate Docker-in-Docker on Kubernetes. The problem is that, to do so, I have to install the Coder server and launch it as a post-start hook, and this seems to break the healthcheck from the UI, so I am not able to connect to the pod terminal from the UI.

This is the Dockerfile:

```dockerfile
FROM nestybox/ubuntu-focal-systemd-docker:latest

RUN /bin/bash -c 'apt-get update && \
    apt-get install --yes tmux wget && \
    rm -rf /var/lib/apt/lists/*'

WORKDIR /home/coder
```

This is the template:

```hcl
terraform {
  required_providers {
    coder = {
      source = "coder/coder"
    }
    kubernetes = {
      source = "hashicorp/kubernetes"
    }
  }
}

provider "coder" {}

data "coder_parameter" "cpu" {
  name         = "cpu"
  display_name = "CPU"
  description  = "The number of CPU cores"
  default      = "1"
  type         = "number"
  icon         = "/icon/memory.svg"
  mutable      = true
  validation {
    min = 1
    max = 8
  }
}

data "coder_parameter" "memory" {
  name         = "memory"
  display_name = "Memory"
  description  = "The amount of memory in GB"
  default      = "2"
  type         = "number"
  icon         = "/icon/memory.svg"
  mutable      = true
  validation {
    min = 1
    max = 256
  }
}

data "coder_parameter" "home_disk_size" {
  name         = "home_disk_size"
  display_name = "Home disk size"
  description  = "The size of the home disk in GB"
  default      = "10"
  type         = "number"
  icon         = "/emojis/1f4be.png"
  mutable      = false
  validation {
    min = 1
    max = 1000
  }
}

provider "kubernetes" {
  config_path = null
}

data "coder_workspace" "me" {}
data "coder_workspace_owner" "me" {}

resource "coder_agent" "main" {
  os             = "linux"
  arch           = "amd64"
  startup_script = <<-EOT
    set -e

    # Start the pre-installed code-server in the background
    curl -fsSL https://code-server.dev/install.sh | sh -s -- --method=standalone --prefix=/usr/local/code-server
    /usr/local/code-server/bin/code-server --auth none --port 13337 >code-server.log 2>&1 &
  EOT

  metadata {
    display_name = "CPU Usage"
    key          = "0_cpu_usage"
    script       = "coder stat cpu"
    interval     = 10
    timeout      = 1
  }
  metadata {
    display_name = "RAM Usage"
    key          = "1_ram_usage"
    script       = "coder stat mem"
    interval     = 10
    timeout      = 1
  }
  metadata {
    display_name = "Home Disk"
    key          = "3_home_disk"
    script       = "coder stat disk --path $${HOME}"
    interval     = 60
    timeout      = 1
  }
  metadata {
    display_name = "CPU Usage (Host)"
    key          = "4_cpu_usage_host"
    script       = "coder stat cpu --host"
    interval     = 10
    timeout      = 1
  }
  metadata {
    display_name = "Memory Usage (Host)"
    key          = "5_mem_usage_host"
    script       = "coder stat mem --host"
    interval     = 10
    timeout      = 1
  }
  metadata {
    display_name = "Load Average (Host)"
    key          = "6_load_host"
    # get load avg scaled by number of cores
    script   = <<EOT
      echo "`cat /proc/loadavg | awk '{ print $1 }'` `nproc`" | awk '{ printf "%0.2f", $1/$2 }'
    EOT
    interval = 60
    timeout  = 1
  }
}

# code-server
resource "coder_app" "code-server" {
  agent_id     = coder_agent.main.id
  slug         = "code-server"
  display_name = "code-server"
  icon         = "/icon/code.svg"
  url          = "http://localhost:13337?folder=/home/coder"
  subdomain    = false
  share        = "owner"
  healthcheck {
    url       = "http://localhost:13337/healthz"
    interval  = 3
    threshold = 10
  }
}

resource "kubernetes_persistent_volume_claim" "home" {
  metadata {
    name      = "coder-${data.coder_workspace_owner.me.name}-${data.coder_workspace.me.name}-home"
    namespace = "coder"
    labels = {
      "app.kubernetes.io/name"     = "coder-pvc"
      "app.kubernetes.io/instance" = "coder-pvc-${data.coder_workspace.me.id}"
      "app.kubernetes.io/part-of"  = "coder"
      "com.coder.resource"         = "true"
      "com.coder.workspace.id"     = data.coder_workspace.me.id
      "com.coder.workspace.name"   = data.coder_workspace.me.name
      "com.coder.user.id"          = data.coder_workspace_owner.me.id
      "com.coder.user.username"    = data.coder_workspace_owner.me.name
    }
    annotations = {
      "com.coder.user.email" = data.coder_workspace_owner.me.email
    }
  }
  wait_until_bound = false
  spec {
    access_modes       = ["ReadWriteOnce"]
    storage_class_name = "longhorn"
    resources {
      requests = {
        storage = "${data.coder_parameter.home_disk_size.value}Gi"
      }
    }
  }
}

resource "kubernetes_deployment" "main" {
  count = data.coder_workspace.me.start_count
  depends_on = [
    kubernetes_persistent_volume_claim.home
  ]
  wait_for_rollout = false
  metadata {
    name      = "${data.coder_workspace_owner.me.name}-${data.coder_workspace.me.name}"
    namespace = "coder"
    labels = {
      "app.kubernetes.io/name"     = "coder-workspace"
      "app.kubernetes.io/instance" = "coder-workspace-${data.coder_workspace.me.id}"
      "app.kubernetes.io/part-of"  = "coder"
      "com.coder.resource"         = "true"
      "com.coder.workspace.id"     = data.coder_workspace.me.id
      "com.coder.workspace.name"   = data.coder_workspace.me.name
      "com.coder.user.id"          = data.coder_workspace_owner.me.id
      "com.coder.user.username"    = data.coder_workspace_owner.me.name
    }
    annotations = {
      "com.coder.user.email"            = data.coder_workspace_owner.me.email
      "io.kubernetes.cri-o.userns-mode" = "auto:size=65536"
    }
  }
  spec {
    replicas = 1
    selector {
      match_labels = {
        "app.kubernetes.io/name"     = "coder-workspace"
        "app.kubernetes.io/instance" = "coder-workspace-${data.coder_workspace.me.id}"
        "app.kubernetes.io/part-of"  = "coder"
        "com.coder.resource"         = "true"
        "com.coder.workspace.id"     = data.coder_workspace.me.id
        "com.coder.workspace.name"   = data.coder_workspace.me.name
        "com.coder.user.id"          = data.coder_workspace_owner.me.id
        "com.coder.user.username"    = data.coder_workspace_owner.me.name
      }
    }
    strategy {
      type = "Recreate"
    }
    template {
      metadata {
        labels = {
          "app.kubernetes.io/name"     = "coder-workspace"
          "app.kubernetes.io/instance" = "coder-workspace-${data.coder_workspace.me.id}"
          "app.kubernetes.io/part-of"  = "coder"
          "com.coder.resource"         = "true"
          "com.coder.workspace.id"     = data.coder_workspace.me.id
          "com.coder.workspace.name"   = data.coder_workspace.me.name
          "com.coder.user.id"          = data.coder_workspace_owner.me.id
          "com.coder.user.username"    = data.coder_workspace_owner.me.name
        }
        annotations = {
          "io.kubernetes.cri-o.userns-mode" = "auto:size=65536"
        }
      }
      spec {
        image_pull_secrets {
          name = "regcred"
        }

        runtime_class_name = "sysbox-runc"

        container {
          name              = "dev"
          image             = "translatednet/cpu-machine:sysbox"
          image_pull_policy = "Always"
          command           = ["/sbin/init"]
          env {
            name  = "CODER_AGENT_TOKEN"
            value = coder_agent.main.token
          }

          lifecycle {
            post_start {
              exec {
                command = ["/bin/bash", "-c", "${coder_agent.main.startup_script}"]
              }
            }
          }

          resources {
            requests = {
              "cpu"    = "250m"
              "memory" = "512Mi"
            }
            limits = {
              "cpu"    = "${data.coder_parameter.cpu.value}"
              "memory" = "${data.coder_parameter.memory.value}Gi"
            }
          }
          volume_mount {
            mount_path = "/home/coder"
            name       = "home"
            read_only  = false
          }
        }
        volume {
          name = "home"
          persistent_volume_claim {
            claim_name = kubernetes_persistent_volume_claim.home.metadata.0.name
            read_only  = false
          }
        }
        affinity {
          pod_anti_affinity {
            preferred_during_scheduling_ignored_during_execution {
              weight = 1
              pod_affinity_term {
                topology_key = "kubernetes.io/hostname"
                label_selector {
                  match_expressions {
                    key      = "app.kubernetes.io/name"
                    operator = "In"
                    values   = ["coder-workspace"]
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
```

And this is the healthcheck from inside the container (sometimes it reports expired, sometimes alive):

```console
root@santurini-test-5b794f88c9-9t6zk:/home/coder# curl http://127.0.0.1:13337/healthz
{"status":"expired","lastHeartbeat":1747047539127}
```
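For reference, the `/healthz` endpoint on port 13337 is served by code-server itself and returns a small JSON payload. A quick way to pull the `status` field out in the shell, without extra tooling, is shown below; the `resp` variable here is just a copy of the sample response above, not a live request:

```shell
# Sample payload copied from the healthcheck above; in a live pod you would
# instead populate it with: resp=$(curl -fsS http://127.0.0.1:13337/healthz)
resp='{"status":"expired","lastHeartbeat":1747047539127}'

# Extract the "status" field with sed (no jq dependency needed)
status=$(printf '%s' "$resp" | sed -n 's/.*"status":"\([^"]*\)".*/\1/p')
echo "healthcheck status: $status"
```

Note that code-server reports `expired` whenever no client has sent a heartbeat recently, so this value alone does not prove the server is down.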

Relevant Log Output

Expected Behavior

I would like to be able to integrate Docker-in-Docker without compromising Coder.

Steps to Reproduce

  1. Install Coder
  2. Install sysbox
  3. Create docker image
  4. Create workspace with custom template

Environment

  • Host OS: Ubuntu 20.04
  • Coder version: v2.21.3+bd1ef88

Additional Context

No response


Replies: 1 comment


I tried using this startup script, which is run as a post-start command:

```shell
#!/usr/bin/env bash
set -eu

# Error handling - to preserve logs in case of failure
waitonexit() {
  echo "=== Agent script exited with non-zero code ($?). Sleeping 1h to preserve logs..."
  sleep 3600
}
trap waitonexit EXIT

# Create a directory with the right permissions
CODE_SERVER_DIR="/home/coder/.local/code-server"
mkdir -p "$CODE_SERVER_DIR"

# Install the latest code-server
echo "Installing code-server..."
curl -fsSL https://code-server.dev/install.sh | sh -s -- --method=standalone --prefix="$CODE_SERVER_DIR"

# Ensure correct permissions
chmod -R 755 "$CODE_SERVER_DIR"

# Start code-server in the background
echo "Starting code-server..."
"$CODE_SERVER_DIR/bin/code-server" --auth none --port 13337 >/tmp/code-server.log 2>&1 &

# Verify code-server is running
sleep 2
if ! ps aux | grep -v grep | grep code-server > /dev/null; then
  echo "WARNING: code-server may have failed to start, check logs"
  cat /tmp/code-server.log
else
  echo "code-server is running"
fi

# If the script completes successfully, remove the trap
trap - EXIT
echo "Startup script completed successfully"
```

Same as before: the agent is started, but from the UI I'm not able to connect to the workspace even though the pod is running.
How can I start the agent without changing the container command?

```hcl
spec {
  image_pull_secrets {
    name = "regcred"
  }
  runtime_class_name = "sysbox-runc"
  container {
    name              = "dev"
    image             = "translatednet/cpu-machine:sysbox"
    image_pull_policy = "Always"
    command           = ["/sbin/init"]
    env {
      name  = "CODER_AGENT_TOKEN"
      value = coder_agent.main.token
    }

    lifecycle {
      post_start {
        exec {
          command = ["/bin/bash", "-c", coder_agent.main.startup_script]
        }
      }
    }
```
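One direction that comes to mind for systemd-based images (an assumption on my part, not something verified in this thread) is to keep `command = ["/sbin/init"]` and let systemd itself bring the agent up, by baking a unit file into the image. The sketch below only generates such a unit into a temp directory; the unit name and the `/usr/local/bin/start-coder-agent.sh` bootstrap path are hypothetical placeholders:

```shell
# Sketch only: writes a hypothetical systemd unit that would start the Coder
# agent at boot, so the container command can stay /sbin/init.
# In a real image this file would go under /etc/systemd/system/ and the
# ExecStart script would run the agent (CODER_AGENT_TOKEN comes from the
# pod env block in the template).
unit_dir=$(mktemp -d)
cat > "$unit_dir/coder-agent.service" <<'EOF'
[Unit]
Description=Coder agent
After=network-online.target

[Service]
ExecStart=/usr/local/bin/start-coder-agent.sh
Restart=always

[Install]
WantedBy=multi-user.target
EOF
echo "wrote $unit_dir/coder-agent.service"
```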
0 replies
1 participant
@santurini

This discussion was converted from issue #17759 on May 14, 2025 07:58.

