rishiktiwari/kinova-AI-experiments

Embodied AI experiment using lightweight AI models. Tested with a Kinova Gen3 arm.

[Figure: User interface and simplified control flow]

  • Technical Report: [coming soon]
  • Demonstration: [coming soon]
  • Detailed Control Flow: [coming soon]

Requirements

  • A decent graphics card, or an Apple M-series device
  • Minimum 8 GB RAM
  • 10+ GB free storage
  • Conda | docs
  • Python 3.10.14
  • Phi-3-mini-4k-instruct-q4 language model | 🤗 page
  • clipseg-rd64-refined (auto-downloaded; a loading sketch follows this list) | 🤗 page
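
The README does not show the clipseg loading code; the sketch below illustrates how clipseg-rd64-refined is typically fetched on first use and queried through Hugging Face transformers. The model id, the camera-frame path and this exact API usage are assumptions for orientation, not code taken from this repository.

    # Hedged sketch: typical CLIPSeg usage via Hugging Face transformers.
    # "CIDAS/clipseg-rd64-refined" and the camera frame path are assumptions.
    from PIL import Image
    from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

    processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
    model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

    image = Image.open("camera_frame.png")  # hypothetical camera frame
    inputs = processor(text=["red cube"], images=[image], return_tensors="pt")
    logits = model(**inputs).logits         # per-pixel relevance for the text query
    mask = logits.sigmoid()                 # threshold this map to localise the object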

Environment Setup

  1. Create the conda environment:

    conda create -n kinovaAi python=3.10.14
  2. Download the language model and move it to llm_models (a quick load-check sketch follows this list).

  3. Install packages:

    pip install -r requirements.txt
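
The README does not show how the downloaded model is opened. As a quick sanity check that the file in llm_models loads and responds, a sketch like the one below can be used; llama-cpp-python and the exact GGUF file name are assumptions, not confirmed by this repository.

    # Hedged sketch: verify that the downloaded Phi-3 GGUF file loads and responds.
    # llama-cpp-python and the file name below are assumptions.
    from llama_cpp import Llama

    llm = Llama(model_path="llm_models/Phi-3-mini-4k-instruct-q4.gguf", n_ctx=4096)
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Reply with the single word: ready"}],
        max_tokens=8,
    )
    print(out["choices"][0]["message"]["content"])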

Embodied AI Inference

  1. Activate the conda environment:

    conda activate kinovaAi
  2. Set the IP address of the host server (i.e. the machine running ROS and the Kortex API, connected to the arm) in inference2_clipseg.py.

    Example: HOST_IP = '192.168.1.100'

  3. Start the server first by following the Remote AI Inference instructions in rishiktiwari/rishik_ros_kortex.

  4. Run the following command to connect to the server and start the inference script:

    python3 embodied_ai/inference2_clipseg.py

    inference1_gdino is incomplete and not recommended.

    Only actions translating to pick or pick_place are supported (see the sketch after this list). English prompts work best.
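
The steps above describe the control flow only in words. The sketch below shows one plausible way an English prompt could be mapped to the supported pick / pick_place actions with the language model; the JSON schema, system prompt and helper function are illustrative assumptions, not the repository's actual interface.

    # Hedged sketch: map an English prompt to a pick / pick_place action dict.
    # The schema and system prompt are illustrative; the repo's real parsing may differ.
    # `llm` is the llama-cpp-python object from the earlier load-check sketch.
    import json

    SYSTEM = (
        "Reply ONLY with JSON such as "
        '{"action": "pick", "object": "...", "place": null} or '
        '{"action": "pick_place", "object": "...", "place": "..."}.'
    )

    def parse_action(llm, prompt: str) -> dict:
        out = llm.create_chat_completion(
            messages=[{"role": "system", "content": SYSTEM},
                      {"role": "user", "content": prompt}],
            max_tokens=64,
        )
        action = json.loads(out["choices"][0]["message"]["content"])
        if action["action"] not in ("pick", "pick_place"):
            raise ValueError("unsupported action: " + str(action["action"]))
        return action

    # e.g. parse_action(llm, "put the red cube in the bowl")
    # -> {"action": "pick_place", "object": "red cube", "place": "bowl"}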

Device connection overview

[Figure: device connection overview]
