# bharath5673/ros_ws
Computer vision is an important application of ROS2, as it allows robots and other autonomous systems to understand and interact with their environment. Here are some common topics related to computer vision in ROS2:
- **Image processing:** Libraries for processing images from cameras and other sensors, enabling developers to implement computer vision algorithms such as object detection, segmentation, and tracking.
- **Depth sensing:** In addition to cameras, many robots use depth sensors, such as LIDAR or depth cameras, to perceive their environment. ROS2 provides libraries such as depth_image_proc and pointcloud2 to process and analyze depth data.
- **Perception algorithms:** Algorithms for object detection, tracking, and recognition.
- **Integration with other ROS2 components:** Computer vision algorithms in ROS2 are often used together with other ROS2 components, such as navigation, manipulation, and planning. ROS2 provides communication and messaging mechanisms, such as Topics and Services, that let developers integrate these components into complex systems.
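As a minimal sketch of the image-processing side (plain NumPy, no ROS dependencies; the frame and function names here are illustrative, not taken from this repo), this is the kind of per-frame conversion an image callback typically performs after cv_bridge hands it a BGR array:

```python
import numpy as np

# Hypothetical stand-in for a frame received on an image topic:
# a 4x4 BGR image (uint8), as cv_bridge would pass to a callback.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[..., 2] = 255  # pure red in BGR channel order

def to_grayscale(bgr: np.ndarray) -> np.ndarray:
    """Luma conversion (ITU-R BT.601 weights), matching
    cv2.cvtColor(..., cv2.COLOR_BGR2GRAY)."""
    b, g, r = bgr[..., 0], bgr[..., 1], bgr[..., 2]
    return (0.114 * b + 0.587 * g + 0.299 * r).astype(np.uint8)

gray = to_grayscale(frame)
print(gray.shape, gray[0, 0])  # (4, 4) 76
```

In a real node this conversion would run inside the subscriber callback before detection or tracking steps.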
```shell
## ros distro
abc@xyz:~$ rosversion -d
humble

## ubuntu version
abc@xyz:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 22.04.1 LTS
Release:        22.04
Codename:       jammy

## python version
abc@xyz:~$ python3 --version
Python 3.10.6
```
### steps for setting up

```shell
git clone https://github.com/bharath5673/ros_ws.git
cd ros_ws
colcon build
cd ..
```
### steps for turtlesim

```shell
source ros_ws/install/setup.bash
ros2 run turtlesim turtlesim_node &
ros2 run my_robot_controller turtle_controller
```
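The control law inside a node like `turtle_controller` is not shown in this README; as an illustrative sketch (all names and gains hypothetical), a proportional controller that steers a turtlesim-style pose toward a goal, producing the linear/angular velocities a node would publish as a geometry_msgs/Twist, could look like:

```python
import math

def turtle_cmd(x, y, theta, goal_x, goal_y, k_lin=0.5, k_ang=2.0):
    """Proportional controller: returns (linear, angular) velocities
    driving the turtle from pose (x, y, theta) toward (goal_x, goal_y)."""
    dx, dy = goal_x - x, goal_y - y
    distance = math.hypot(dx, dy)
    heading_error = math.atan2(dy, dx) - theta
    # wrap the heading error into [-pi, pi]
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
    return k_lin * distance, k_ang * heading_error

lin, ang = turtle_cmd(0.0, 0.0, 0.0, 3.0, 4.0)
print(round(lin, 2), round(ang, 2))  # 2.5 1.85
```

A real node would call this in a timer callback and publish the result on `/turtle1/cmd_vel`.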
### steps for opencv

```shell
source ros_ws/install/setup.bash
ros2 run test_opencv run_test
```
```shell
pip install mediapipe
```

### steps for facemesh

```shell
source ros_ws/install/setup.bash
ros2 run test_mediapipe facemesh_demo
```

### steps for pose

```shell
source ros_ws/install/setup.bash
ros2 run test_mediapipe pose_demo
```

### steps for hands

```shell
source ros_ws/install/setup.bash
ros2 run test_mediapipe hands_demo
```

### steps for holistic

```shell
source ros_ws/install/setup.bash
ros2 run test_mediapipe holistic_demo
```
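MediaPipe reports landmarks normalized to [0, 1], so drawing them on the camera frame requires scaling to pixel coordinates. A small helper of the kind these demos need (illustrative, not from the repo; the landmark values below are made up):

```python
def landmarks_to_pixels(landmarks, width, height):
    """Scale MediaPipe-style normalized (x, y) landmarks to pixel
    coordinates on a width x height frame, clamping to frame bounds."""
    return [(min(int(x * width), width - 1), min(int(y * height), height - 1))
            for x, y in landmarks]

# e.g. two landmarks on a 640x480 frame (values illustrative)
pts = landmarks_to_pixels([(0.5, 0.5), (1.0, 0.25)], 640, 480)
print(pts)  # [(320, 240), (639, 120)]
```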
```shell
pip install yolov5
```

### steps for yolov5n

```shell
source ros_ws/install/setup.bash
ros2 run test_yolov5 yolov5n_demo
```

### steps for yolov5s

```shell
source ros_ws/install/setup.bash
ros2 run test_yolov5 yolov5s_demo
```
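Under the hood, YOLOv5 post-processing relies on non-max suppression, which keeps the highest-confidence detection and discards overlapping boxes whose IoU (intersection over union) with it exceeds a threshold. A minimal IoU helper, shown here as an illustrative sketch rather than code from this repo:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    NMS typically discards a detection when its IoU with a
    higher-confidence box exceeds a threshold such as 0.45."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (0, 0, 10, 5)))  # 0.5
```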
### steps for testing installation

```shell
## install dependencies
python3 -m pip install -r src/self_driving_car_pkg/requirements.txt
sudo apt install ros-humble-gazebo-ros
sudo apt-get install ros-humble-gazebo-msgs
sudo apt-get install ros-humble-gazebo-plugins

## copy models to the gazebo env
cp -r ros_ws/src/self_driving_car_pkg/models/* /home/bharath/.gazebo/models

## once built, run the simulation, e.g. [ ros2 launch (package_name) (launch_file) ]
source ros_ws/install/setup.bash
source /opt/ros/humble/setup.bash
ros2 launch self_driving_car_pkg world_gazebo.launch.py

## to activate the self-driving car
source ros_ws/install/setup.bash
source /opt/ros/humble/setup.bash
ros2 run self_driving_car_pkg computer_vision_node
```
### steps to run Self-Driving-Car

```shell
## launch the maze_solving world in gazebo
source ros_ws/install/setup.bash
source /opt/ros/humble/setup.bash
ros2 launch self_driving_car_pkg maze_solving_world.launch.py

## in another terminal
source ros_ws/install/setup.bash
source /opt/ros/humble/setup.bash
ros2 run self_driving_car_pkg sdc_V2
```
For detailed explanations and tutorials, see https://github.com/noshluk2/ROS2-Self-Driving-Car-AI-using-OpenCV
### steps for rosbag turtle vel cmds

```shell
source ros_ws/install/setup.bash
ros2 run turtlesim turtlesim_node &
ros2 run test_turtle_bag turtlebot_for_rosbag
```

### steps to control turtlebot

```shell
source ros_ws/install/setup.bash
ros2 run turtlesim turtle_teleop_key
```

### open turtlebot

```shell
source ros_ws/install/setup.bash
ros2 run turtlesim turtlesim_node
```

### steps to rosbag play

```shell
source ros_ws/install/setup.bash
ros2 bag play 'ros_ws/src/test_turtle_bag/test_turtle_bag/rosbag2_2023_02_06-18_46_50/rosbag2_2023_02_06-18_46_50_0.db3' -d 0.5
```
## prerequisites

```shell
sudo apt install ros-humble-gazebo-ros
sudo apt-get install ros-humble-gazebo-msgs
sudo apt-get install ros-humble-gazebo-plugins
pip install yolov5

## copy models to the gazebo env
cp -r ros_ws/src/yolobot/models/* /home/bharath/.gazebo/models
```

### step for roslaunch

```shell
source /opt/ros/humble/setup.bash
source ros_ws/install/setup.bash
ros2 launch yolobot yolobot_launch.py
```

### on a new terminal: step for yolobot detection

```shell
source /opt/ros/humble/setup.bash
source ros_ws/install/setup.bash
python3 ros_ws/src/yolobot/yolobot_recognition/ros_recognition_yolo.py
```
For detailed explanations and tutorials, see https://www.youtube.com/watch?v=594Gmkdo-_s&t=610s
## prerequisites

```shell
sudo apt-get install ros-humble-imu-tools
```

### step for simple testing of imu sensors with random data

```shell
source /opt/ros/humble/setup.bash
source ros_ws/install/setup.bash
ros2 run test_imu simple_imu
```

### on a new terminal, for rviz

```shell
source ros_ws/install/setup.bash
rviz2
```

Now, in rviz2, click Add → By topic → /Imu/imu to create the visualization.
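A node publishing sensor_msgs/Imu must fill the orientation field with a quaternion, so IMU demos like these typically convert roll/pitch/yaw into quaternion form. A standard ZYX (yaw-pitch-roll) conversion, sketched here with a hypothetical helper name:

```python
import math

def euler_to_quaternion(roll, pitch, yaw):
    """Convert roll/pitch/yaw (radians, ZYX convention) to the
    (x, y, z, w) quaternion a sensor_msgs/Imu orientation expects."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    x = sr * cp * cy - cr * sp * sy
    y = cr * sp * cy + sr * cp * sy
    z = cr * cp * sy - sr * sp * cy
    w = cr * cp * cy + sr * sp * sy
    return x, y, z, w

# zero angles give the identity orientation
print(euler_to_quaternion(0.0, 0.0, 0.0))  # (0.0, 0.0, 0.0, 1.0)
```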
Flash your Pico W in Thonny with https://github.com/bharath5673/ros_ws/blob/main/src/test_imu/pico_W/sketch1.py

Note: please ignore the other components on the board shown in the demo below; that circuit was built for a different use case. Connect your Pico W and MPU6050 as shown in the simple circuit diagram below.
### step to read imu MPU6050 data from the Pico W and visualize it on rviz2

```shell
source /opt/ros/humble/setup.bash
source ros_ws/install/setup.bash
ros2 run test_imu picoW_mpu6050
```

### on a new terminal, for rviz

```shell
source ros_ws/install/setup.bash
rviz2
```
Flash your ESP32 in the Arduino IDE with https://github.com/bharath5673/ros_ws/blob/main/src/test_imu/esp_32/sketch2.ino

Note: please ignore the other components on the board shown in the demo below; that circuit was built for a different use case. Connect your ESP32 and MPU6050 as shown in the simple circuit diagram below.
### step to read imu MPU6050 data from the ESP32 and visualize it on rviz2

```shell
source /opt/ros/humble/setup.bash
source ros_ws/install/setup.bash
ros2 run test_imu esp32_mpu6050
```

### on a new terminal, for rviz

```shell
source ros_ws/install/setup.bash
rviz2
```
The OAK (OpenCV AI Kit) is a series of edge computing devices developed by Luxonis, designed to provide high-performance AI inference for computer vision applications in a compact, low-power form factor. Each device carries a dedicated AI accelerator chip that runs complex neural network models on-device at high speed. With OAK devices, users can deploy AI models for tasks such as object detection, facial recognition, and gesture recognition, making them well suited to edge AI applications.
### steps for installing DepthAI

```shell
python3 -m pip install depthai --upgrade
```

### set USB rules to recognise oak devices and grant access permissions

```shell
echo 'SUBSYSTEM=="usb", ATTRS{idVendor}=="03e7", MODE="0666"' | sudo tee /etc/udev/rules.d/80-movidius.rules
sudo udevadm control --reload-rules && sudo udevadm trigger
```
### steps for running yolo on oak-1: publisher

```shell
source ros_ws/install/setup.bash
ros2 run test_OAK OAK_1_publisher
```

### on a new terminal, for the subscriber

```shell
source ros_ws/install/setup.bash
ros2 run test_OAK OAK_subscriber
```

### steps for running yolo on oak-D: publisher

```shell
source ros_ws/install/setup.bash
ros2 run test_OAK OAK_D_publisher
```

### on a new terminal, for the subscriber

```shell
source ros_ws/install/setup.bash
ros2 run test_OAK OAK_subscriber
```
A simple and easy project about building custom robot maps and navigating them autonomously using the ROS2 Navigation Stack.
https://github.com/bharath5673/ros_ws/tree/main/src/navigation_tb3
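As a toy stand-in for what the Navigation Stack's global planner does (plan a path over an occupancy grid built by SLAM), here is a breadth-first search on a small grid; the grid, names, and scale are illustrative only, not how Nav2 is implemented internally:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest path by breadth-first search on a 2D occupancy grid
    (0 = free cell, 1 = occupied). Returns a list of (row, col) cells
    from start to goal, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue, came_from = deque([start]), {start: None}
    while queue:
        cur = queue.popleft()
        if cur == goal:
            path = []
            while cur is not None:           # walk parents back to start
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cur
                queue.append((nr, nc))
    return None

grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(bfs_path(grid, (0, 0), (0, 2)))
# [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2)]
```

Nav2's planners work on the same kind of grid but use costed search (e.g. A* over a costmap) rather than plain BFS.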
- https://github.com/AlexeyAB/darknet
- https://github.com/ultralytics/yolov5
- https://github.com/TexasInstruments/edgeai-yolov5/tree/yolo-pose
- https://github.com/noshluk2/ROS2-Self-Driving-Car-AI-using-OpenCV.git
- https://github.com/google/mediapipe.git
- https://github.com/ellenrapps/Road-to-Autonomous-Drone-Using-Raspberry-Pi-Pico.git
- https://github.com/ros2/ros2_documentation.git
- https://github.com/raspberrypi
- https://github.com/NVIDIA
- https://github.com/opencv/opencv
- https://roboticsbackend.com/