phreakyphoenix/Landmark-Detection-and-Tracking-SLAM
Landmark Detection and Robot Tracking (SLAM) made with ❤️ in Pytorch


Project Overview

In this project, you'll implement SLAM (Simultaneous Localization and Mapping) for a 2-dimensional world! You'll combine what you know about robot sensor measurements and movement to create a map of an environment from only sensor and motion data gathered by a robot over time. SLAM gives you a way to track the location of a robot in the world in real time and identify the locations of landmarks such as buildings, trees, rocks, and other world features. This is an active area of research in the fields of robotics and autonomous systems.

Below is an example of a 2D robot world with landmarks (purple x's) and the robot (a red 'o') located and found using only sensor and motion data collected by that robot. This is just one example for a 50x50 grid world; in your work you will likely generate a variety of these maps.
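
To make the sensor-and-motion idea concrete, here is a minimal sketch of the kind of robot interface the project builds on. The class name, noise model, and method signatures below are assumptions for illustration; the graded robot_class.py in this repository defines the actual API.

```python
import random

class Robot:
    """Toy 2D robot: noisy motion plus noisy, range-limited landmark sensing.

    Illustrative sketch only -- see robot_class.py for the real project class.
    """

    def __init__(self, world_size=50.0, measurement_range=5.0,
                 motion_noise=0.2, measurement_noise=0.2):
        self.world_size = world_size
        self.measurement_range = measurement_range
        self.motion_noise = motion_noise
        self.measurement_noise = measurement_noise
        # Start in the middle of the square world.
        self.x = world_size / 2.0
        self.y = world_size / 2.0
        self.landmarks = []  # list of (x, y) landmark positions

    def move(self, dx, dy):
        """Attempt a move with uniform noise; refuse moves that leave the world."""
        x = self.x + dx + (random.random() * 2.0 - 1.0) * self.motion_noise
        y = self.y + dy + (random.random() * 2.0 - 1.0) * self.motion_noise
        if 0.0 <= x <= self.world_size and 0.0 <= y <= self.world_size:
            self.x, self.y = x, y
            return True
        return False

    def sense(self):
        """Return noisy [landmark_index, dx, dy] for landmarks within sensor range."""
        measurements = []
        for i, (lx, ly) in enumerate(self.landmarks):
            dx = lx - self.x + (random.random() * 2.0 - 1.0) * self.measurement_noise
            dy = ly - self.y + (random.random() * 2.0 - 1.0) * self.measurement_noise
            if abs(dx) <= self.measurement_range and abs(dy) <= self.measurement_range:
                measurements.append([i, dx, dy])
        return measurements

r = Robot()
r.landmarks = [(28.0, 30.0), (5.0, 5.0)]  # one landmark near the robot, one far away
r.move(1.0, 2.0)
print(r.sense())  # e.g. [[0, 1.9, 2.9]] -- only the nearby landmark is seen
```

Collecting these motion commands and measurements over many time steps produces exactly the data that SLAM consumes.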

The project is broken up into three Python notebooks. The first two are for exploration of provided code and a review of SLAM architectures; only Notebook 3 and the robot_class.py file will be graded:

Notebook 1 : Robot Moving and Sensing

Notebook 2 : Omega and Xi, Constraints

Notebook 3 : Landmark Detection and Tracking
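
As a preview of the machinery behind Notebook 2: Graph SLAM accumulates every motion and measurement constraint into a matrix Ω (omega) and a vector ξ (xi), and the best estimate of all poses and landmark positions falls out as μ = Ω⁻¹ξ. Below is a minimal 1-D sketch with made-up numbers; the notebook works this out in 2D.

```python
import numpy as np

# 1-D toy problem: two robot poses x0, x1 and one landmark L -> 3 unknowns.
omega = np.zeros((3, 3))
xi = np.zeros(3)

# Initial position constraint: x0 = 2.
omega[0, 0] += 1.0
xi[0] += 2.0

# Motion constraint: x1 - x0 = 5.
omega[np.ix_([0, 1], [0, 1])] += [[1.0, -1.0], [-1.0, 1.0]]
xi[0] -= 5.0
xi[1] += 5.0

# Measurement constraint: L - x1 = 3.
omega[np.ix_([1, 2], [1, 2])] += [[1.0, -1.0], [-1.0, 1.0]]
xi[1] -= 3.0
xi[2] += 3.0

# mu = omega^{-1} xi gives the best estimate of every pose and landmark.
mu = np.linalg.solve(omega, xi)
print(mu)  # [ 2.  7. 10.] -> x0 = 2, x1 = 7, L = 10
```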

Project Instructions

All of the starting code and resources you'll need to complete this project are in this GitHub repository. Before you can get started coding, you'll have to make sure that you have all the libraries and dependencies required to support this project. If you have already created a cv-nd environment for the exercise code, then you can use that environment! If not, instructions for creation and activation are below.

Local Environment Instructions

  1. Clone the repository and navigate to the downloaded folder.

git clone https://github.com/phreakyphoenix/Landmark-Detection-and-Tracking-SLAM.git
cd Landmark-Detection-and-Tracking-SLAM
  2. Create (and activate) a new environment, named cv-nd, with Python 3.6. If prompted to proceed with the install (Proceed [y]/n), type y.

    • Linux or Mac:

    conda create -n cv-nd python=3.6
    source activate cv-nd

    • Windows:

    conda create --name cv-nd python=3.6
    activate cv-nd

    At this point your command line should look something like: (cv-nd) <User>:Landmark-Detection-and-Tracking-SLAM <user>$. The (cv-nd) indicates that your environment has been activated, and you can proceed with further package installations.

  3. Install a few required pip packages, which are specified in the requirements text file (including OpenCV).

pip install -r requirements.txt
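
To sanity-check the install, you can try importing the main packages. This one-liner assumes requirements.txt pins OpenCV, NumPy, and PyTorch (plausible given the project description, but adjust it to whatever the file actually lists):

python -c "import cv2, numpy, torch; print(cv2.__version__, numpy.__version__, torch.__version__)"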

Notebooks

  4. Navigate back to the repo. (Your source environment should still be activated at this point.)
cd Landmark-Detection-and-Tracking-SLAM
  5. Open the directory of notebooks using the command below. You'll see all of the project files appear in your local environment; open the first notebook and follow the instructions.
jupyter notebook
