MLiS-II-Project-Programming-an-Autonomous-Driving-Car

Category: Autonomous driving
Development tool: Jupyter Notebook
File size: 42354KB
Downloads: 0
Upload date: 2022-05-21 13:36:38
Uploader: sh-1993
Description: MLiS II autonomous driving car programming project
(MLiS-II-Project-Programming-an-Autonomous-Driving-Car)

File list:
.virtual_documents (0, 2022-05-21)
.virtual_documents\Code (0, 2022-05-21)
.virtual_documents\Code\Test (0, 2022-05-21)
.virtual_documents\Code\Test\Threshold.ipynb.py (1341, 2022-05-21)
.virtual_documents\Transfer_Learning (0, 2022-05-21)
.virtual_documents\Transfer_Learning\get_data.ipynb.py (6161, 2022-05-21)
.vscode (0, 2022-05-21)
.vscode\settings.json (38, 2022-05-21)
Code (0, 2022-05-21)
Code\Dev (0, 2022-05-21)
Code\Dev\Data_Labeler (0, 2022-05-21)
Code\Dev\Data_Labeler\.vs (0, 2022-05-21)
Code\Dev\Data_Labeler\.vs\Data_Labeler (0, 2022-05-21)
Code\Dev\Data_Labeler\.vs\Data_Labeler\v17 (0, 2022-05-21)
Code\Dev\Data_Labeler\.vs\Data_Labeler\v17\.suo (519168, 2022-05-21)
Code\Dev\Data_Labeler\App.xaml (230, 2022-05-21)
Code\Dev\Data_Labeler\App.xaml.cs (3926, 2022-05-21)
Code\Dev\Data_Labeler\Assets (0, 2022-05-21)
Code\Dev\Data_Labeler\Assets\LockScreenLogo.scale-200.png (1430, 2022-05-21)
Code\Dev\Data_Labeler\Assets\SplashScreen.scale-200.png (7700, 2022-05-21)
Code\Dev\Data_Labeler\Assets\Square150x150Logo.scale-200.png (2937, 2022-05-21)
Code\Dev\Data_Labeler\Assets\Square44x44Logo.scale-200.png (1647, 2022-05-21)
Code\Dev\Data_Labeler\Assets\Square44x44Logo.targetsize-24_altform-unplated.png (1255, 2022-05-21)
Code\Dev\Data_Labeler\Assets\StoreLogo.png (1451, 2022-05-21)
Code\Dev\Data_Labeler\Assets\Wide310x150Logo.scale-200.png (3204, 2022-05-21)
Code\Dev\Data_Labeler\Class_Images_List.cs (354, 2022-05-21)
Code\Dev\Data_Labeler\Data_Labeler.csproj (7689, 2022-05-21)
Code\Dev\Data_Labeler\Data_Labeler.sln (2724, 2022-05-21)
Code\Dev\Data_Labeler\MainPage.xaml (3818, 2022-05-21)
Code\Dev\Data_Labeler\MainPage.xaml.cs (1886, 2022-05-21)
Code\Dev\Data_Labeler\Package.appxmanifest (1567, 2022-05-21)
Code\Dev\Data_Labeler\Properties (0, 2022-05-21)
Code\Dev\Data_Labeler\Properties\AssemblyInfo.cs (1044, 2022-05-21)
Code\Dev\Data_Labeler\Properties\Default.rd.xml (1243, 2022-05-21)
Code\Dev\Data_Labeler\bin (0, 2022-05-21)
Code\Dev\Data_Labeler\bin\x86 (0, 2022-05-21)
Code\Dev\Data_Labeler\bin\x86\Debug (0, 2022-05-21)
Code\Dev\Data_Labeler\bin\x86\Debug\App.xbf (525, 2022-05-21)
Code\Dev\Data_Labeler\bin\x86\Debug\AppX (0, 2022-05-21)
... ...

# MLiS-II-Project-Programming-an-Autonomous-Driving-Car

# Table of Contents
1. [Overview](#overview)
2. [Challenge Details](#challenge-details)
   1. [Training data](#training-data)
   2. [Challenges](#challenges)
   3. [Equipment](#equipment)
   4. [Deploying Models to the Car](#deploying-models-to-the-car)
   5. [Training Machines](#training-machines)
   6. [Tracks](#tracks)
   7. [Driving Scenarios](#driving-scenarios)
3. [Assessment and Important Dates](#assessment-and-important-dates)
4. [Teams](#teams)
5. [Resources to build the model](#resources-to-build-the-model)
6. [Tips](#tips)
7. [Report Marking Criteria](#report-marking-criteria)
8. [Pre-Processing](#pre-processing)

# Overview
**_In this project the task is to train a deep learning algorithm to autonomously navigate a real car around a realistic test circuit, making the appropriate manoeuvres where necessary. At the end of the project, you are expected to give a presentation and write a report about what you have done. Your model will be tested on the track and will compete against the models of your peers._**

# Challenge Details
- Work in pairs
- Develop a deep learning model
- Input: an image from a camera on the car
- Predictions: the appropriate speed and steering angle

## Training data
- The [dataset](https://www.kaggle.com/c/machine-learning-in-science-2022/data) is hosted on Kaggle
- 13.8k images
- Speed and steering angle labels are included in the dataset
- We are free to generate our own dataset
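As a starting point for the model described above, here is a minimal sketch: a small convolutional trunk shared by two regression heads, one for steering angle and one for speed. The input resolution, layer sizes, and losses are illustrative assumptions rather than settings prescribed by the project, and pixel values are assumed to be pre-scaled to [0, 1].

```Python
import tensorflow as tf
from tensorflow.keras import layers

# Input resolution is an assumption; match it to the car's camera frames.
inputs = tf.keras.Input(shape=(120, 160, 3))

# Small shared convolutional trunk (pixels assumed pre-scaled to [0, 1]).
x = layers.Conv2D(24, 5, strides=2, activation="relu")(inputs)
x = layers.Conv2D(36, 5, strides=2, activation="relu")(x)
x = layers.Conv2D(48, 3, strides=2, activation="relu")(x)
x = layers.Flatten()(x)
x = layers.Dense(100, activation="relu")(x)

# Two regression heads: steering angle and speed.
angle = layers.Dense(1, name="angle")(x)
speed = layers.Dense(1, name="speed")(x)

model = tf.keras.Model(inputs, [angle, speed])
model.compile(optimizer="adam", loss={"angle": "mse", "speed": "mse"})
model.summary()
```

A shared trunk keeps inference on the RPi to a single cheap forward pass; treating speed as a stop/go classification head instead of a regression would be an equally reasonable design.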
## Challenges
1. A Kaggle competition, hosted [here](https://www.kaggle.com/c/machine-learning-in-science-2022). This allows you to automate the process of model submission and obtain an indication of performance (using a small set of test data) before the models are evaluated on the final, unseen data. Create a Kaggle account (if you do not have one) and form a team with your project partner.
2. A live challenge, where your pre-trained model will be deployed to the car and tested on real circuits. This will be performed in person.

## Equipment
- The main body of the car is the SunFounder [PiCar-V kit V2](https://www.sunfounder.com/products/smart-video-car), equipped with a Raspberry Pi (RPi)
- **_TensorFlow v2.4_** is installed on the car
- The car has an optional Coral Edge TPU, a custom device that runs forward-pass operations for edge computing
- Note that it is not necessary to convert your model to TensorFlow Lite

## Deploying Models to the Car
A standardised skeleton code will be provided to you. You should integrate your pre-trained model with it, and we will then install it on the car prior to the live testing.
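The skeleton code is not reproduced here, so the following is only a hypothetical sketch of the inference side of such an integration: load the trained model once, then map each camera frame to a (speed, steering angle) pair. The function name, save format, and input size are assumptions to be replaced by whatever the skeleton actually expects.

```Python
import numpy as np
import tensorflow as tf

# Hypothetical names: the real skeleton code defines its own entry points.
model = tf.keras.models.load_model("model.h5")  # assumed save format

def predict(frame: np.ndarray):
    """Map one camera frame (H, W, 3, uint8) to (speed, steering angle)."""
    img = tf.image.resize(frame, (120, 160)) / 255.0  # match training preprocessing
    img = tf.expand_dims(img, axis=0)                 # add a batch dimension
    angle, speed = model(img, training=False)         # outputs in model order
    return float(speed.numpy()[0, 0]), float(angle.numpy()[0, 0])
```

Should you nevertheless want to target the Coral Edge TPU, `tf.lite.TFLiteConverter.from_keras_model(model)` produces a TensorFlow Lite model, which must then be compiled with Coral's `edgetpu_compiler`; see the Edge TPU resource under [Resources to build the model](#resources-to-build-the-model).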
## Training Machines
- We can use Google Colab or our own local machines to train the model
- We will also have access to the MLiS1 and MLiS2 machines to perform training. These are accessible by ssh'ing into the machine:
  - `ssh username@mlis1.nottingham.ac.uk`, or
  - `ssh username@mlis2.nottingham.ac.uk`, where `username` is your University username
- To install custom packages on your machine, you will need to set up a conda environment. To install conda, run `bash /shared/Anaconda3-2019.10-Linux-x86_64.sh`
- Once installed, you will need to add a start-up script: `echo ". ~/.bashrc" >> ~/.profile`
- Lastly, to create your conda environment, use `conda create --name my_env python=3.6`

## Tracks
![](https://github.com/Srushanth/MLiS-II-Project-Programming-an-Autonomous-Driving-Car/blob/master/Images/Live_Testing_Track.png)

Figure 1: (Top) T-junction tracks. (Middle) Oval track. (Bottom) Figure-of-eight track.

## Driving Scenarios
**Important**: we only use UK driving rules, i.e. driving on the left-hand side.

The training data was based on the following driving scenarios:
1. Keeping in lane while driving along the straight section of the T-junction track.
2. As (1), but stopping if a pedestrian is in the road.
3. As (1), but driving as normal if pedestrians or other objects are on the side of (but not in) the road.
4. Driving around the oval track in both directions.
5. As (4), but stopping if a pedestrian is in the road.
6. As (4), but driving as normal if pedestrians or other objects are on the side of (but not in) the road.
7. Performing a turn at the T-junction, in response to a traffic sign (either left or right).
8. Driving around the figure-of-eight track in both directions, continuing straight at the intersection. We will not consider objects in or at the side of the road for this scenario.
9. Stopping at a red traffic light and continuing at a green traffic light.

We will only consider these scenarios in the live testing.

# Assessment and Important Dates
- **Kaggle and code submission**: 1.30pm on Friday 6th May 2022
- **Live testing**: Wednesday 11th May 2022
  - Live testing performance will contribute 20% towards your total mark
- **In-person presentations**: 16th and 17th May 2022
  - The presentations will last for 25 minutes: speak for approximately 10 minutes each, with 5 minutes for questions
  - You will be penalised if you over-run significantly
  - The presentation will contribute 40% towards your total mark
- **Report**: Friday 20th May 2022
  - This is to be completed individually and will contribute 40% towards your total mark

# Teams
Please come up with a suitable team name and use this on Kaggle.
- Team members: Baride & Talukdar
- Team name: **_Alpha_**

# Resources to build the model
1. [DeepPiCar — Part 1: How to Build a Deep Learning, Self Driving Robotic Car on a Shoestring Budget](https://towardsdatascience.com/deeppicar-part-1-102e03c83f2c)
2. [TensorFlow models on the Edge TPU](https://coral.ai/docs/edgetpu/models-intro/#compatibility-overview)

# Tips
1. Make good use of your Kaggle submissions to test different things. You get one per day, so don't let them go to waste! The only submission that matters is your final one; earlier ones have no bearing on your mark, so don't worry if the test score isn't great. The best-performing groups last year all made many submissions.
2. Get started early and make a plan.
3. Some initial data exploration on the training data will help you understand it better.
4. More tips to come in the introductory session, including lessons from last year!

# Report Marking Criteria
[report-guidelines.pdf](https://github.com/Srushanth/MLiS-II-Project-Programming-an-Autonomous-Driving-Car/blob/master/Resources/report-guidelines.pdf)
[presentation-guidelines.pdf](https://github.com/Srushanth/MLiS-II-Project-Programming-an-Autonomous-Driving-Car/blob/master/Resources/presentation-guidelines.pdf)

# Pre-Processing
The pre-processing testing notebook is [here](https://github.com/Srushanth/MLiS-II-Project-Programming-an-Autonomous-Driving-Car/blob/master/Code/Test/Threshold.ipynb).

## Initial Observations
- The image used is [this one](https://github.com/Srushanth/MLiS-II-Project-Programming-an-Autonomous-Driving-Car/blob/master/Code/Test/10022.png)
- Tried thresholding the image using **_cv2.threshold(img, 253, 255, cv2.THRESH_BINARY)_**
- Added a lower and an upper range threshold
- Cropped the image to keep only the bottom portion

## Below is the sample code
```Python
# Load libraries
import cv2
import numpy as np

# Read the image
img = cv2.imread('./10022.png')

# Show the image
cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()

# Global thresholding
ret1, th1 = cv2.threshold(img, 253, 255, cv2.THRESH_BINARY)

# Mask with lower and upper range thresholds (bounds as uint8 to match the image dtype)
lower_black = np.array([0, 0, 0], dtype="uint8")
upper_black = np.array([250, 250, 250], dtype="uint8")
black_mask = cv2.inRange(th1, lower_black, upper_black)
cv2.imshow('img', black_mask)
cv2.waitKey(0)
cv2.destroyAllWindows()

# Image shape
(h, w) = black_mask.shape

# Keep only the bottom two fifths of the image
required = black_mask[int((3 * h) / 5):h, :w]

# This image will be used for the final inference
cv2.imshow('img', required)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
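For reuse in a training or inference pipeline, the exploration above can be wrapped into a single function. This is just the notebook's logic refactored; the threshold values and the 3/5 crop are carried over from the experiment, not tuned choices.

```Python
import cv2
import numpy as np

def preprocess(img: np.ndarray) -> np.ndarray:
    """Threshold a BGR frame, mask the result, and keep the bottom
    two fifths, mirroring the steps in the notebook above."""
    _, th = cv2.threshold(img, 253, 255, cv2.THRESH_BINARY)
    mask = cv2.inRange(th,
                       np.array([0, 0, 0], dtype="uint8"),
                       np.array([250, 250, 250], dtype="uint8"))
    h, w = mask.shape
    return mask[(3 * h) // 5:h, :w]

# Example usage on the same test image
img = cv2.imread('./10022.png')
print(preprocess(img).shape)
```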
