You can easily run our project in a Jupyter notebook. To avoid errors, install the packages we used with the following commands:
- For the cv2 library, use the following command: pip install opencv-python. If you face any problem, upgrade to the latest OpenCV build with: pip install --upgrade opencv-contrib-python
- For the numpy library, use the following command: pip install numpy
- For the matplotlib library, use the following command: pip install matplotlib
- The glob library is part of the Python standard library, so it does not need to be installed with pip.
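As a quick sanity check (this snippet is not part of the notebook itself, and the test_images folder name is only a placeholder), you can confirm that everything imports correctly:

```python
# Verify that the required packages are importable.
import cv2
import glob                      # ships with Python, no pip install needed
import matplotlib.pyplot as plt
import numpy as np

print("OpenCV version:", cv2.__version__)
print("NumPy version:", np.__version__)
# Placeholder path; point it at wherever your test images actually live.
print("Test images found:", len(glob.glob("test_images/*.jpg")))
```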
In this project we are going to create a simple perception stack for self-driving cars (SDCs). Although a typical perception stack for a self-driving car may combine data from several sensors (e.g., cameras, lidar, radar), we are only going to focus on video streams from cameras for simplicity. We will mainly analyze the road ahead, detect the lane lines, detect other cars/agents on the road, and estimate some useful information that may help other SDC stacks. The project is split into two phases.
The expected output is as follows:
- Your pipeline should be able to detect the lanes, highlight them using a fixed color, and paint the area between them in any color you like (it is painted green in the image above).
- You are required to roughly estimate the vehicle's position relative to the center of the lane.
- We used the Hough transform to detect the line segments that we draw over the lanes (see the Hough/line-averaging sketch after this list).
- We used the Canny edge detector to find the edges in the test videos and images.
- We can reconstruct each lane line once we know its equation (slope and intercept).
- We find the region of interest and fill it by using the line equations to compute the coordinates of the lane boundaries.
- We extracted the yellow color and thresholded the white color against the original image or video to detect the yellow and white lanes (see the color-masking sketch after this list).
- To reduce noise, we applied a Gaussian blur after converting every frame to grayscale (see the pre-processing sketch after this list).
- We called the region-of-interest function and applied a bitwise AND with the blurred frame to keep only that region (see the region-of-interest sketch after this list).
- We also handle frames where the lane lines go blank (no line is detected).
- We also determined the vehicle's offset from the lane center by mapping real-world distance in meters to pixels, given the center of the lane and the center of the car (see the lane-offset sketch after this list).
- There is also a debugging mode that writes the intermediate results alongside the output videos and test images (the Hough result, the Canny result, the region of interest, and the filled area between the lanes).
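The grayscale/blur/Canny pre-processing step can be sketched as below; this is a minimal illustration, and the kernel size and Canny thresholds are assumed values rather than the ones tuned in the notebook:

```python
import cv2

def preprocess_frame(frame, kernel_size=5, low_thresh=50, high_thresh=150):
    """Convert a BGR frame to grayscale, smooth it, and extract edges."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)                    # one channel
    blurred = cv2.GaussianBlur(gray, (kernel_size, kernel_size), 0)   # suppress noise
    return cv2.Canny(blurred, low_thresh, high_thresh)                # edge map
```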
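The color-masking step (extracting yellow and thresholding white) could look roughly like this; the HSV ranges are illustrative assumptions and would need tuning on the actual test videos:

```python
import cv2
import numpy as np

def mask_lane_colors(frame):
    """Keep only the yellow and white regions of a BGR frame."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Approximate HSV ranges for yellow and white lane paint (tunable).
    yellow = cv2.inRange(hsv, np.array([15, 80, 80]), np.array([35, 255, 255]))
    white = cv2.inRange(hsv, np.array([0, 0, 200]), np.array([180, 40, 255]))
    color_mask = cv2.bitwise_or(yellow, white)
    # AND the mask with the original frame so only lane-colored pixels survive.
    return cv2.bitwise_and(frame, frame, mask=color_mask)
```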
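The region-of-interest step (fill a polygon, then AND it with the pre-processed image) can be sketched as follows; the trapezoid vertices are placeholder fractions of the frame size, not the exact coordinates used in the project:

```python
import cv2
import numpy as np

def region_of_interest(img):
    """Mask everything outside a trapezoid covering the road ahead
    (expects a single-channel image such as an edge map)."""
    h, w = img.shape[:2]
    # Placeholder trapezoid; the real vertices depend on the camera mounting.
    vertices = np.array([[(int(0.10 * w), h),
                          (int(0.45 * w), int(0.60 * h)),
                          (int(0.55 * w), int(0.60 * h)),
                          (int(0.95 * w), h)]], dtype=np.int32)
    mask = np.zeros_like(img)
    cv2.fillPoly(mask, vertices, 255)        # paint the region of interest white
    return cv2.bitwise_and(img, mask)        # keep only pixels inside that region
```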
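The Hough/line-averaging step, the fixed-color lane lines, and the green fill between them can be combined into one sketch like the one below; the function name, the Hough parameters, and the 0.6-height horizon are assumptions, not the notebook's exact values:

```python
import cv2
import numpy as np

def draw_lane_overlay(edges, frame):
    """Detect lane-line segments with the probabilistic Hough transform,
    average them into one left and one right line via slope/intercept,
    then draw the lines and fill the area between them."""
    segments = cv2.HoughLinesP(edges, rho=2, theta=np.pi / 180, threshold=50,
                               minLineLength=40, maxLineGap=100)
    left, right = [], []
    if segments is not None:
        for x1, y1, x2, y2 in segments[:, 0]:
            if x1 == x2:
                continue                                  # skip vertical segments
            slope = (y2 - y1) / (x2 - x1)
            if abs(slope) < 0.3:
                continue                                  # ignore near-horizontal noise
            intercept = y1 - slope * x1
            (left if slope < 0 else right).append((slope, intercept))

    h = frame.shape[0]
    lane_points = []
    for side in (left, right):
        if not side:
            continue                                      # no detection on this side
        slope, intercept = np.mean(side, axis=0)
        y_bottom, y_top = h, int(0.6 * h)
        x_bottom = int((y_bottom - intercept) / slope)
        x_top = int((y_top - intercept) / slope)
        lane_points.append([(x_bottom, y_bottom), (x_top, y_top)])

    overlay = np.zeros_like(frame)
    if len(lane_points) == 2:
        quad = np.array([lane_points[0][0], lane_points[0][1],
                         lane_points[1][1], lane_points[1][0]], dtype=np.int32)
        cv2.fillPoly(overlay, [quad], (0, 255, 0))        # green area between the lanes
    for bottom, top in lane_points:
        cv2.line(overlay, bottom, top, (0, 0, 255), 10)   # fixed color for the lane lines
    return cv2.addWeighted(frame, 1.0, overlay, 0.4, 0)
```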
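The lane-offset step (mapping pixels to meters to estimate the vehicle's distance from the lane center) might look like this; the 3.7 m lane width is a typical highway value used here as an assumption, and the function takes the bottom x-coordinates of the two averaged lane lines:

```python
LANE_WIDTH_M = 3.7   # assumed real-world lane width in meters

def vehicle_offset_m(left_x_bottom, right_x_bottom, frame_width):
    """Estimate how far the camera (and hence the car) sits from the lane
    center, in meters, assuming the camera is mounted at the car's center."""
    lane_center_px = (left_x_bottom + right_x_bottom) / 2.0
    car_center_px = frame_width / 2.0
    meters_per_pixel = LANE_WIDTH_M / (right_x_bottom - left_x_bottom)
    # Positive value: the car is to the right of the lane center.
    return (car_center_px - lane_center_px) * meters_per_pixel
```

For example, with a 1280-pixel-wide frame and lane lines reaching the bottom at x = 300 and x = 1000, the offset is (640 - 650) * 3.7 / 700, about -0.05 m, i.e. roughly 5 cm to the left of the lane center.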
For the second phase, the expected output is as follows: locate and identify the cars on the road.