Project Overview:
This project aimed to design and implement a prototype autonomous robocar system capable of driving independently using image processing and object detection algorithms. The system was divided into two parts: the robotic actuator system, built on the Lego Mindstorms EV3 robot, and the sensor system, which used the Viola-Jones object detection algorithm for real-time object detection and recognition.
Key Achievements:
- Developed a robocar system capable of autonomous navigation based on image detection.
- Implemented the Viola-Jones object detection algorithm to recognize traffic signs and other relevant objects in real time, using over 1,800 positive and 900 negative training images.
- Achieved high detection accuracy with a Haar-Cascade classifier trained over 10 stages.
- Integrated the system in C# using OpenCV (via EmguCV) together with the Lego Mindstorms EV3 libraries.
- Demonstrated the ability to navigate safely and respond to traffic signs autonomously.
Technologies & Tools:
- Programming Languages: C#, Python
- Libraries & Tools: OpenCV, EmguCV, Lego Mindstorms EV3 libraries
- Algorithms: Viola-Jones Object Detection, Haar-Cascade Classifier
- Hardware: Lego Mindstorms EV3 robot
Methodology:
The project combined hardware and software components to produce a functional autonomous robocar. The primary objective was a system that could navigate autonomously and detect objects using computer vision algorithms.
- System Design:
  - Hardware Components:
    - The robotic vehicle was built from the Lego Mindstorms EV3 kit, featuring motors for movement, sensors for obstacle detection, and a camera for image processing.
    - The camera mounted on the robocar captured real-time images, which were processed to detect relevant objects (e.g., traffic signs, obstacles).
  - Software Architecture:
    - The software was implemented in C# using the EmguCV library to combine OpenCV's image processing capabilities with the Lego EV3 libraries for motor control.
    - The system captured frames from the camera feed, preprocessed them for object detection, and made decisions based on the detected objects (e.g., stop at a red light, turn at a sign); a sketch of this loop follows.
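A minimal sketch of this capture/preprocess/detect/act loop is shown below, assuming EmguCV's VideoCapture and CascadeClassifier wrappers. The cascade file name and the DecideAndDrive helper are placeholders for illustration, not the project's actual code.

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;

public class RobocarLoop
{
    private readonly VideoCapture _camera = new VideoCapture(0);        // default camera
    private readonly CascadeClassifier _detector =
        new CascadeClassifier("traffic_sign_cascade.xml");              // placeholder cascade file

    public void Run()
    {
        while (true)
        {
            using (Mat frame = _camera.QueryFrame())                    // capture one frame
            {
                if (frame == null) break;                               // camera feed ended
                using (Mat gray = new Mat())
                {
                    // Preprocess: Haar cascades operate on grayscale images
                    CvInvoke.CvtColor(frame, gray, ColorConversion.Bgr2Gray);
                    Rectangle[] hits = _detector.DetectMultiScale(gray, 1.1, 3);
                    DecideAndDrive(hits);                               // act on detections
                }
            }
        }
    }

    // Placeholder: maps detections to EV3 motor commands (see the navigation sketch below).
    private void DecideAndDrive(Rectangle[] detections) { }
}
```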
- Image Processing and Object Detection:
  - Viola-Jones Algorithm: the Viola-Jones object detection algorithm, implemented with Haar-Cascade classifiers, was used to detect traffic signs, obstacles, and other relevant objects in the car's path.
  - The Haar-Cascade classifier was trained on over 1,800 positive images (containing the target objects) and 900 negative images (without them) to detect and classify objects in real time.
  - Training proceeded in 10 stages to improve accuracy; a minimal detection sketch follows this list.
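The sketch below shows how a trained cascade of this kind is typically applied with EmguCV's CascadeClassifier. The file name and window size are assumptions, and the parameter values are common defaults rather than the project's tuned settings.

```csharp
using System.Drawing;
using Emgu.CV;

public static class SignDetector
{
    // Placeholder path for the project's trained 10-stage cascade.
    private static readonly CascadeClassifier Cascade =
        new CascadeClassifier("traffic_sign_cascade.xml");

    public static Rectangle[] Detect(Mat grayFrame)
    {
        // scaleFactor 1.1: shrink the search window by 10% per pass so objects
        //   of different sizes are found;
        // minNeighbors 3: require three overlapping candidate windows before
        //   accepting a detection, which suppresses isolated false positives;
        // minSize 24x24: ignore candidates smaller than the assumed training window.
        return Cascade.DetectMultiScale(grayFrame, 1.1, 3, new Size(24, 24));
    }
}
```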
- Autonomous Navigation:
  - The robocar was programmed to respond to detected objects with appropriate actions:
    - Turning: on detecting a directional traffic sign (e.g., left or right turn), the car steered accordingly.
    - Stopping: on recognizing a stop sign or an obstacle, the car halted to prevent a collision.
  - The software drove the robot's actuators (motors) according to real-time decisions from the object detection system; a sketch of this decision layer follows.
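The sketch below illustrates one way to structure that decision layer. It assumes a .NET EV3 library such as Lego.Ev3 (the report only names "Lego EV3 Mindstorm libraries"); the SignType enum, port assignments, and power levels are illustrative.

```csharp
using System.Threading.Tasks;
using Lego.Ev3.Core;   // assumed EV3 library; Brick exposes DirectCommand motor calls

public enum SignType { None, Stop, TurnLeft, TurnRight }   // illustrative sign categories

public class Navigator
{
    private readonly Brick _brick;
    public Navigator(Brick brick) { _brick = brick; }

    public async Task ReactAsync(SignType sign)
    {
        switch (sign)
        {
            case SignType.Stop:       // stop sign or obstacle: brake all drive motors
                await _brick.DirectCommand.StopMotorAsync(OutputPort.All, true);
                break;
            case SignType.TurnLeft:   // arc left by slowing the left wheel
                await _brick.DirectCommand.TurnMotorAtPowerAsync(OutputPort.B, 20);
                await _brick.DirectCommand.TurnMotorAtPowerAsync(OutputPort.C, 60);
                break;
            case SignType.TurnRight:  // arc right by slowing the right wheel
                await _brick.DirectCommand.TurnMotorAtPowerAsync(OutputPort.B, 60);
                await _brick.DirectCommand.TurnMotorAtPowerAsync(OutputPort.C, 20);
                break;
            default:                  // nothing detected: keep driving straight
                await _brick.DirectCommand.TurnMotorAtPowerAsync(OutputPort.B, 50);
                await _brick.DirectCommand.TurnMotorAtPowerAsync(OutputPort.C, 50);
                break;
        }
    }
}
```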
Implementation:
The implementation process was divided into several key stages to ensure a smooth integration of hardware and software.
Hardware Setup:
- Lego EV3 Robot Assembly: the physical robocar was built from Lego Mindstorms EV3 components, including:
  - Motors for movement (driving and steering).
  - A camera sensor for real-time image capture.
- The motors were configured to control the robot's movement, while the camera provided the visual input for object detection.
Software Development:
- Programming in C#: The majority of the programming was done in C# using the EmguCV library, which allowed seamless integration of OpenCV’s image processing functions with the Lego EV3 robot’s control system.
- Image Processing Pipeline (a preprocessing sketch follows this list):
  - Preprocessing: the captured images were converted to grayscale and normalized to improve the performance of object detection.
  - Object Detection: the Haar-Cascade classifier located candidate objects; once an object was detected, the system identified its type (e.g., a traffic sign) and passed this information to the decision-making module.
  - Control Logic: the detected object determined the action taken:
    - Obstacle Detection: the system stopped or steered around the obstacle.
    - Traffic Sign Detection: the robot stopped, turned, or followed other behavior according to the recognized sign.
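As a concrete example of the preprocessing step, the sketch below converts a frame to grayscale and applies histogram equalization, which is one plausible reading of the "normalization" mentioned above (the report does not specify the exact technique used).

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;

public static class Preprocessor
{
    public static Mat Preprocess(Mat frame)
    {
        var gray = new Mat();
        CvInvoke.CvtColor(frame, gray, ColorConversion.Bgr2Gray); // Haar cascades expect grayscale input
        CvInvoke.EqualizeHist(gray, gray);                        // even out lighting across the frame
        return gray;
    }
}
```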
Training the Haar-Cascade Classifier:
- Image Collection: a dataset was assembled by capturing positive images (containing the target object) and negative images (without it).
- Training the Classifier: the classifier was trained on this dataset over multiple stages (10 in total), with parameters tuned to minimize false positives and improve detection accuracy; an example training invocation follows.
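Cascade training of this kind is typically driven by OpenCV's stock command-line tools. The invocation below is a hypothetical example using the dataset sizes and stage count reported above; file names and the 24x24 window size are placeholders, and in practice -numPos is usually set slightly below the total positive count to leave a margin for discarded samples.

```sh
# Pack the positive samples into a .vec file (positives.txt lists image
# paths and object bounding boxes).
opencv_createsamples -info positives.txt -num 1800 -vec signs.vec -w 24 -h 24

# Train the Haar cascade: ~1800 positives, 900 negatives, 10 stages.
opencv_traincascade -data cascade_out -vec signs.vec -bg negatives.txt \
    -numPos 1800 -numNeg 900 -numStages 10 -featureType HAAR -w 24 -h 24
```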
Testing and Optimization:
- After implementation, the system was evaluated through a series of tests in a controlled environment containing various obstacles and traffic signs.
Real-Time Execution:
- Once the classifier was trained and optimized, the system was integrated for real-time execution: the robocar navigated autonomously by analyzing captured frames, detecting objects, and reacting to its environment accordingly.