
Week #3 #

Implemented MVP features #

Hardware Integration #

Completely rewired the RC car. The Arduino, Raspberry Pi, and sensors are mounted on the vehicle and connected.

Communication Setup #

A stable SSH connection to the Raspberry Pi has been established, and IMU data from the Arduino is streamed through the Pi and accessed remotely.
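
As an illustration of the streaming path, below is a minimal sketch of a Pi-side reader, assuming the Arduino prints one comma-separated IMU sample per line over USB serial and that pyserial is used. The port name, baud rate, and line format are assumptions, not the project's actual settings.

```python
# Minimal sketch of a Pi-side IMU reader (assumed pyserial, assumed line
# format "ax,ay,az,gx,gy,gz" printed by the Arduino once per sample).
import serial

PORT = "/dev/ttyACM0"   # assumed device node for the Arduino
BAUD = 115200           # assumed baud rate

def stream_imu():
    with serial.Serial(PORT, BAUD, timeout=1.0) as ser:
        while True:
            line = ser.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue  # timeout or empty line
            try:
                ax, ay, az, gx, gy, gz = map(float, line.split(","))
            except ValueError:
                continue  # skip malformed lines
            yield ax, ay, az, gx, gy, gz

if __name__ == "__main__":
    for sample in stream_imu():
        print(sample)
```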

Sensor Fusion #

Orientation is calculated from both the gyroscope and the accelerometer. The two estimates are compared and visualized with error tracking.
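
A minimal sketch of such a comparison is shown below: pitch is estimated from the accelerometer's gravity vector, the gyro rate is integrated, and the absolute difference between the two is tracked as the error. The axis conventions, sample format, and fixed time step are assumptions; the actual comparison may differ.

```python
# Illustrative comparison of gyro-integrated vs. accelerometer-derived pitch,
# with a running error metric. Axis conventions and sample layout are assumed.
import math

def accel_pitch(ax, ay, az):
    """Pitch angle (rad) estimated from the gravity direction."""
    return math.atan2(-ax, math.sqrt(ay * ay + az * az))

def compare_orientation(samples, dt=0.02):
    """samples: iterable of (ax, ay, az, gx, gy, gz); gyro rates in rad/s."""
    gyro_pitch = None
    errors = []
    for ax, ay, az, gx, gy, gz in samples:
        acc = accel_pitch(ax, ay, az)
        if gyro_pitch is None:
            gyro_pitch = acc           # initialize from the accelerometer
        else:
            gyro_pitch += gy * dt      # integrate gyro rate (drifts over time)
        errors.append(abs(gyro_pitch - acc))
    return errors  # visualize to see gyro drift against the accel reference
```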

Dataset Collection #

Collected 32 datasets: 16 for normal driving, 16 with drift transitions. Ready for training the control model.

Demonstration of the working MVP #

Videos demonstrating the MVP

ML #

Link to the training code: Link

We trained a feed-forward neural network (5-128-128-4) on 94k real-world (state, action -> next state) samples from straight driving and drifting. Data was preprocessed (yaw integration, normalization, rotation to global frame). The model is used in a random-shooting MPC with a 0.2s horizon, minimizing yaw rate and lateral acceleration. It outputs steering every 20 ms. In tests, the controller damped drifts approximately 3x faster than a human, completing a full data-to-control pipeline.
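
For reference, here is a minimal sketch of the random-shooting MPC loop described above, assuming the learned model maps a 4-D state plus a scalar steering action to the next 4-D state and can be evaluated on a batch of candidates. The candidate count, cost weights, and state layout (which indices hold yaw rate and lateral acceleration) are illustrative assumptions, not the values used in our controller.

```python
# Sketch of a random-shooting MPC step over a learned dynamics model.
# Candidate count, state indices, and action range are assumptions.
import numpy as np

HORIZON_STEPS = 10      # 0.2 s horizon at a 20 ms control period
N_CANDIDATES = 256      # random steering sequences sampled per control step

def plan_steering(dynamics, state, rng, yaw_rate_idx=2, lat_acc_idx=3):
    """Return the first steering command of the lowest-cost random sequence.

    dynamics(states, actions) is assumed to take a (K, 4) state batch and a
    (K,) action batch and return the predicted (K, 4) next states.
    """
    seqs = rng.uniform(-1.0, 1.0, size=(N_CANDIDATES, HORIZON_STEPS))
    costs = np.zeros(N_CANDIDATES)
    states = np.repeat(state[None, :], N_CANDIDATES, axis=0)
    for t in range(HORIZON_STEPS):
        states = dynamics(states, seqs[:, t])       # batched one-step rollout
        costs += states[:, yaw_rate_idx] ** 2       # penalize yaw rate
        costs += states[:, lat_acc_idx] ** 2        # penalize lateral accel
    best = int(np.argmin(costs))
    return float(seqs[best, 0])   # apply the first action only, then replan
```

Applying only the first action and replanning every 20 ms keeps the controller reactive, while the 10-step rollout covers the full 0.2 s horizon.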

Links to the initial model artifacts: Link

Weekly commitments #

Individual contribution of each participant #

Valeria:

  • Collecting driving datasets
  • Writing code to control the servo motor based on the signal from the RL controller (a minimal Pi-side sketch follows this list)
  • Working with the platform:
    Connecting all the elements according to the new scheme
    Integrating all computing components (Arduino, Raspberry Pi, IMU) on the vehicle and setting up stable communication between them
    Photo 1, Photo 2, Photo 3
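
The sketch below illustrates the Pi-side sender referenced above: the controller's steering output is forwarded over serial to the Arduino, which drives the servo. The "S<angle>\n" text protocol, port name, and servo limits are assumptions made only for illustration.

```python
# Minimal sketch of forwarding a steering command to the Arduino servo driver.
# The text protocol, port, and angle range are illustrative assumptions.
import serial

def send_steering(ser, angle_deg):
    """Clamp the commanded angle and send it as a one-line text command."""
    angle_deg = max(45, min(135, int(round(angle_deg))))  # assumed servo limits
    ser.write(f"S{angle_deg}\n".encode("ascii"))

if __name__ == "__main__":
    with serial.Serial("/dev/ttyACM0", 115200, timeout=0.1) as ser:
        send_steering(ser, 90)  # example: center the steering servo
```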

Nikolay:

  • Collecting driving datasets
  • Implementing the MPC controller (random-shooting, 0.2s horizon, UART control output)

Lidia:

  • Working with sensors:
    Preprocessing raw sensor data: timestamp conversion, yaw integration, coordinate-frame rotation, angular-acceleration computation (a minimal sketch follows this list)
    Analyzing the values from the IMU sensors
    Estimating the magnitude of the gyroscope error
    Visualizing the angular acceleration and the car's trajectory (for 3 cases)
  • Helping with dataset collection
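
The sketch below shows one way the preprocessing steps above could fit together for a single logged run: integrate yaw rate to get heading, rotate body-frame accelerations into the global frame, and differentiate the yaw rate for angular acceleration. Column order and units (seconds, rad/s, m/s²) are assumptions about the log format.

```python
# Illustrative preprocessing of a logged IMU run; array layout and units are
# assumed, not taken from the actual log format.
import numpy as np

def preprocess(t, gz, ax, ay):
    """t, gz, ax, ay: 1-D arrays of timestamps, yaw rate, body-frame accels."""
    dt = np.diff(t, prepend=t[0])
    yaw = np.cumsum(gz * dt)                     # yaw integration
    cos_y, sin_y = np.cos(yaw), np.sin(yaw)
    ax_g = cos_y * ax - sin_y * ay               # rotate to global frame
    ay_g = sin_y * ax + cos_y * ay
    ang_acc = np.gradient(gz, t)                 # angular acceleration
    return yaw, ax_g, ay_g, ang_acc
```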

Ilyas:

  • Built the full dynamics dataset: created (state, action) → next state pairs and computed normalization stats
  • Trained the neural network model (5-128-128-4) using MSE loss and the Adam optimizer, and saved the model weights and normalization stats (a minimal training sketch follows this list)
    Training Code, Initial Model
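
The sketch below shows what such a training loop could look like, assuming NumPy arrays X (N x 5, state plus action) and Y (N x 4, next state), z-score normalization, and PyTorch. The hyperparameters are illustrative, not the values actually used.

```python
# Sketch of the dynamics-model training described above. Epochs, learning
# rate, and batch size are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn

def train(X, Y, epochs=50, lr=1e-3, batch_size=256):
    # Normalization statistics, saved alongside the weights.
    x_mean, x_std = X.mean(0), X.std(0) + 1e-8
    y_mean, y_std = Y.mean(0), Y.std(0) + 1e-8
    Xn = torch.tensor((X - x_mean) / x_std, dtype=torch.float32)
    Yn = torch.tensor((Y - y_mean) / y_std, dtype=torch.float32)

    # 5-128-128-4 feed-forward network, as in the report.
    model = nn.Sequential(
        nn.Linear(5, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, 4),
    )
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()

    for epoch in range(epochs):
        perm = torch.randperm(len(Xn))
        for i in range(0, len(Xn), batch_size):
            idx = perm[i:i + batch_size]
            opt.zero_grad()
            loss = loss_fn(model(Xn[idx]), Yn[idx])
            loss.backward()
            opt.step()

    torch.save(model.state_dict(), "dynamics_model.pt")
    np.savez("norm_stats.npz", x_mean=x_mean, x_std=x_std,
             y_mean=y_mean, y_std=y_std)
    return model
```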

Andrey:

Plan for Next Week #

Upgrade Arduino Wiring:
Replace Arduino wires with higher-quality ones, solder them securely, and reinforce connections to prevent disconnections.

Model and Mount Hardware Frames:
Design and 3D print custom mounts for all components to ensure a compact, safe, and organized layout inside the RC car.

Fix Arduino–Raspberry Pi Communication:
Debug and optimize the connection between Arduino and Raspberry Pi to ensure fast, stable, and lossless data transfer — current delays and dropouts need to be resolved.

Improve RL Stability:
Collect additional driving datasets to increase the robustness and consistency of the reinforcement learning model during drifting.

Refactor Arduino Code:
Add interrupt handling for all critical and unpredictable car states to improve reliability and prevent unexpected behavior.

Confirmation of the code’s operability #

We confirm that the code in the main branch:

  • Is in working condition.
  • Runs via docker-compose (or another alternative described in the README.md).