The code for the micromouse, designed to run on a Raspberry Pi Pico, supports a range of modes such as combat, obstacle avoidance and white line following. The program begins by configuring the hardware and assigning GPIO pins to the various sensors. Specifically, it uses two front-facing and two rear-facing white line sensors to detect boundary limits. In addition, IR sensors are used for obstacle avoidance, which is crucial for autonomous navigation without collisions. The program integrates these sensors through the GPIO interface to coordinate the micromouse's movement.
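
A minimal sketch of how this sensor setup might be configured is shown below, assuming MicroPython on the Pico; the pin numbers and variable names are illustrative assumptions rather than the project's actual wiring.

```python
# Illustrative sensor setup (MicroPython on the Raspberry Pi Pico).
# Pin numbers are assumptions, not the project's real assignments.
from machine import Pin

# Two front-facing and two rear-facing white line sensors (digital inputs)
line_front_left  = Pin(2, Pin.IN)
line_front_right = Pin(3, Pin.IN)
line_rear_left   = Pin(4, Pin.IN)
line_rear_right  = Pin(5, Pin.IN)

# IR obstacle-avoidance sensors (digital inputs)
ir_left  = Pin(6, Pin.IN)
ir_right = Pin(7, Pin.IN)

def read_sensors():
    """Poll every sensor once and return the readings as a dictionary."""
    return {
        "line": (line_front_left.value(), line_front_right.value(),
                 line_rear_left.value(), line_rear_right.value()),
        "obstacle": (ir_left.value(), ir_right.value()),
    }
```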

The program also contains a special white-line-following behaviour: after following the white line for 20 seconds, the micromouse reverses and then follows the line anti-clockwise for another 20 seconds.
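
The timed reversal could be structured as in the rough sketch below; follow_line_step() and reverse_briefly() are hypothetical placeholders standing in for the project's own line-following and reversing routines.

```python
# Rough sketch of the timed reversal behaviour (MicroPython).
import time

FOLLOW_MS = 20000  # 20 seconds in each direction

def follow_line_step(direction):
    # Placeholder: one control step of line following in the given direction.
    pass

def reverse_briefly():
    # Placeholder: back away before changing direction.
    pass

def timed_line_follow():
    start = time.ticks_ms()
    # Follow the white line for the first 20 seconds
    while time.ticks_diff(time.ticks_ms(), start) < FOLLOW_MS:
        follow_line_step("clockwise")
    reverse_briefly()
    start = time.ticks_ms()
    # Then follow the line anti-clockwise for another 20 seconds
    while time.ticks_diff(time.ticks_ms(), start) < FOLLOW_MS:
        follow_line_step("anticlockwise")
```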

Regarding movement and navigation, the code includes a set of functions that manage the motors for forward, reverse and turning actions, along with other movement commands. These functions respond to sensor inputs to regulate motor speed accurately, enabling the micromouse to execute manoeuvres when necessary. This sensor feedback is processed in real time, which is essential for behaviours such as obstacle avoidance.
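
The motor interface might look like the sketch below, assuming a PWM-driven H-bridge motor driver; the pin numbers, duty-cycle range and function names are illustrative assumptions, not the project's actual implementation.

```python
# Illustrative motor control functions (MicroPython, assumed H-bridge wiring).
from machine import Pin, PWM

left_fwd  = PWM(Pin(8));  left_fwd.freq(1000)
left_rev  = PWM(Pin(9));  left_rev.freq(1000)
right_fwd = PWM(Pin(10)); right_fwd.freq(1000)
right_rev = PWM(Pin(11)); right_rev.freq(1000)

def _drive(pwm_on, pwm_off, speed):
    """Run one side of the drive at `speed` (0.0 to 1.0) in one direction."""
    pwm_off.duty_u16(0)
    pwm_on.duty_u16(int(speed * 65535))

def forward(speed=0.6):
    _drive(left_fwd, left_rev, speed)
    _drive(right_fwd, right_rev, speed)

def reverse(speed=0.6):
    _drive(left_rev, left_fwd, speed)
    _drive(right_rev, right_fwd, speed)

def turn_left(speed=0.5):
    # Spin on the spot: left wheel backwards, right wheel forwards
    _drive(left_rev, left_fwd, speed)
    _drive(right_fwd, right_rev, speed)

def stop():
    for pwm in (left_fwd, left_rev, right_fwd, right_rev):
        pwm.duty_u16(0)
```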

Overall, the code running on the micromouse is what brings everything together, allowing it to switch between different modes such as combat and obstacle avoidance. By setting up the seesaw board (which provided options for additional features) and connecting the GPIO pins to the sensors, the micromouse could effectively sense its surroundings and react in real time. Hence, the code ensures that the micromouse can move intelligently and handle different situations on its own.
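
A simplified view of the top-level loop is sketched below; the mode names follow the description above, but the dispatch structure and the run_mode_step() placeholder are assumptions rather than the project's actual code, and mode selection via the seesaw board is omitted.

```python
# Illustrative top-level loop with mode dispatch (MicroPython).
import time

MODES = ("combat", "obstacle_avoidance", "line_following")

def run_mode_step(mode):
    # Placeholder: one iteration of the selected behaviour; the real code
    # would call the sensor-reading and motor functions sketched earlier.
    pass

def main(selected_mode="obstacle_avoidance"):
    while True:
        run_mode_step(selected_mode)
        time.sleep_ms(10)  # short delay so the loop polls the sensors frequently
```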