
Setup: with/without damping

Mocap software; tracking balls in a 3D-printed jig with mics at centroids

Real-time mic feedback example


Project Paper

Multi-Microphone Acoustic Mapping for Drone Path Planning

Embedded Systems | ROS2 | Multithreaded Audio Processing | Trajectory Optimization

This project developed a real-time, ROS2-based data collection framework to generate spatially and temporally synchronized acoustic maps for drone navigation in noise-sensitive urban environments. The system enables dynamic acoustic obstacle modeling by capturing 12-channel audio and drone position data at 10 Hz using a motion capture system and a microphone array. These data form the foundation for trajectory optimization methods such as RRT*, Control Lyapunov Functions (CLF), and Control Barrier Functions (CBF), allowing future planners to treat loud zones as dynamic, soft constraints.
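
For illustration, the per-channel dB values that feed the acoustic map can be estimated from the RMS of each logged audio block. The sketch below assumes uncalibrated microphones (relative dB rather than absolute SPL) and an illustrative 100 ms block size; these are assumptions, not the project's exact parameters.

```python
import numpy as np

def block_db_levels(block: np.ndarray, ref: float = 1.0) -> np.ndarray:
    """Estimate a dB level per microphone channel from one audio block.

    block: float32 samples shaped (n_samples, n_channels), e.g. one 100 ms
           window per 10 Hz logging frame across all 12 microphones.
    ref:   reference amplitude; with uncalibrated mics this gives relative
           (dBFS-style) levels rather than absolute SPL.
    """
    rms = np.sqrt(np.mean(block.astype(np.float64) ** 2, axis=0))
    return 20.0 * np.log10(np.maximum(rms, 1e-12) / ref)

# Example: 100 ms of 12-channel audio at 48 kHz -> one 12-element dB vector
levels = block_db_levels(np.random.randn(4800, 12).astype(np.float32) * 0.1)
```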

Working in a team of two, I led the complete software development and integration, ensuring the platform could reliably log, synchronize, and export high-quality data to support advanced noise-aware motion planning.

Contributions

ROS2 Software Stack:
• Developed three core ROS2 nodes:
– mic12_Audio_capture.py: Parallel 12-mic audio capture with per-channel buffers and live .wav streaming
– mic12_CSV_logger.py: Real-time logging of Tello + mic positions, distances, dB levels, and nanosecond timestamps
– mic12_Audio_plotter.py: Live visualization of all mic channels for experimental tuning
• Launched and coordinated all nodes via 12mic_launch.py (a minimal launch-file sketch follows this list)
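
As a rough sketch of how a launch file like 12mic_launch.py can coordinate the three nodes, the snippet below uses placeholder package and executable names; the actual launch file may wire things differently.

```python
# Hypothetical sketch of a 12-mic launch file; package and executable names
# are placeholders, not the project's actual ones.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(package='mic12', executable='mic12_Audio_capture',
             name='audio_capture', output='screen'),
        Node(package='mic12', executable='mic12_CSV_logger',
             name='csv_logger', output='screen'),
        Node(package='mic12', executable='mic12_Audio_plotter',
             name='audio_plotter', output='screen'),
    ])
```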

Path Planning–Oriented Design:
• Designed the data schema and logging cadence (10 Hz) specifically for compatibility with dynamic planners using CLF-CBF constraints and RRT* cost reweighting (an illustrative row schema follows this list)
• Enabled data fusion: each frame includes drone pose, relative mic distances, dB values, and synchronized timestamps to simulate spatiotemporal acoustic obstacles
• Integrated drone automation (tellocontroller.py) to ensure repeatable trajectories across environmental conditions
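
To make the schema concrete, here is a minimal sketch of the kind of 10 Hz row the logger could emit; the field names and file handling are illustrative assumptions, not the project's exact format.

```python
# Illustrative 10 Hz logging row: nanosecond timestamp, drone pose, and
# per-mic distances and dB levels in one synchronized frame.
import csv, time

N_MICS = 12
FIELDS = (['t_ns', 'drone_x', 'drone_y', 'drone_z']
          + [f'mic{i}_dist_m' for i in range(N_MICS)]
          + [f'mic{i}_db' for i in range(N_MICS)])

def write_frame(writer, pose, dists, dbs):
    """One synchronized frame: pose, relative mic distances, and dB levels."""
    writer.writerow(dict(zip(FIELDS, [time.time_ns(), *pose, *dists, *dbs])))

with open('acoustic_log.csv', 'w', newline='') as f:
    w = csv.DictWriter(f, fieldnames=FIELDS)
    w.writeheader()
    # In the real node this would run from a 10 Hz ROS2 timer callback.
    write_frame(w, pose=(0.0, 0.0, 1.2), dists=[1.0] * N_MICS, dbs=[-30.0] * N_MICS)
```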

System Architecture:
• Motion capture integration via VRPN using a prebuilt ROS1–ROS2 bridge
• 3D-printed jig holding each mic and its mocap tracking balls for accurate mic position capture
• Audio capture via sounddevice using preallocated buffers and periodic .wav output with binary interleaving
• Thread-safe audio streaming with timestamped message passing for cross-node coherence (a minimal capture-to-writer sketch follows this list)
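
A minimal sketch of the callback-to-writer handoff is shown below. It assumes a 48 kHz, 12-channel input device and 100 ms blocks, and omits the real node's ROS2 message passing and preallocated ring buffers.

```python
# Hedged sketch: sounddevice callback timestamps each interleaved block and
# hands it through a thread-safe queue to a .wav writer.
import queue, time
import sounddevice as sd
import soundfile as sf

SAMPLE_RATE, CHANNELS, BLOCK = 48_000, 12, 4800   # 100 ms blocks at 10 Hz
blocks = queue.Queue()                            # thread-safe handoff

def audio_callback(indata, frames, time_info, status):
    # Runs on the audio thread: copy the block out of the driver buffer and
    # timestamp it before passing it to the consumer thread.
    blocks.put((time.time_ns(), indata.copy()))

with sd.InputStream(samplerate=SAMPLE_RATE, channels=CHANNELS,
                    blocksize=BLOCK, dtype='float32',
                    callback=audio_callback):
    with sf.SoundFile('mic12_stream.wav', mode='w', samplerate=SAMPLE_RATE,
                      channels=CHANNELS, subtype='FLOAT') as wav:
        for _ in range(50):                       # ~5 s for illustration
            t_ns, block = blocks.get()            # blocks until data arrives
            wav.write(block)                      # interleaved 12-channel frames
```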

Experimental Variation for Planner Stress Testing:
• Conducted systematic tests varying mic coverage, foam damping, and drone altitude to produce path-relevant datasets with shifting acoustic fields
• Designed for compatibility with CBF-based planners that optimize flight paths by minimizing cumulative noise exposure (an illustrative cost-reweighting sketch follows this list)
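
As an illustration of noise-aware cost reweighting, the sketch below adds a cumulative-exposure penalty to an RRT*-style edge cost; the weighting and the noise_at() map lookup are assumptions, not the project's planner.

```python
# Illustrative soft-constraint cost: path length plus noise exposure sampled
# from the logged acoustic field along each edge.
import numpy as np

def edge_cost(p, q, noise_at, w=0.05, n_samples=10):
    """Euclidean length plus a penalty for cumulative noise exposure.

    p, q:      3D waypoints
    noise_at:  callable mapping a position to an interpolated dB level from
               the logged acoustic map (assumed, not the project's API)
    w:         trade-off between path length and noise exposure
    """
    p, q = np.asarray(p, float), np.asarray(q, float)
    length = np.linalg.norm(q - p)
    # Sample the acoustic field along the edge and accumulate exposure.
    pts = p + np.linspace(0.0, 1.0, n_samples)[:, None] * (q - p)
    exposure = np.mean([noise_at(x) for x in pts]) * length
    return length + w * exposure

# Example with a fabricated field: a single loud source at the origin.
cost = edge_cost([0, 0, 1], [2, 0, 1],
                 noise_at=lambda x: 60.0 - 10.0 * np.linalg.norm(x))
```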

More Resources
