PACPOD: Planar Acoustic Camera

Problem Statement:

Acoustic cameras are essential tools for locating sources of sound. While existing acoustic camera projects and products are available online, they tend to be both complex and expensive, so there is a clear need for a simpler, more cost-effective acoustic camera system.

Currently available acoustic cameras often require substantial expertise to operate and are prohibitively expensive for many potential users, limiting their widespread adoption. The challenge lies in creating an acoustic camera that performs beamforming effectively while remaining affordable, user-friendly, and accurate.

Most readily available microphone hardware supports at most a stereo (two-microphone) configuration, and getting more than two microphones to operate in sync is complex, requiring both technical expertise and costly equipment. Thus, the need for a simplified, accessible solution is evident.

Goals and Requirements:

The goals and requirements for this planar acoustic camera were developed by our group through research on existing acoustic cameras on the market. To meet market standards, we needed to accomplish the following:

Requirements:

  • GUI with spectrograms and high-pass, low-pass, band-pass, and band-stop filters (a filtering sketch follows this list)
  • Visual and acoustic image overlay
  • Audio input from the microphones
  • Video input from the camera
  • Create beamformed matrices using Acoular, a Python library
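
The project's GUI and filtering are implemented in MATLAB, but a minimal Python/SciPy sketch of the band-pass filtering and spectrogram step might look like the following; the file name, channel choice, and cutoff frequencies are placeholders rather than values from the project.

  # Sketch of the filtering/spectrogram requirement (placeholder file and frequencies).
  import numpy as np
  from scipy.io import wavfile
  from scipy import signal

  fs, audio = wavfile.read("recording.wav")       # multichannel WAV from the array
  channel = audio[:, 0].astype(np.float64)        # analyze a single channel

  # 4th-order Butterworth band-pass between 1 kHz and 4 kHz
  sos = signal.butter(4, [1000, 4000], btype="bandpass", fs=fs, output="sos")
  filtered = signal.sosfiltfilt(sos, channel)

  # Spectrogram of the filtered channel, as displayed in the GUI
  f, t, Sxx = signal.spectrogram(filtered, fs=fs, nperseg=1024)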

Goals:

  • Pinpoint frequency emissions across the x, y, and z axes
  • Filter the proper frequencies based on the user’s input
  • Real-time analysis
  • Create a microphone array with a high signal-to-noise ratio and suppressed side lobes

Approach:

To use the time we had most efficiently, we split the tasks among the group members: the graphical user interface (GUI), PCB design and manufacturing, localization of sounds from the microphone array and USB camera, and operation of the USB camera and visualization. Once complete, these tasks fit together seamlessly to form our overall system. The GUI takes in user input that starts the beamforming process; wave signals and a picture are then received from the microphones and camera, and this data is displayed on the GUI as an image overlaid with a heatmap of the filtered signal.

Design:

Our acoustic camera design uses a 16-microphone array shield on the miniDSP UMA-16, sampling at 16 kHz, in combination with a USB camera, and applies beamforming techniques from Acoular to accurately localize sources of sound. The 16 microphones are strategically positioned to capture sound from various directions, providing comprehensive coverage and enhancing the accuracy of the system. The array captures a wide range of frequencies, ensuring effective detection and localization of sound sources.
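
The core localization step follows Acoular's documented beamforming workflow. A minimal sketch is shown below; the file names, grid dimensions, and analysis frequency are placeholders rather than the project's actual values, and some parameter names differ between Acoular versions.

  # Sketch of an Acoular beamforming pipeline (placeholder names and values).
  import acoular

  mics = acoular.MicGeom(from_file="uma16_geometry.xml")   # microphone positions
  # Assumes the 16-channel WAV recording has been converted to Acoular's HDF5 format
  ts = acoular.TimeSamples(name="recording.h5")
  ps = acoular.PowerSpectra(time_data=ts, block_size=128, window="Hanning")

  # Plane of candidate source positions 1 m in front of the array
  grid = acoular.RectGrid(x_min=-0.5, x_max=0.5, y_min=-0.5, y_max=0.5,
                          z=1.0, increment=0.01)
  steer = acoular.SteeringVector(grid=grid, mics=mics)
  bf = acoular.BeamformerBase(freq_data=ps, steer=steer)

  # Sound map for the third-octave band around 4 kHz, converted to decibels
  sound_map = acoular.L_p(bf.synthetic(4000, 3))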

The process begins with the Python code, which controls the microphone array: it records the audio and generates the necessary beamformed matrices. The data is then transferred to a MATLAB graphical user interface (GUI), where it is merged with the video feed from the USB camera, allowing sound sources to be visualized on the video feed. The MATLAB GUI provides an intuitive platform for the user to interact with the data and to view the time series and spectrograms of the WAV file. Users can select specific frequencies they want to filter, enabling them to focus on particular sound sources. Once the user selects the frequencies, the beamforming process is restarted to create a new set of matrices customized to the selected frequencies. The overlay of the beamformed matrices on the video feed gives users a clear and precise visualization of the sources of sound, facilitating more efficient analysis.
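
One way to hand such band-limited maps to a MATLAB GUI is to write them to a .mat file. The sketch below assumes a beamformer object like the one above; the function, variable, and file names are hypothetical, not the project's actual interface.

  # Hypothetical export of band-limited sound maps for a MATLAB GUI.
  import acoular
  from scipy.io import savemat

  def export_maps(beamformer, centre_freqs, path="beam_maps.mat"):
      """Compute one third-octave sound map per selected centre frequency and
      save them as MATLAB variables (e.g. map_2000Hz) for the GUI to overlay."""
      maps = {f"map_{int(f)}Hz": acoular.L_p(beamformer.synthetic(f, 3))
              for f in centre_freqs}
      savemat(path, maps)

  # e.g. after the user selects the 2 kHz and 4 kHz bands in the GUI:
  # export_maps(bf, [2000, 4000])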

The use of a 16-microphone array, in conjunction with the USB camera and the MATLAB GUI, presents an accessible, user-friendly solution for sound localization. This design not only simplifies the process but also enhances the accuracy and efficiency of the acoustic camera system.

Component and System Testing:

We have created a custom four-layer PCB designed to interface with the miniDSP UMA-16 processing system. The board features 16 digital MEMS microphones, all of which record simultaneously; their output is stored in a .wav file and transmitted via USB. An asymmetrical microphone layout was chosen to increase accuracy.
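
Acoular can read the array geometry from a small XML file of microphone positions, so an asymmetrical layout like ours can be described to the software in that form. The sketch below writes such a file for an illustrative jittered 4 x 4 layout; the coordinates and file name are placeholders rather than the actual PCB geometry, and the XML structure is modeled on the geometry files that ship with Acoular.

  # Write a microphone geometry file for an illustrative asymmetric 16-mic layout.
  import numpy as np

  rng = np.random.default_rng(0)
  base = np.stack(np.meshgrid(np.linspace(-0.06, 0.06, 4),
                              np.linspace(-0.06, 0.06, 4)), axis=-1).reshape(-1, 2)
  positions = base + rng.uniform(-0.01, 0.01, base.shape)   # jitter breaks the symmetry

  with open("custom_array_16.xml", "w") as f:
      f.write('<?xml version="1.0" encoding="utf-8"?>\n<MicArray name="custom_array_16">\n')
      for i, (x, y) in enumerate(positions, start=1):
          f.write(f'  <pos Name="Point {i}" x="{x:.4f}" y="{y:.4f}" z="0.0"/>\n')
      f.write("</MicArray>\n")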

While we await our custom PCB, we have been testing using the miniDSP UMA-16 microphone array. Any heat maps shown here were generated using output from this device. This has allowed us to ensure the software portion of our project functions before we use our own array.

Project Status:

At the moment, the user is able to input an audio file into our MATLAB code. Then, they may use filters to focus on different frequencies contained within the file, as well as analyze separate channels within the file. The data is then sent to our Python code, where beamforming calculations are performed.

A heat map corresponding to 10 seconds of audio is generated and overlaid onto an image, indicating where the various noises originate. The camera footage is recorded in tandem with the audio.
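
In the project this overlay is produced in the MATLAB GUI; a minimal Python/matplotlib sketch of the same idea is shown below, with placeholder file names and a placeholder 15 dB display range.

  # Sketch of overlaying a beamforming heat map on a camera frame.
  import numpy as np
  import matplotlib.pyplot as plt

  frame = plt.imread("camera_frame.png")    # frame grabbed from the USB camera
  sound_map = np.load("sound_map.npy")      # dB map from the beamforming step

  plt.imshow(frame)
  # Stretch the map over the frame (it may need transposing/flipping to line up
  # with the camera view) and show only the top 15 dB, semi-transparent.
  plt.imshow(sound_map, alpha=0.5, cmap="hot",
             extent=(0, frame.shape[1], frame.shape[0], 0),
             vmin=sound_map.max() - 15, vmax=sound_map.max())
  plt.colorbar(label="Sound pressure level (dB)")
  plt.axis("off")
  plt.show()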
