A real-time sound detection and alert app for people with hearing loss. Dumbo listens for dangerous sounds and alerts you through haptic vibration, a full-screen flash, and a local notification — so you never miss something important.
People with hearing loss or deafness face a hidden danger that most people take for granted: they cannot hear warnings that others rely on instinctively — fire alarms, approaching emergency vehicles, a baby crying in the next room, breaking glass.
Existing solutions are either hardware-dependent (expensive smart home devices) or require constant screen attention. Dumbo runs entirely on your phone, requires no extra hardware, and alerts you through the senses you rely on most — touch and sight.
No cloud. No subscription. No personal audio data leaves your device.
| Sound | Flash Color | Haptic Pattern |
|---|---|---|
| Siren | Red | Double pulse |
| Fire Alarm | Red | Double pulse |
| Baby Cry | Blue | Long soft vibration |
| Breaking Glass | Amber | Short sharp pulse |
| Car Horn | Amber | Short sharp pulse |
An alert fires only when a sound is classified at 70% confidence or higher, which keeps false alerts to a minimum.
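The table and threshold above can be sketched as a small lookup in pure Kotlin. This is an illustrative model, not the app's actual types — the enum, `AlertProfile`, and `profileFor` names are assumptions:

```kotlin
// Illustrative sketch of the sound → alert mapping and 70% gate.
enum class FlashColor { RED, BLUE, AMBER }
enum class HapticPattern { DOUBLE_PULSE, LONG_SOFT, SHORT_SHARP }

data class AlertProfile(val flash: FlashColor, val haptic: HapticPattern)

const val CONFIDENCE_THRESHOLD = 0.70f

val alertProfiles: Map<String, AlertProfile> = mapOf(
    "Siren" to AlertProfile(FlashColor.RED, HapticPattern.DOUBLE_PULSE),
    "Fire Alarm" to AlertProfile(FlashColor.RED, HapticPattern.DOUBLE_PULSE),
    "Baby Cry" to AlertProfile(FlashColor.BLUE, HapticPattern.LONG_SOFT),
    "Breaking Glass" to AlertProfile(FlashColor.AMBER, HapticPattern.SHORT_SHARP),
    "Car Horn" to AlertProfile(FlashColor.AMBER, HapticPattern.SHORT_SHARP),
)

// Returns null (no alert) for unknown labels or low-confidence results.
fun profileFor(label: String, confidence: Float): AlertProfile? =
    if (confidence >= CONFIDENCE_THRESHOLD) alertProfiles[label] else null
```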
```
Microphone (16 kHz PCM)
        |
        v
Audio Chunks (100ms)
        |
        v
ML Classifier
┌─────────────────────────────────────────┐
│ Android: MediaPipe YAMNet (TFLite)      │
│ iOS: Apple SoundAnalysis Framework      │
└─────────────────────────────────────────┘
        |
        v
Confidence >= 70%?
        |
        v
Alert Layer
┌────────────────────────────────────────────────┐
│ Haptic → vibration pattern via OS feedback     │
│ Visual → full-screen color flash (1.6s fade)   │
│ Notify → local notification (works in bg)      │
└────────────────────────────────────────────────┘
```
All inference runs on-device. Audio is processed in memory and never stored or transmitted.
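At 16 kHz, each 100 ms chunk works out to 1600 samples. A minimal pure-Kotlin sketch of the chunking stage (the constants match the diagram; the `chunkPcm` helper is illustrative, not the app's actual API):

```kotlin
// 16 kHz mono PCM split into 100 ms windows before classification.
const val SAMPLE_RATE_HZ = 16_000
const val CHUNK_MILLIS = 100
const val CHUNK_SAMPLES = SAMPLE_RATE_HZ * CHUNK_MILLIS / 1000 // 1600 samples

// Splits a PCM buffer into full 100 ms chunks; a trailing partial
// chunk is dropped (it would be carried into the next buffer).
fun chunkPcm(samples: ShortArray): List<ShortArray> =
    (0 until samples.size / CHUNK_SAMPLES).map { i ->
        samples.copyOfRange(i * CHUNK_SAMPLES, (i + 1) * CHUNK_SAMPLES)
    }
```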
Dumbo is a Kotlin Multiplatform app with ~95% shared code. Platform-specific APIs (microphone, haptics, notifications, ML) are abstracted behind interfaces and injected via Koin.
```
commonMain
├── domain/      UseCase logic — ClassifySoundUseCase, AlertUseCase
├── platform/    Interface declarations — AudioCapture, AudioClassifier,
│                HapticFeedback, NotificationService, MicrophonePermission
├── viewmodel/   SoundDetectionViewModel — StateFlow<SoundDetectionState>
├── ui/          Compose screens, FlashOverlay, permission primers
└── di/          Koin modules — AppModule, PlatformModule (expect)

androidMain
├── platform/    AudioRecord, MediaPipe YAMNet, Vibrator, NotificationCompat
└── di/          PlatformModule.android.kt (actual)

iosMain
├── platform/    AVAudioEngine, SoundAnalysis, UIImpactFeedback, UNUserNotification
└── di/          PlatformModule.ios.kt (actual)
```
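The interface names above come from the tree; the method signatures below are illustrative assumptions showing how a `commonMain` abstraction plus a test fake might look. The real Android and iOS implementations (MediaPipe / SoundAnalysis) are bound to these interfaces in the `actual` Koin modules:

```kotlin
// Hypothetical result type — the app's actual type may differ.
data class Classification(val label: String, val confidence: Float)

// Sketch of a commonMain platform interface (name from the tree above,
// signature assumed). androidMain and iosMain provide the real actuals.
interface AudioClassifier {
    fun classify(chunk: ShortArray): Classification?
}

// A fake implementation, handy in commonTest where no platform ML exists.
class FakeClassifier(private val result: Classification?) : AudioClassifier {
    override fun classify(chunk: ShortArray): Classification? = result
}
```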
Data flow:

```
AudioCapture → ClassifySoundUseCase → SoundDetectionViewModel → AlertUseCase
   (Flow)            (Flow)               (StateFlow)      (haptic + flash + notify)
```
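The flow above can be sketched with plain functions and a `Sequence` standing in for the real `Flow`-based components — every name and signature here is illustrative, not the app's actual API:

```kotlin
// Hypothetical detection result carried through the pipeline.
data class Detection(val label: String, val confidence: Float)

// capture  ~ AudioCapture        (emits PCM chunks)
// classify ~ ClassifySoundUseCase (chunk → detection or null)
// alert    ~ AlertUseCase        (haptic + flash + notify)
fun pipeline(
    capture: () -> Sequence<ShortArray>,
    classify: (ShortArray) -> Detection?,
    alert: (Detection) -> Unit,
) {
    capture()
        .mapNotNull(classify)
        .filter { it.confidence >= 0.70f } // confidence gate from the diagram
        .forEach(alert)
}
```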
| Component | Technology |
|---|---|
| Language | Kotlin 2.3.0 (K2 compiler) |
| UI | Compose Multiplatform 1.10.0 |
| DI | Koin 4.1.0 |
| Android ML | MediaPipe Tasks Audio + YAMNet (TFLite) |
| iOS ML | Apple SoundAnalysis framework (built-in) |
| Async | Kotlin Coroutines + Flow |
| Min Android | API 24 (Android 7.0) |
| Min iOS | iOS 15 |
- Android Studio Meerkat or later
- Xcode 15 or later (for iOS builds)
- JDK 17
- Kotlin Multiplatform plugin installed in Android Studio
```shell
git clone https://github.com/your-username/Dumbo.git
cd Dumbo
```

- Open the project in Android Studio.
- Wait for Gradle sync to complete.
- Select the `composeApp` run configuration.
- Run on a physical device or emulator (API 24+).
Sound classification works best on a real device. Microphone input on emulators is limited.
- Build the shared framework from the project root:

  ```shell
  ./gradlew :composeApp:linkDebugFrameworkIosSimulatorArm64
  ```

- Open `iosApp/iosApp.xcodeproj` in Xcode.
- Select your simulator or device target.
- Build and run (`Cmd+R`).
For a physical iOS device, set your development team in Xcode under Signing & Capabilities.
Dumbo requests permissions only at the moment they are needed — never on first launch.
| Permission | Platform | When Asked |
|---|---|---|
| Microphone | Android + iOS | When you tap "Enable Microphone" |
| Vibration | Android | Granted automatically at install |
| Notifications | Android (API 33+) | On the first sound detection |
| Notifications | iOS | On the first sound detection |
If a permission is permanently denied, the app shows a banner with a direct link to your device settings. The app stays usable — it never blocks you behind a permission wall.
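The denial handling above can be modeled as a small state machine. This is a sketch under assumed names (`PermissionState`, `showSettingsBanner` are not the app's actual API):

```kotlin
// Illustrative permission states as surfaced to the UI layer.
sealed interface PermissionState {
    object NotYetAsked : PermissionState
    object Granted : PermissionState
    object Denied : PermissionState            // can still re-prompt
    object PermanentlyDenied : PermissionState // OS will no longer prompt
}

// The settings-link banner appears only for a permanent denial;
// every other state keeps the normal in-app prompt, so the app
// never blocks behind a permission wall.
fun showSettingsBanner(state: PermissionState): Boolean =
    state is PermissionState.PermanentlyDenied
```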
```
Dumbo/
├── composeApp/
│   └── src/
│       ├── commonMain/    Shared Kotlin code (UI, domain, DI)
│       ├── androidMain/   Android implementations + YAMNet model asset
│       └── iosMain/       iOS implementations
├── iosApp/                Xcode project (Swift entry point)
└── gradle/
    └── libs.versions.toml Single source of truth for all dependency versions
```
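As a sketch, the catalog pins the versions from the tech-stack table above. This excerpt is illustrative — the entry names and the `koin-core` library alias are assumptions, not the file's actual contents:

```toml
# gradle/libs.versions.toml — illustrative excerpt, not the complete catalog
[versions]
kotlin = "2.3.0"
composeMultiplatform = "1.10.0"
koin = "4.1.0"

[libraries]
koin-core = { module = "io.insert-koin:koin-core", version.ref = "koin" }
```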
Contributions are welcome. Please open an issue first to discuss what you want to change before submitting a pull request.
Areas where help would be most valuable:
- Adding new sound classes beyond the current five
- iOS background audio session improvements
- Accessibility improvements (larger flash area, screen reader support)
- Localization
MIT License. See LICENSE for details.
- YAMNet by Google — audio classification model used on Android
- Apple SoundAnalysis — built-in iOS sound recognition framework
- MediaPipe — on-device ML tasks for Android
- Kotlin Multiplatform — shared code across Android and iOS
- Compose Multiplatform by JetBrains — shared declarative UI