What Is Hum to Search? How Google's Song Finder Works

A complete explanation of Google's AI-powered melody recognition technology — what it is, how it works, its history, accuracy, and everything in between

Last updated: February 2026 | Reading time: 11 minutes

Imagine humming a melody you can't get out of your head — slightly off-key, no words, just a few bars of tune — and having a computer tell you the song title and artist within seconds. That's exactly what Google's Hum to Search does. It's one of the most quietly impressive AI applications of recent years, solving a problem that has frustrated music lovers for generations: how do you find a song when you only remember how it sounds?

This article explains everything about Hum to Search: what it is, the technology behind it, its history, accuracy, limitations, and how it compares to other music recognition tools. For hands-on usage instructions across all platforms, see our complete hum to search guide.

Hum to Search: Quick Definition

Hum to Search is a Google feature that identifies songs from hummed, whistled, or sung melodies using artificial intelligence. It was launched in October 2020, works on Android, iOS, and desktop, and covers 500,000+ songs in 20+ languages. Unlike Shazam, it doesn't need the song to be playing — it works from your voice alone.

The Problem Hum to Search Solves

To understand why Hum to Search is significant, consider the problem it addresses. Automatic music recognition has existed for decades — Shazam launched in 2002, SoundHound in 2009 — but these tools all require the actual song to be present. They work by recording a snippet of ambient audio and matching it to a database of real recordings.

This works perfectly when you hear a song in a café and want to identify it. But it completely fails when the song is only in your memory — an earworm you heard yesterday, a tune from a movie you watched three years ago, a song your parent used to hum when you were a child. In these cases, there is no external audio to capture.

Before Hum to Search, your options were limited: search the internet with descriptive phrases (“upbeat pop song 2019 female singer piano”), remember any fragments of lyrics and search those, ask friends, or post in online communities. All of these work occasionally, but none reliably and quickly. Hum to Search fills this gap with a direct, intuitive interface: just hum what's in your head.

History and Development

The Origins of Melody-Based Search

The concept of querying by melody (often called “query by humming” or QBH in academic literature) dates back to the early 1990s in music information retrieval (MIR) research. Academic systems like Themefinder (1995) and Melodic-Index allowed users to enter musical notation or use a keyboard interface to search music databases. However, these tools were clunky, required musical knowledge, and had small databases.

Midomi (2007) was among the first consumer-facing applications to accept hummed voice queries, but its database and accuracy were limited. SoundHound later built humming recognition into its music identification app, but its results remained less accurate than what Google would eventually achieve.

Google's Entry: October 2020

Google officially launched Hum to Search in October 2020 with a blog post describing the underlying technology. The announcement came during a period of rapid AI advancement at Google, and the feature was built on the same infrastructure powering other Google AI projects, including the SPICE (Self-Supervised Pitch Estimation) model developed by the Google Research team.

The initial launch supported English on Android devices, with the feature accessible via the Google Search widget and the Google app. Accuracy on launch was already notably higher than existing consumer products, which generated significant media coverage and rapid adoption.

Expansion and Improvements (2021–2026)

  • 2021: Expanded to iOS via the Google app; language support grew to include Spanish, Portuguese, French, German, Japanese, Korean, and others.
  • 2022: Database expanded significantly; integration with YouTube Music for instant playback of identified songs.
  • 2023: Desktop support improved; feature integrated more deeply into Google Assistant on Android.
  • 2024: Further accuracy improvements through model updates; expanded to 20+ languages.
  • 2025–2026: Continued accuracy refinements; better coverage of regional and non-Western music genres.

How Hum to Search Works: The Technology

The technical challenge Hum to Search solves is genuinely difficult. Your hummed version of a song and the studio recording of that same song are acoustically very different — different instrumentation, different timbre, different production, potentially different key or tempo. A traditional audio fingerprinting system (like Shazam uses) would fail to match them. Hum to Search requires a completely different approach.

Step 1: Audio Capture and Preprocessing

When you begin humming, your device's microphone captures audio and streams it to Google's servers in real time. On the server side, the raw audio is preprocessed: background noise is filtered, the audio is normalized, and it's converted into a format suitable for the neural network.
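
The details of Google's server-side pipeline aren't public, but one standard preprocessing step mentioned above — normalization — is easy to illustrate. The sketch below is a generic peak-normalization routine (not Google's actual code) that scales a quiet waveform to full range so that loudness differences between users don't affect later stages:

```python
import numpy as np

def normalize_audio(samples: np.ndarray) -> np.ndarray:
    """Peak-normalize a raw waveform into the [-1, 1] range."""
    peak = np.max(np.abs(samples))
    if peak == 0:
        return samples  # pure silence: nothing to scale
    return samples / peak

# A quiet hum (low amplitude) is brought up to full scale.
quiet_hum = np.array([0.0, 0.1, -0.2, 0.15, -0.05])
normalized = normalize_audio(quiet_hum)
print(np.max(np.abs(normalized)))  # → 1.0
```

Real systems would also apply noise filtering and resampling, but the principle is the same: standardize the input before feeding it to the neural network.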

Step 2: Melody Extraction

The core of the system is a pitch estimation model — Google uses a variant of their SPICE model — that extracts the fundamental melody line from your humming. This model identifies the sequence of notes you're producing: not as absolute pitches (C4, D4, etc.) but as a relative sequence of intervals (up by a half step, down by a third, same note, etc.).

This relative representation is crucial because it makes the system robust to transposition. If you happen to hum the melody in C major but the original is in E major, the relative intervals are the same, and the system will still find a match.
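
The transposition-invariance idea can be shown in a few lines. This toy sketch (not Google's implementation) represents two versions of the same tune as MIDI note numbers in different keys and reduces both to the same sequence of semitone intervals:

```python
# MIDI note numbers for the same short melody in two different keys.
melody_in_c = [60, 62, 64, 60]   # C4 D4 E4 C4
melody_in_e = [64, 66, 68, 64]   # identical tune, transposed up to E

def intervals(notes):
    """Relative pitch sequence: semitone step from each note to the next."""
    return [b - a for a, b in zip(notes, notes[1:])]

print(intervals(melody_in_c))  # [2, 2, -4]
print(intervals(melody_in_e))  # [2, 2, -4] — identical despite the key change
```

Because both versions collapse to the same interval sequence, a matcher built on this representation never needs to know — or care — what key you hummed in.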

Step 3: Melody Fingerprinting

The extracted melody sequence is then converted into a numerical embedding — a compact vector representation in a high-dimensional space — by a neural network. Crucially, both the query (your humming) and the reference songs in the database go through the same embedding process. The network is trained so that similar melodies, regardless of whether they come from humming or professional recordings, end up near each other in this embedding space.

Think of it like GPS coordinates: if you describe the same location using different methods (address, latitude/longitude, nearby landmarks), a good system maps all descriptions to the same point. Google's melody embedding does the same for music.

Step 4: Database Search and Matching

Your melody embedding is compared against the pre-computed embeddings of all 500,000+ songs in Google's database using a nearest-neighbor search algorithm. This is computationally intensive at scale, but Google uses approximate nearest-neighbor techniques (similar to those in their search infrastructure) to return results in under a second.

The system returns the top matches ranked by similarity score, which is converted into the confidence percentages you see in the results (e.g., “87% match”).
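
As a simplified stand-in for this matching step (the real system uses learned neural embeddings and approximate nearest-neighbor search at much larger scale), the sketch below runs a brute-force cosine-similarity search over a toy database of pre-computed embedding vectors and converts each score into a match percentage. All song names and vector values here are invented for illustration:

```python
import numpy as np

# Toy "database" of pre-computed melody embeddings (hypothetical values).
database = {
    "Song A": np.array([0.9, 0.1, 0.3]),
    "Song B": np.array([0.2, 0.8, 0.5]),
    "Song C": np.array([0.4, 0.4, 0.9]),
}

def cosine(u, v):
    """Cosine similarity: 1.0 means identical direction in embedding space."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def top_matches(query, db, k=3):
    """Rank songs by similarity and express each score as a match percentage."""
    scored = [(title, cosine(query, emb)) for title, emb in db.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [(title, round(100 * score)) for title, score in scored[:k]]

hummed = np.array([0.85, 0.15, 0.35])  # embedding of the user's hum
print(top_matches(hummed, database))   # "Song A" ranks first
```

At a 500,000-song scale, comparing against every entry is too slow, which is why approximate nearest-neighbor indexes are used instead of the brute-force loop shown here.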

The Role of Rhythm

Google's system doesn't rely solely on pitch intervals — it also incorporates rhythmic information. The timing and duration of notes carry significant information about a song's identity. A melody played at the wrong tempo or with incorrect rhythmic patterns will be harder to match, which is why maintaining the original song's rhythm while humming improves accuracy noticeably.
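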
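
Rhythm can be made tempo-invariant with the same trick used for pitch: store ratios between consecutive values instead of absolute values. This illustrative sketch (again, not Google's actual feature set) shows that a hum performed slower than the original still yields the same rhythmic signature:

```python
# Durations (in seconds) of successive notes. Absolute tempo varies between
# singers, so a tempo-invariant representation uses each note's duration
# relative to the previous note — analogous to relative pitch intervals.
original_durations = [0.50, 0.50, 1.00, 0.25]   # song at its real tempo
hummed_durations   = [0.60, 0.60, 1.20, 0.30]   # same rhythm, hummed slower

def duration_ratios(durations):
    return [round(b / a, 3) for a, b in zip(durations, durations[1:])]

print(duration_ratios(original_durations))  # [1.0, 2.0, 0.25]
print(duration_ratios(hummed_durations))    # [1.0, 2.0, 0.25] — tempo-invariant
```

What this representation does punish is getting the rhythmic *pattern* wrong — holding notes too long or clipping them short changes the ratios, which is why matching the original song's rhythm helps more than singing in tune.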

Accuracy: What the Research and Data Show

Google stated at launch that Hum to Search achieved over 80% accuracy on their test set. Independent testing by tech reviewers generally found similar results for mainstream popular music, with some reporting accuracy rates of 85-92% for well-known songs from the last 40 years.

Factors That Improve Accuracy

  • Song popularity: More popular songs are better represented in the database and have been used to train the model more thoroughly.
  • Humming the chorus: Choruses tend to be the most distinctive and most-indexed sections of songs.
  • Correct rhythm: Maintaining the original song's tempo improves matching accuracy significantly.
  • Longer recordings: 10-15 seconds provides more data than 5-6 seconds.
  • Low background noise: Clean audio allows better melody extraction.
  • Distinctive melodies: Songs with unique melodic hooks are easier to identify than songs with generic melodic patterns.

Known Limitations

  • Database coverage: Songs must be in the database. Very obscure tracks, regional folk music, unsigned indie artists, and very old recordings may not be included.
  • Similar melodies: Songs that share melodic patterns (common in folk music, classical compositions, and certain pop genres) may produce incorrect matches.
  • Very short samples: Less than 5-6 seconds of humming often doesn't provide enough data for confident matching.
  • Extremely off-key humming: While the system is robust to moderate pitch errors, very large pitch deviations can confuse the melody extractor.
  • Non-melodic songs: Highly rhythmic tracks with minimal melody (some electronic dance music, certain hip-hop) can be harder to identify from humming alone.
  • Internet dependency: The feature requires an active internet connection; it cannot work offline.

Hum to Search vs. Other Music Recognition Tools

Understanding how Hum to Search fits into the broader music recognition landscape helps clarify when to use it and when other tools might be better.

Feature | Hum to Search | Shazam | SoundHound
Identifies from humming | ✓ Yes | ✗ No | ✓ Yes
Identifies ambient recording | ✗ No | ✓ Yes (primary use) | ✓ Yes
Song database size | 500,000+ | 10 million+ | 10 million+
Free to use | ✓ Yes | ✓ Yes (limited) | ✓ Yes (limited)
Requires perfect pitch | ✗ No | N/A | ✗ No
Needs internet | ✓ Required | Partial offline | ✓ Required

Key Differentiator: The Input Source

The critical distinction is not accuracy or database size — it's what you feed the system. Shazam's 10 million+ song database is much larger than Google's 500,000+, but it's entirely irrelevant when you need to identify a song from memory. For that specific use case — the earworm problem — Hum to Search has no real competitor from a mainstream technology perspective.

For a detailed side-by-side comparison and real-world testing results, our complete hum to search guide covers both tools across multiple categories and use cases.

Availability: Where and How to Access Hum to Search

Platform Availability

Android

  • Google app
  • Google Search widget
  • Google Assistant
  • Requires Android 6+

iOS (iPhone/iPad)

  • Google app (App Store)
  • Requires iOS 12+
  • Not in Safari
  • Not via Siri

Desktop

  • google.com (any browser)
  • Chrome, Firefox, Safari
  • Requires microphone
  • No app needed

Language and Regional Availability

As of 2026, Hum to Search supports 20+ languages including English, Spanish, Portuguese, French, German, Italian, Japanese, Korean, Hindi, and several others. The feature is available in most countries where Google services operate. Language support affects which songs are in the database — English-language tracks generally have better coverage than regional music from smaller markets.

Note that the language setting affects the interface, but you can hum a song in any language regardless of your device's language setting. The melody recognition is language-independent.

Privacy: What Data Does Google Collect?

When you use Hum to Search, your audio is transmitted to Google's servers for processing. Understanding the data implications is important for privacy-conscious users.

What Is Collected

  • Audio recording: Your humming is captured and sent to Google's servers. It's processed in real time.
  • Search activity: If you're signed into a Google account, the song search may be logged in your Google My Activity history.
  • Device data: Standard Google app telemetry (device type, OS version, app version) is collected for diagnostic purposes.
  • Location (optional): Location data may be used to serve region-appropriate results if location permissions are granted.

What Is Not Stored (According to Google)

  • The raw audio recording is not permanently stored after processing
  • Voice data from song searches is not retained for voice profile building (per Google's published policies)

Managing Your Privacy

  • Use Hum to Search without signing into a Google account to avoid activity logging
  • Delete your search history at myactivity.google.com
  • Revoke microphone permissions for the Google app when not in use
  • Review Google's privacy policy for the most current data handling information

Practical Use Cases for Hum to Search

While the most obvious use case is earworm identification, Hum to Search turns out to be useful in a variety of everyday scenarios:

Memory Retrieval

  • Song stuck in your head with no lyrics
  • Melody from a dream you can't place
  • Tune you remember from childhood
  • Theme music from a TV show you watched

Real-Time Identification

  • Background music too quiet to Shazam
  • Someone else humming a song nearby
  • Melody heard briefly before the song was changed
  • Instrumental version without vocals

Music Discovery

  • Identifying a melody to add to a playlist
  • Finding the original version of a cover song
  • Settling music trivia disputes
  • Identifying samples in hip-hop and electronic music

Creative and Professional

  • Musicians checking if a new melody already exists
  • Film/TV researchers identifying library music
  • Music teachers identifying unfamiliar songs
  • Checking for potential copyright issues

The Future of Melody Recognition

Hum to Search represents the current frontier of consumer melody recognition, but the technology is still evolving. Several areas are likely to see significant improvement in the coming years:

  • Larger databases: Coverage of regional music, older recordings, and independent artists will expand as Google indexes more of the world's musical heritage.
  • Better accuracy for imperfect input: As models improve, the tolerance for off-key or imprecise humming will increase.
  • Multimodal search: Future systems may combine humming with visual context (show the AI a screenshot of a movie scene) or text descriptions to narrow identification.
  • Offline capability: On-device neural networks could eventually allow limited recognition without an internet connection.
  • Integration with wearables: As smart glasses and earbuds become more capable, always-on ambient music identification — combined with humming detection — may emerge.

For the most current information on what Hum to Search can do and how to get the most out of it today, the complete hum to search guide is regularly updated to reflect the latest platform changes and feature additions.

Frequently Asked Questions

What is Hum to Search?

Hum to Search is a Google feature that uses artificial intelligence to identify songs from hummed, whistled, or sung melodies. You hum the tune you're thinking of, and Google's neural network matches it against a database of 500,000+ songs to tell you the title and artist. It was launched in October 2020 and is available free on Android, iOS, and desktop.

When did Google launch Hum to Search?

Google officially launched Hum to Search in October 2020, initially for Android devices in English. It expanded to iOS via the Google app shortly after, and has since grown to support 20+ languages. The feature has been continuously improved since launch with expanded database coverage and better accuracy.

How accurate is Hum to Search?

For well-known popular songs, Hum to Search achieves approximately 80-92% accuracy. Accuracy is highest for mainstream pop, rock, and other heavily indexed genres, and lower for very obscure tracks, regional music, or songs with generic melodic patterns. Your humming quality also affects accuracy — maintaining correct rhythm matters more than perfect pitch.

How does Google Hum to Search work technically?

Hum to Search uses a machine learning model based on Google's SPICE pitch estimation research. It extracts the relative melodic pattern from your humming and converts it into a numerical embedding. Both your hummed input and all songs in the database are mapped into the same embedding space, and the system finds the songs whose embeddings are closest to yours — regardless of whether the source is professional recording or imperfect humming.

Does Hum to Search require perfect pitch?

No. The system analyzes relative pitch intervals (whether each note is higher or lower than the previous one) and rhythm, not absolute pitch. You can be noticeably off-key and still get accurate results as long as the melodic shape is approximately right.

What's the difference between Hum to Search and Shazam?

The fundamental difference is that Shazam requires the actual song to be playing and records it as ambient audio, while Hum to Search works from your own voice humming from memory. They solve different problems: Shazam is best for identifying songs currently playing around you, while Hum to Search is best for identifying songs stuck in your head. Shazam has a larger database (10M+ vs 500K+), but this is irrelevant for the earworm use case.

What platforms support Hum to Search?

Hum to Search is available on Android (Google app and home screen widget), iPhone and iPad (via the Google app from the App Store), and desktop computers (google.com in any modern browser). It requires Android 6.0+ or iOS 12+. It is not available natively in Safari on iPhone or via Siri.

Conclusion: A Genuine Technological Achievement

Hum to Search is one of those rare features that solves a genuinely universal problem in an elegant way. The challenge of matching an imperfect, off-key human hum to a specific song out of 500,000+ possibilities is a hard machine learning problem, and Google's solution — building a shared embedding space for both human voice and professional recordings — is a thoughtful and effective approach.

What makes it remarkable isn't just the accuracy, but the accessibility. No musical training required, no lyrics needed, no song playing nearby — just hum what's in your head. The democratization of music identification from an active, present sound to a memory-based search represents a meaningful leap forward.

Whether you're trying to identify an earworm, settle a music trivia dispute, or find a half-remembered song from years ago, Hum to Search is the right tool for the job. For practical instructions, tips for maximizing accuracy, and platform-specific guidance, the complete hum to search guide has everything you need to get started.