— Written by
Mikkel Dengsøe
in Case Study

Building reliable customer acquisition models at scale at Insurify

By combining anomaly monitors, data products, and dbt test insights, Insurify overcame alert fatigue and built more robust monitoring.

Data at Insurify

Insurify is the largest online insurance marketplace in the United States, using proprietary algorithms and real-time data from multiple carriers to match users with the right policy at the right price.

With much of its volume driven by performance marketing, data is the core of Insurify’s business model. Every decision, from which channels to buy traffic on, to how much to bid, to which products to show, is powered by machine learning models that run on top of their analytics platform.

"Data is fundamentally a model of the real world. For Insurify, there is no factory and no physical product. The company is the web app and the user journey through it. If the data is accurate, our entire representation of the business is accurate." – Isaac Santelli, Analytics Engineering Manager, Insurify

When the data is wrong, it can cost millions of dollars. For example:

  • An LTV model over-predicting policy value due to incorrect data
  • Conversions breaking and Google Ads automatically dialing down bidding

The top priority for Isaac and the analytics engineering team is to get the data right, and if something goes wrong, know about it right away.

The Challenge

As Insurify scaled its data platform to thousands of dbt models and source tables, the team faced two key challenges:

  • Alert fatigue: Hundreds of failing tests and noisy monitors made it hard to focus on the few issues that truly mattered
  • Operational risk: With so many models feeding ad spend and CLTV calculations, missing a critical failure could cost six figures in days

"Once you hit 20, 30, 40 test failures, it stops being a clear signal and becomes noise. Many failures are 'problematic but not severe', so we need a middle ground where we neither remove the validation nor let it fail forever, but instead quickly see which ones really matter."

Monitoring in addition to dbt tests

Insurify’s data team relies on dbt tests for the basics: enforcing not_null and unique constraints, and making sure joins produce the expected row counts. But with thousands of models and growing, this level of testing alone is not enough.

They use SYNQ to deploy monitors that track table volumes, column distributions, and other key statistics over time, surfacing anomalies that cannot be captured by simpler tests. They also run end-to-end integration tests to confirm data validity in dbt. Together, this lets the team spot unexpected shifts in metrics that could silently degrade model performance, such as a drop in lead quality or an inflated conversion rate caused by duplicate records.
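
SYNQ builds these anomaly models automatically, but the intuition behind a volume monitor can be sketched with a simple z-score over recent row counts. This is an illustration only, not SYNQ's implementation; real monitors account for seasonality and trend:

```python
from statistics import mean, stdev

def is_volume_anomaly(history, today, z_threshold=3.0):
    """Flag today's row count if it deviates more than z_threshold
    standard deviations from recent history. (Illustrative sketch;
    production monitors use richer models than a plain z-score.)"""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > z_threshold

# A stable table loading ~10k rows/day, then a sudden drop:
history = [10_120, 9_980, 10_050, 10_210, 9_940, 10_080, 10_010]
print(is_volume_anomaly(history, 9_950))  # ordinary day -> False
print(is_volume_anomaly(history, 4_200))  # sudden drop -> True
```

The same idea extends to column-level statistics: track a metric such as null rate or distinct count over time and alert when it leaves its learned band, rather than hand-coding a fixed threshold.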

Deployment rules in the SYNQ platform make this process scalable. Instead of creating monitors one by one, Insurify applies rules to entire data products, automatically deploying monitors to every table that meets their criteria. This ensures consistent coverage and avoids leaving critical tables unmonitored.
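
Conceptually, a deployment rule is a predicate over table metadata plus a set of monitors to attach wherever the predicate matches. A toy sketch of that idea (table names, rule shape, and monitor names are all hypothetical, not SYNQ's API):

```python
def deploy_monitors(tables, rule):
    """Apply a deployment rule: every table matching the rule's
    predicate gets the rule's monitors. (Conceptual sketch only.)"""
    return {
        t["name"]: rule["monitors"]
        for t in tables
        if rule["matches"](t)
    }

# Hypothetical metadata: marts belong to a data product, staging does not.
tables = [
    {"name": "mart_marketing_spend", "data_product": "marketing_spend"},
    {"name": "mart_quotes", "data_product": "quotes"},
    {"name": "stg_raw_events", "data_product": None},
]
rule = {
    "matches": lambda t: t["data_product"] is not None,
    "monitors": ["volume", "freshness", "column_distribution"],
}
coverage = deploy_monitors(tables, rule)  # both marts covered, staging skipped
```

The benefit of the rule-based approach is that a new table meeting the criteria is monitored from day one, with no manual setup step to forget.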

"Our key data marts all have monitors deployed on the fields that matter. We do not want to hand-code thresholds. We want the system to tell us when something looks off."

This combination of dbt tests and behavioral monitoring means the team can catch both deterministic failures and unexpected shifts in the data before they reach production models.

Optimizing tests and reducing noisy alerts

With thousands of dbt tests in play, Insurify needed a way to separate what matters from what can wait. The team classifies each failing test by its trend, identifying whether it is a new failure that must be triaged immediately, a continuously failing test that can be tracked in the backlog, a degrading test that is slowly getting worse and needs monitoring, or a recently resolved test that no longer needs attention.
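
The triage buckets above can be sketched as a small classifier over each test's recent run history. This assumes the history is available as a list of pass/fail flags; the function and category names are illustrative, not SYNQ's API:

```python
def classify_test(history):
    """Classify a dbt test from its run history (oldest first,
    True = that run failed) into the triage buckets described above."""
    if not history[-1]:
        return "resolved" if any(history) else "passing"
    if all(history):
        return "continuously-failing"   # track in the backlog
    recent = history[-3:]
    if len(recent) == 3 and all(recent):
        return "degrading"              # failing more and more often
    return "new-failure"                # just broke: triage immediately

runs = {
    "unique_orders_id": [False, False, False, True],  # new failure
    "not_null_user_id": [True, True, True, True],     # known backlog item
    "row_count_leads": [False, True, True, True],     # degrading
    "accepted_values_ch": [True, True, False, False], # resolved
}
triage = {name: classify_test(h) for name, h in runs.items()}
```

Sorting alerts by bucket, with new and degrading failures first, is what lets a high test count stay actionable instead of becoming noise.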

By focusing first on new and degrading tests, the team prevents alert fatigue and keeps attention on the failures most likely to impact customer acquisition and revenue reporting. The team also built an internal tool that extracts dbt artifacts and combines them with SYNQ monitor data via API, creating a complete view of data health that is exported into their warehouse.
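
dbt records test outcomes in its `run_results.json` artifact after each invocation, so the first stage of a tool like this is a straightforward extraction. A minimal sketch of that stage, assuming the standard artifact schema (`results[].unique_id`, `results[].status`); the merge with SYNQ monitor data via API is omitted:

```python
import json

def failing_tests(artifact):
    """Return unique_ids of failed or errored tests from a parsed
    dbt run_results.json artifact."""
    return [
        r["unique_id"]
        for r in artifact["results"]
        if r["status"] in ("fail", "error")
    ]

# Typical usage: artifact = json.load(open("target/run_results.json"))
# Abbreviated example of the artifact shape:
sample = {
    "results": [
        {"unique_id": "test.analytics.not_null_leads_id", "status": "pass"},
        {"unique_id": "test.analytics.unique_orders_id", "status": "fail"},
        {"unique_id": "test.analytics.row_count_quotes", "status": "error"},
    ]
}
```

Loading these extracts into the warehouse alongside monitor results is what gives the team a single queryable view of data health.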

This approach allows them to run a high volume of tests without overwhelming the team, making sure that critical issues are never buried under false positives or low-impact alerts.

Data products as the organizing layer

To make sense of their large and growing data estate, Insurify organizes key data assets into data products that reflect their core business processes. These include data products focused on marketing spend, real-time bidding, quotes, and user behavior.

Each data product is assigned a priority (P1, P2, P3) based on its criticality to the business and has a clear owner such as Marketing Analytics, Product Analytics, or DMS Owners.

When an issue is detected, the priority and ownership metadata allow the team to quickly assess whether it is business-critical and route the alert to the right person. This cuts response time and helps stakeholders trust that the most important issues are being handled first.

"Grouping data into products lets us see at a glance which part of the business is affected, who owns it, and whether we need to escalate. It keeps everyone focused on what really matters."

What’s Next

Isaac’s long-term vision is to combine monitor and test results into a single view and use AI to surface the most important issues automatically.

"What I want is to walk into work and be told there are 100 issues, but here are the five that matter and here is why. Stack-ranked, with context. That is the future."

The team is experimenting with SYNQ Scout to get closer to this reality, using it to summarize incidents and provide context so engineers and analysts can resolve issues faster.
