GA4 has six Exploration types and a Library tab full of standard reports. Most tutorials treat all of them as equally useful. They aren't.
I've been building custom reports for clients since 2014. Across the 120-plus containers I've worked on, three Explorations come out almost every week. Two show up once a year. One has never earned a spot in a deliverable I actually sent out.
This piece is not a tour of every button. It's a filter. Which reports are worth learning, which ones eat an afternoon and give nothing back, and which belong in a different tool entirely.
Five real configs follow, plus the limits Google buries in paragraph four of a help page nobody opens.
- Library reports are governed and shared; Explorations are personal and auto-saved for seven days.
- Three Explorations do real work: Free Form, Funnel, Segment Overlap. The other three are traps.
- Hard caps: 500 rows per cell, 500k sampled events, 10M events per date range. Hit any of them, move to BigQuery.
- Looker Studio wins when you need scheduled delivery, blending, or custom calculations.
- Report-vs-Ads mismatch is almost always attribution model, timezone, conversion counting, or dedup, not a bug.
Custom reports in GA4 are actually two different things
Before the types, the surfaces. GA4 lets you build custom reports in two places, and people conflate them constantly.
Library (the "Reports" tab). These are governed reports. You build them, publish them, and they appear in the left nav for everyone with access. They show up next to the default reports like Acquisition and Engagement. A Library report is the right place when you want the whole team looking at the same numbers by default.
Explore (the "Explore" tab). This is a sandbox. By default, what you build here is visible only to you. You can share a link, or set an Exploration's visibility to "Shared with editors and above," but most people never do. Explore auto-saves for seven days and keeps your work if you come back. Use Explore for ad-hoc investigation, one-off analyses, testing a segment before deciding if it deserves a permanent Library report.
The rule I use: if I'm going to look at this report again in six weeks, it belongs in Library. If it's answering a question this Tuesday and I don't care about it Wednesday, it stays in Explore.
Most of this article is about Explore, because that's where the six Exploration types live and where the real customization happens. Library reports are useful but thin on options.
The 6 Exploration types, ranked by how often I actually build them
GA4 ships six Exploration templates when you open a blank Explore: Free Form, Funnel, Path, Segment Overlap, User Explorer, Cohort. The official docs give each one equal billing. My usage is nowhere near even.
1. Free Form: around 70% of everything I build
If you only learn one Exploration, learn this one. Free Form is a pivot table. Rows, columns, values, segments, filters. Almost any "show me X broken down by Y for users who did Z" question answers itself here.
Common uses I reach for:
- Revenue by source / medium, one view per attribution model, side by side.
- Events by device category, filtered to users who completed a specific conversion.
- Custom dimension drill-downs (plan type, logged-in state, country) that the standard reports bury.
When Free Form struggles: multi-step journey questions (use Funnel), overlap-between-segments questions (use Segment Overlap), and anything that needs sequence sensitivity (which is rare, and usually easier in BigQuery).
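If the pivot-table framing is new to you, here's the rows/values idea in plain Python. The sources, devices, and revenue figures are invented for illustration; the point is that Free Form is exactly this aggregation, with a UI on top:

```python
# A minimal sketch of what Free Form does: group event rows by dimensions,
# sum a metric. All sample data below is made up.
from collections import defaultdict

rows = [
    ("google/cpc", "mobile", 40.0),
    ("google/cpc", "desktop", 100.0),
    ("email", "mobile", 25.0),
    ("google/cpc", "mobile", 35.0),
]

pivot = defaultdict(float)
for source, device, revenue in rows:
    # rows = source/medium, columns = device category, values = revenue
    pivot[(source, device)] += revenue

for key in sorted(pivot):
    print(key, pivot[key])
```

Every "X broken down by Y" report is a variation of that loop; segments and filters just shrink the input rows.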
2. Funnel: whenever a journey has more than two steps
The second most honest report. You define a sequence of events, GA4 counts how many users move from step to step, and you see where they drop off. Where Funnel shines:
- Checkout flows. Add-to-cart, begin-checkout, shipping, payment, purchase.
- Onboarding flows in SaaS. Signup, confirm email, first action, aha moment.
- Lead funnels. Landing page view, form start, form complete, contact event.
Closed funnels matter more than the label suggests. A closed funnel requires step 1 before step 2 counts, which is how most stakeholders read a funnel anyway. An open funnel is more forgiving and usually inflates conversion rates in ways that confuse the reader. I default to closed.
Where Funnel quietly lies: if your event taxonomy is inconsistent across devices or platforms, the funnel will drop users that actually converted but did step 3 as a different event name on mobile. I caught this once when a Polish DTC brand had begin_checkout on web and checkout_started in their mobile app. Funnel said mobile users never checked out. They did. Event names just didn't match.
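To show how much a name mismatch distorts a funnel, here's a simplified sketch of closed-funnel counting with an alias map applied first. The alias map, user data, and helper names are hypothetical, and this is not GA4's exact algorithm, just the shape of the problem:

```python
# Sketch: normalize platform-specific event names before counting a closed funnel.
# ALIASES and the sample users are illustrative, not real client data.
ALIASES = {"checkout_started": "begin_checkout"}  # mobile app name -> canonical web name

def normalize(event_name: str) -> str:
    return ALIASES.get(event_name, event_name)

def closed_funnel(user_events: dict, steps: list) -> list:
    """Count users reaching each step, requiring the steps in order (closed funnel)."""
    counts = [0] * len(steps)
    for events in user_events.values():
        seen = [normalize(e) for e in events]
        pos = -1
        for i, step in enumerate(steps):
            try:
                pos = seen.index(step, pos + 1)  # step must occur after the previous one
            except ValueError:
                break
            counts[i] += 1
    return counts

users = {
    "web_user": ["view_item", "add_to_cart", "begin_checkout", "purchase"],
    "app_user": ["view_item", "add_to_cart", "checkout_started", "purchase"],
}
print(closed_funnel(users, ["add_to_cart", "begin_checkout", "purchase"]))
```

Without the alias map, app_user drops out at begin_checkout, exactly the "mobile users never checked out" artifact described above.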
3. Segment Overlap: niche but the payoff is real
Three segments max, overlap visualized as a Venn diagram. Not something I build weekly. But when I need it, nothing else answers the question.
The cases where I use it:
- "Does the paid social audience overlap with the email list?" (If yes, the brand is double-paying.)
- "Do high-LTV customers also convert through the brand campaign?" (If yes, that campaign's ROAS is inflated because they were buying anyway.)
- "Are weekday converters also weekend browsers?" (Useful for media planning.)
Segment Overlap is the only Exploration that answers "are these two audiences the same people," and that question is worth a lot more than people realize.
4. Path Exploration: looks cool, mostly useless
I know this is a hot take. I've tested Path Exploration on a dozen real clients and the output almost always either repeats what Funnel already told me, or surfaces a path pattern that a stakeholder misreads.
The issue is it shows you what happened, not why. A path like page_view → scroll → page_view → click → purchase is technically a journey. It's also completely uninterpretable. You need event sequences with context, and GA4's Path view doesn't give you context.
When Path could help: "show me the top 5 events users do before leaving a specific page." That's a narrow use. For anything broader, I go to BigQuery and write actual sequence SQL.
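That narrow question is also easy to answer yourself on exported event rows. A sketch, assuming per-session event lists in time order (the session data and function name are invented; the real BigQuery export schema looks different):

```python
# Sketch: "top N events users do before leaving a given page", from a flat
# event log. Sessions and page paths below are hypothetical examples.
from collections import Counter

def events_before_exit(sessions: list, page: str, n: int = 5):
    """sessions: per-session lists of (event_name, page_path) in time order.
    Counts the event immediately preceding each session's final hit on `page`."""
    counter = Counter()
    for hits in sessions:
        # The session "left from" the page if its last hit happened there.
        if hits and hits[-1][1] == page and len(hits) >= 2:
            counter[hits[-2][0]] += 1
    return counter.most_common(n)

sessions = [
    [("page_view", "/pricing"), ("scroll", "/pricing")],
    [("page_view", "/"), ("click", "/pricing"), ("scroll", "/pricing")],
    [("page_view", "/"), ("page_view", "/pricing")],
]
print(events_before_exit(sessions, "/pricing"))
```

Anything with real sequence conditions ("did A, then B, but not C in between") belongs in SQL over the raw export, not in Path.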
5. User Explorer: only for debugging, never for reporting
User Explorer lets you look at individual user streams. Every event, every timestamp, every parameter, for one anonymous user ID at a time. Incredibly useful when you need to verify "is this event firing with the right parameters for a real user?" or "what does a complete session look like end to end?"
Useless when you need aggregate insight. This is a debugging tool, full stop. I open User Explorer maybe once every two weeks when validating a new event. I never put it in a deliverable.
6. Cohort Exploration: the trap
Cohort wants to tell you about retention. Group users by when they first did something, track what they do after. In theory, perfect for retention analysis.
In practice, the 500-row cap and the 500,000-event sampling threshold destroy this Exploration the moment you have meaningful data. I've had two clients try to run a 12-month retention cohort in Explore. Both hit the limit. Both had to move to BigQuery. After the second one, I stopped recommending Cohort Exploration at all.
If you have low traffic and short cohort windows, Cohort might work. If you have either volume or historical depth, go to BigQuery. See my GA4 + BigQuery setup guide for the query patterns.
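For a sense of what the BigQuery-side computation looks like, here's a monthly retention cohort over exported (user, month) activity pairs. The rows and schema are illustrative, not the actual export format; this is the calculation that Explore's caps choke on but raw data handles trivially:

```python
# Sketch: retention by first-activity month. Rows are (user_id, "YYYY-MM")
# pairs; all sample data is made up for illustration.
from collections import defaultdict

def retention(rows: list) -> dict:
    months_by_user = defaultdict(set)
    for user, month in rows:
        months_by_user[user].add(month)
    all_months = sorted({m for _, m in rows})
    cohorts = defaultdict(set)
    for user, months in months_by_user.items():
        cohorts[min(months)].add(user)  # cohort = month of first activity
    out = {}
    for cohort_month, users in sorted(cohorts.items()):
        start = all_months.index(cohort_month)
        out[cohort_month] = [
            # share of the cohort still active in each subsequent month
            round(sum(1 for u in users if m in months_by_user[u]) / len(users), 2)
            for m in all_months[start:]
        ]
    return out

rows = [("a", "2024-01"), ("b", "2024-01"), ("a", "2024-02"), ("c", "2024-02")]
print(retention(rows))
```

No row cap, no sampling: the cohort is as long and as large as the export.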
Three Explorations with actual configs (copy-paste ready)
Enough ranking. Here are the three configs I build most often, with exact field setups.
Config 1: Revenue by first-source vs last-source, side by side (Free Form)
The single most valuable custom report I build for paid-ads clients. You get both attribution views in one screen, the stakeholder sees the gap, and nobody has to switch models back and forth.
Setup:
- Technique: Free Form
- Rows: First user source / medium, with Session source / medium as a second row dimension
- Values: Purchase revenue (or your conversion-value metric), Purchases, Transactions
- Filter: set the date range to match Google Ads (not trailing 28 days, or you'll get a mismatch)
- Segment: optional. I add a "Direct + Organic excluded" segment to isolate paid performance.
Read it like this: the first dimension tells you who originally acquired the user, the second tells you the session that converted. When those match, last-click and first-click agree. When they don't, you see your attribution gap directly.
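The gap the report exposes is just two revenue sums keyed by different dimensions. A sketch with invented conversion rows, to make the reading concrete:

```python
# Sketch: sum conversion revenue twice -- once by first-user source, once by
# converting-session source. All figures below are made up for illustration.
from collections import Counter

conversions = [
    {"first": "google/cpc", "last": "google/cpc", "revenue": 120.0},
    {"first": "google/cpc", "last": "(direct)/(none)", "revenue": 80.0},
    {"first": "facebook/paid", "last": "google/cpc", "revenue": 50.0},
]

first_touch, last_touch = Counter(), Counter()
for c in conversions:
    first_touch[c["first"]] += c["revenue"]
    last_touch[c["last"]] += c["revenue"]

for src in sorted(set(first_touch) | set(last_touch)):
    print(f"{src}: first={first_touch[src]:.0f}, last={last_touch[src]:.0f}")
```

A channel whose first-touch total dwarfs its last-touch total is acquiring users that other channels get credit for converting; that difference is the attribution gap.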
Config 2: Checkout funnel with segment overlay (Funnel)
The most common deliverable for ecommerce clients. Shows drop-off plus a segment cut so you can see if mobile users drop differently than desktop.
Setup:
- Technique: Funnel exploration
- Steps (closed funnel): view_item_list (or view_item if you skip list views) → add_to_cart → begin_checkout → add_shipping_info → purchase
- Segment comparison: Mobile traffic vs Desktop traffic (build both as segments first)
- Breakdown: Device category
- Date range: trailing 28 days, compared against the previous 28 days
What to look for: if one segment drops ten percentage points harder than the other at a specific step, you have a usability bug, not a marketing problem. The shipping step is the most common offender.
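The "ten percentage points harder" check is simple arithmetic on the step counts. A sketch with invented funnel counts (function names and the 0.10 threshold are my own, not GA4's):

```python
# Sketch: flag funnel steps where one segment's step-to-step conversion
# trails the other by more than a threshold. Step counts are made up.
def step_rates(counts: list) -> list:
    return [counts[i + 1] / counts[i] for i in range(len(counts) - 1)]

def flag_gaps(mobile: list, desktop: list, threshold: float = 0.10):
    gaps = []
    for i, (m, d) in enumerate(zip(step_rates(mobile), step_rates(desktop))):
        if abs(m - d) > threshold:
            gaps.append((i + 1, round(m - d, 2)))  # step index, mobile minus desktop
    return gaps

mobile = [1000, 400, 200, 90]   # add_to_cart -> begin_checkout -> shipping -> purchase
desktop = [800, 360, 250, 140]
print(flag_gaps(mobile, desktop))
```

In this made-up data, mobile trails desktop badly at the shipping and purchase steps, the usability-bug signature described above.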
Config 3: Weekend vs weekday converter overlap (Segment Overlap)
Media planning context. Run this once a quarter; it will surprise you more often than you expect.
Setup:
- Technique: Segment overlap
- Segment 1: Users with a conversion event where Day of week is Saturday or Sunday
- Segment 2: Users with a conversion event where Day of week is Monday through Friday
- Segment 3 (optional): Users from a specific campaign (paid media of your choice)
What it tells you: how much of your "weekend audience" is actually your weekday audience in disguise. If the overlap is above 70%, your weekend media spend is mostly reaching people who would have converted anyway. If it's below 30%, you have genuine incremental audiences on weekends worth scaling.
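The percentage behind that reading is overlap relative to the smaller segment, which is how I read the Venn in practice. A sketch with invented user-ID sets:

```python
# Sketch: overlap as a share of the smaller segment. User IDs are illustrative.
def overlap_pct(a: set, b: set) -> float:
    return len(a & b) / min(len(a), len(b))

weekend = {"u1", "u2", "u3", "u4"}
weekday = {"u2", "u3", "u4", "u5", "u6", "u7"}
print(f"{overlap_pct(weekend, weekday):.0%}")  # 3 of 4 weekend converters also convert on weekdays
```

In this example the overlap is 75%, above the 70% line, so the hypothetical weekend spend is mostly reaching weekday converters.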
The Explore limits nobody warns you about
Let's get the uncomfortable part out of the way. GA4 Explore is not a replacement for a data warehouse. It has hard caps, and once you hit them, no config change will save you.
The caps I've seen clients hit:
- 500 rows per cell in Free Form and Funnel. When you have a dimension with 600 unique values, you see 500 and the rest get bucketed into "(other)".
- 500,000 sampled events per Exploration. Above this, GA4 samples your data. The UI will tell you with a yellow warning triangle. Once sampling kicks in, your precise numbers stop being precise.
- 10 million events for the date range. Above this threshold, sampling starts regardless of Exploration type. For a moderately trafficked ecommerce site, this is about a 90-day range.
- 14-day freshness on some user-scoped dimensions and metrics. Anything older than 14 days that relies on user properties updated after the fact will not populate correctly.
- Cohort size in Cohort Exploration is limited to what fits in the event sample. On clients with more than a few hundred thousand monthly events, cohorts beyond 30 days are effectively useless in Explore.
Google's official documentation on Explore limits covers these, but they're buried. If you hit any of them, the fix is not tweaking the Exploration. The fix is moving the question to BigQuery where none of these caps exist.
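A quick pre-flight check before building anything: given your average daily event volume, how long a date range fits under the 10-million-event threshold described above? The daily volume below is a made-up example:

```python
# Sketch: estimate the longest unsampled date range from daily event volume.
# The cap is the 10M-events-per-query threshold discussed in the article.
SAMPLING_CAP = 10_000_000

def max_unsampled_days(daily_events: int, cap: int = SAMPLING_CAP) -> int:
    return cap // daily_events

print(max_unsampled_days(110_000))  # a moderately trafficked ecommerce site
```

At roughly 110k events a day, that works out to about 90 days, which matches the range where sampling typically kicks in for sites at that volume.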
GA4 Explore vs Looker Studio: when to switch
Both are free. Both connect to GA4. Neither is strictly better. They answer different questions.
Use GA4 Explore when:
- You're investigating something right now, by yourself, and nobody else needs to see it.
- You want native GA4 segments, including behavioral segments built from event sequences.
- You need a User Explorer deep dive on a specific anonymous user.
- The question fits within the limits above.
Use Looker Studio when:
- You need to email this report to stakeholders on a schedule.
- You're blending GA4 data with Google Ads, Search Console, or a Google Sheet.
- You need custom calculations GA4 doesn't support natively (weighted averages, custom ratios, conditional aggregations).
- The stakeholder viewing the report is not a GA4 user and you don't want to grant them property access.
- You want filter controls the viewer can change interactively.
Use neither when:
- You need guaranteed-accurate numbers for compliance, legal, or billing. GA4 is modeled, sampled, and eventually consistent. Go to BigQuery raw export.
- You need historical analysis beyond Explore's sampling caps.
- You're reconciling against a CRM or Stripe. Again, raw data.
For the Looker Studio deep dive, see my Looker Studio guide.
"My custom report doesn't match Google Ads": a 90-second diagnostic
Nine out of ten times someone tells me their GA4 custom report disagrees with Google Ads, the report is fine. The setup has drift. Run this diagnostic before you rebuild anything.
1. Check the attribution model on both sides. GA4 defaults to data-driven since late 2023. Google Ads defaults vary by account. A GA4 data-driven report will almost always disagree with a Google Ads last-click view. Set the same model on both sides: data-driven on both, or last-click on both. If the models don't match, the numbers can't.
2. Check the date range timezone. Google Ads runs in the account timezone. GA4 runs in the property timezone. If your property is UTC and your Ads account is Europe/Warsaw, a 24-hour window actually spans 25 hours on one side. For short date ranges, this matters.
3. Check the conversion counting method. Google Ads counts conversions by default as "every" (all post-click events within the window). GA4 counts by "once per event" or "once per session" depending on your configuration. These don't agree on multi-purchase users.
4. Check for de-duplication. If you're also using Meta Conversions API or some server-side tagging with deduplication, the conversion event might fire to one platform but not the other. Your custom report is counting real events. Ads just isn't seeing the same ones.
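The timezone mismatch in step 2 is easy to see in code. A sketch using Python's standard zoneinfo module (available from 3.9; on some systems the tzdata package is needed), with an arbitrary date:

```python
# Sketch: the same calendar day starts at different UTC instants in different
# timezones, so "yesterday" in Google Ads and "yesterday" in GA4 are shifted windows.
from datetime import datetime
from zoneinfo import ZoneInfo

day_start_utc = datetime(2024, 3, 1, 0, 0, tzinfo=ZoneInfo("UTC"))
day_start_waw = datetime(2024, 3, 1, 0, 0, tzinfo=ZoneInfo("Europe/Warsaw"))

offset_hours = (day_start_utc - day_start_waw).total_seconds() / 3600
print(offset_hours)  # "March 1" begins 1 hour earlier in Warsaw than in UTC
```

An hour of conversions lands in different days on the two platforms; over a short range that's enough to make the totals visibly disagree.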
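Step 3's counting mismatch is also pure arithmetic. A sketch with invented (user, session, event) tuples showing how "every" and "once per session" diverge on multi-purchase users:

```python
# Sketch: the same event log, two counting methods, two totals.
# The (user, session, event) tuples are illustrative.
events = [
    ("u1", "s1", "purchase"),
    ("u1", "s1", "purchase"),  # second purchase in the same session
    ("u1", "s2", "purchase"),
    ("u2", "s3", "purchase"),
]

every = sum(1 for _, _, e in events if e == "purchase")
once_per_session = len({(u, s) for u, s, e in events if e == "purchase"})

print(every, once_per_session)  # same data, different totals
```

Neither number is wrong; they answer different questions, which is exactly why the two platforms can both be "right" and still disagree.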
If all four check out and numbers still disagree, then the custom report needs work. Most of the time, they don't all check out. The agency-audit question I wrote about in the GA4 Audit checklist starts here too.
Frequently asked questions
Can I recreate my old Universal Analytics custom reports in GA4?
Sometimes. UA custom reports were built on sessions and pageviews. GA4 is event-based. Anything that was a "pageview by X dimension" report translates directly to a Free Form Exploration with Views as the metric. Anything that used session-scoped custom dimensions gets tricky; you'll need to recreate those as event-scoped or user-scoped definitions in GA4 first. Fully one-for-one replication is rarely possible. A close-enough version usually is.
Why can't I save my Exploration so the team sees it?
By default, Explorations are private. To share, open the Exploration, click the overflow menu, select "Share this Exploration." That switches visibility to "Anyone with Editor access to this property." They still need GA4 access to the property to see it. If they don't have that, build a Library report instead, or export the Exploration to Looker Studio.
What's actually the difference between "Reports" and "Explore"?
Reports (the tab) are governed: everyone with property access sees the same thing, there's a publishing step, and customization is limited. Explore (the tab) is a sandbox: private by default, six Exploration templates, much more flexibility, but with the caps covered above. Reports is for "everyone agrees on this view." Explore is for "I need to check something specific."
Why does my Exploration show "(other)" for some dimension values?
You've hit the 500-row cap per cell. GA4 keeps the top 500 values and buckets the rest. If you need all the values, either narrow the date range or the segment to reduce uniqueness, or move the analysis to BigQuery.
Can I export GA4 Explore reports?
CSV, PDF, Google Sheets, TSV. Open the Exploration, click the download arrow in the top right. Exports are still limited to what's visible in the Exploration, which means they also inherit the 500-row cap.
Is there a template library for GA4 custom reports?
Yes. When you open a new Exploration, GA4 shows a "Template gallery" at the top with pre-built templates for common cases (Acquisition, Engagement, Path, Form interactions, and a few others). They're decent starting points but none of them are configured exactly how I'd build them for a client. Use the gallery to start, customize from there.
Next step
If you're building custom reports because you suspect your default GA4 data isn't telling you the whole story, you're halfway right. The reports are fine. The question is whether the underlying data is. That's what a monthly GA4 audit actually checks.
Start with the free automated GTM audit if you want a quick read on the tracking side. For full GA4 monitoring with monthly written reports, GA4 Monitoring & Config is 150 EUR per month. If you're running 5,000 EUR or more in paid ads monthly, the GTM + GA4 Bundle at 250 EUR per month covers both containers and catches drift before it contaminates any custom report you build.
Custom reports are only as accurate as the events feeding them. Fix the events first, then build anything you want.
Need reliable GA4 data?
Monthly GA4 Monitoring, full audit in month one, drift caught every month after. Written report, no calls.
See GA4 Monitoring