The "finished" trap
READ TIME - 5 minutes ⏳
There is a moment in almost every Power BI project that quietly gets treated as the finish line, and it looks like this: the visuals are in place, the filters work, the data refreshes on schedule, and someone pastes a report link into a Teams message with a short note that says something like "here it is, let me know if you have questions."
And then the project moves on.
What has actually happened in that moment is that the builder has finished. The user hasn't started yet. And the gap between those two events, which can be hours or days or weeks, is one that almost no one in analytics explicitly designs for.
The wrong definition of done
When analysts say a report is finished, they tend to mean a specific and very technical set of things: the visuals render correctly, the numbers reconcile with the source, the layout holds up on a standard monitor. These things matter and are genuinely necessary to get right, but they describe a report that is technically complete, not one that is actually usable.
The bar for usable is different. A report is only finished, in any meaningful sense, when the person it was built for can operate it independently and confidently, without needing a walkthrough from the person who built it. That distinction sounds simple, but in practice it shifts almost everything: what you label, how you organise pages, what context you embed in the canvas, how you design the first thing a user sees when they open the file.
The walkthrough is the tell. If a report consistently requires a live explanation before it makes sense to its audience, that is not a communication failure on the user's part. It is a design signal, and what it signals is that the report has not carried enough of its own reasoning into the room.
What gets left in the builder's head
Every analyst who builds a report carries an invisible model of the data in their head, and it is extraordinarily detailed. They know why the revenue figure on this page does not match the one in the finance report, because they were in the meeting where that distinction was decided. They know that the spike in March is a data anomaly tied to a one-off reconciliation, not a real trend. They know that this page was designed for the operations team and that one for leadership, and that reading them the wrong way around produces a misleading picture.
None of that knowledge survives the handoff, and not because no one tried to communicate it. It disappears because it was never embedded into the report itself: it lived in conversations and onboarding sessions and informal explanations that fade the moment the person who gave them is no longer available. What remains in the file is a set of visuals that look complete and carry almost none of the reasoning that made them meaningful.
The user is left to reconstruct a model of the data they were never actually given.
Three questions a finished report should answer on its own
Before marking any report as done, I find it useful to ask three things that the user should be able to answer without any help from the builder.
The first is whether they understand what they are actually looking at: not just reading the chart, but knowing what question the page is answering, why that question was chosen over other possible ones, and what the data's scope and limitations are. A chart titled "Sales by Region" does not do this. A chart titled "Regional sales vs. target, current quarter only" comes considerably closer, because it removes the assumptions a user would otherwise have to make entirely on their own.
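To make that concrete in Power BI terms, one pattern is a dynamic title measure wired into the visual's title through conditional formatting, so the scope travels with the chart instead of living in a walkthrough. Here is a minimal sketch, assuming a 'Date' table with a text [Quarter Label] column; all of the names are illustrative, not from any specific report.

    -- Sketch only: a title measure that states the visual's scope explicitly.
    -- Assumes a 'Date' table with a text column [Quarter Label], e.g. "Q2 2025".
    Regional Sales Title =
    VAR CurrentQuarter =
        SELECTEDVALUE ( 'Date'[Quarter Label], "all quarters" )
    RETURN
        "Regional sales vs. target, " & CurrentQuarter & " only"

Bound to the title via the conditional formatting (fx) option, the stated scope then follows whatever the user has filtered to, which is exactly the kind of context that otherwise stays in the builder's head.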
The second is whether they know what to do when something looks wrong. If a number drops unexpectedly and there is no clear path forward, no drillthrough to a detail view, no annotation explaining a known anomaly, no indication of which metric drives which outcome, then the report has handed the user a problem and no direction for solving it. That is not a finished product. That is the opening move in a long thread of support messages.
The third is whether they trust what they are seeing. This one is harder to design for explicitly, but it matters more than the other two in the long run, because a user who does not understand how a number was calculated will not act on it with any real confidence, and a report that cannot be trusted will gradually stop being used at all.
The fix was always upstream
The temptation when a handoff goes badly is to address it downstream: write better documentation, hold a training session, produce a user guide. These things are not useless, but they respond to a symptom rather than a cause. The real fix is a decision made much earlier, before the tool is ever opened, when someone stops to ask what the user will need in order to trust this without the builder in the room, and then builds the answer into the report rather than scheduling a meeting to explain it afterwards.
If you have ever sent a report link and immediately followed it with a message explaining what to look at first, that moment is worth examining closely. Reply to this email if it resonates. I read everything.
One small change worth mentioning: starting from this issue, Before it Starts is moving to a biweekly schedule. Same depth, same topics, just more space between issues to give each one the attention it deserves and to free up time for the other things I am building in parallel. If you have been here from the beginning, thank you for reading. More coming, just at a slightly different rhythm.
Cheers,
Julien