MCCULLEY/CUPPAN INC.

Important vs Relevant: The Distinction That Makes Regulatory Documents Decision-Ready

12/22/2025


 
Teams often endlessly add what they call “relevant” content to regulatory submission documents to feel complete. But keep in mind, regulatory reviewers need “important” content to make decisions. There is a clear distinction between important and relevant, and this mismatch drives reading friction, reviewer questions, and avoidable misunderstanding.

I want to define for you how I distinguish important versus relevant, then explain why the distinction matters.

Relevant information
Relevant information has a logical connection to the topic. Relevant content may be accurate, helpful, and necessary somewhere in the dossier.

Common forms of relevant content include:
  • Background context (disease, MOA, precedent, prior studies)
  • Secondary endpoints, subgroup cuts, and exploratory analyses
  • Operational methods detail and procedural nuance
  • Literature support that frames plausibility
  • Completeness content added to satisfy internal stakeholders

Relevant content answers: “Does this relate?”

Important information
Important information changes, supports, or constrains a regulatory decision. Important content earns priority because the reader must act on it.

Important content usually does at least one of these jobs:
  • Advances the logic trail: Claim → Evidence → Interpretation → Decision implication
  • Resolves a known regulatory question or risk
  • Reduces interpretive space by setting boundaries on meaning
  • Connects evidence to the proposed indication, population, dose, and labeling

Important content answers: “So what for the decision?” A working rule helps teams move faster:
  • Relevant = connected.
  • Important = decision-driving.


Why the distinction matters in real regulatory review

Regulatory reviewers read Module 2 to decide, not to accumulate information
Module 2 supports fast, selective decision-making. A reviewer navigates toward decision points, not toward completeness. When sponsors treat relevant as important, reviewers must do some triage. That workload shift creates friction and delays.

Relevant overload hides the message you most need the reader to see
Dense, undifferentiated text encourages skimming. Skimming changes what a reviewer notices. Further, skimming reduces recall: readers do not remember what they read. A buried bottom line often becomes a missed bottom line. A missed bottom line becomes a question, then an RFI.

Interpretive space becomes sponsor risk
Ambiguity invites inference. Inference becomes a working story inside the regulatory review team as they backfill interpretive gaps. A sponsor then spends precious time and intellectual capital rebutting the story. A better document prevents that story from forming.

Trust drops when prioritization drops
A regulatory reviewer who must hunt for meaning may assume weak reasoning. I suggest a reviewer who must assemble the argument may assume instability in the sponsor's thinking. This is the third level of information design: affective. Confidence erodes when the document does not lead the reader to the "So what?" Additionally, decision time rises when confidence drops.

Teams lose time debating sentences instead of decisions

Teams often defend paragraphs they argue are “relevant” when, in truth, the paragraphs are just “frosting on the cake.” The reality is teams should align on the regulatory decision questions, rank them, and then construct cogent responses. Prioritization is governance. A shared prioritization method reduces rewrite churn.

A practical method: The Decision Test
Use this test on any paragraph, table callout, or subsection. The test works in Module 5 and Module 2.

Ask five questions:

  1. What are the decision questions we must address in this document or family of documents?
  2. What content and line of reasoning addresses this decision question?
  3. Which claim does this paragraph support or constrain?
  4. What changes for the regulatory reader if this paragraph disappears from this section?
  5. Where should the paragraph live if removal changes nothing?


When removal changes little or nothing for the regulatory reader, then relevance exists without importance. That content should then be subordinated: placed in a table, in an appendix, or in Module 5.

I keep arguing to the teams I work with: “You gotta stop putting background (relevant information) in the foreground, where ONLY important content should reside.” To this day I have people tell me they always remember my analogy of treating documents like real estate: never build cheap “nice-to-know” information into the “high-rent” districts of your document.

I argue that a team should treat the decision test as a permission slip to move text. A team may also treat the test as a permission slip to delete text.

A second method: The Section Job Rule
Importance is not universal. Importance depends on the job of the section. A simple prompt helps: “What must a reviewer understand after reading this section?”

Section purpose determines what deserves top position. Section purpose also determines what should move downstream. In an efficacy conclusions section, baseline detail is often supporting. The bottom line should lead. In generalizability, baseline imbalances may become decision-driving. Those details may become important.

Patterns that signal “relevant pretending to be important”

Watch for these signals during authoring and reviewing:
  • Background lead-ins before any conclusion
  • Repeated context that does not change interpretation
  • Multiple subgroup cuts without a boundary statement
  • Methods detail in sections whose job is interpretation
  • Statements that report differences without meaning for the decision
  • Lists of facts with no claim, no evidence hierarchy, or no implication

These patterns do not mean the content is wrong. These patterns mean the content is poorly ranked. Remember, information design is a driver of reader understanding.

Where relevant content should go

Relevant content still matters. Relevant content should not compete with decision-driving messages.

Tables for breadth, text for meaning

  • Put breadth in tables.
  • Use text to interpret patterns and implications.
  • Tables support scanning and comparison.
  • Text should answer the “so what?” question.


Appendices for completeness
Appendices protect readability in core sections. Appendices also preserve traceability for audits and reviewers.

A reviewer may access detail when needed. A reviewer should not be forced to wade through detail.

Cross-references to Module 5 for depth

Module 2 should guide and interpret. Module 5 should provide underlying evidence detail.
A crisp reference often beats a long restatement. Restatement increases bulk without improving decisions.

Progressive disclosure within a section

  • Lead with the conclusion.
  • Add boundaries and caveats next.
  • Add key supporting detail last.

This order respects how reviewers navigate and reduces misinterpretation.

Closing thought
Regulatory readers do not need more facts. Regulatory readers need ranked facts that point to decisions.
  • Relevant information connects to the topic.
  • Important information drives the decision.

Sponsors who master the distinction write less, but communicate more. Sponsors who master the distinction also reduce avoidable regulatory inquiry.

The FDA Reviewer Is Human: Writing Submissions for Regulatory Readers, Not Aliens

12/22/2025


 
I strongly suggest that most sponsors still write as if FDA reviewers read slowly, linearly, and generously. My experience says the opposite: the FDA reviewer is a time-starved human decision-maker. The reading model I constantly talk about starts from that premise. In turn, I argue that this model should change how you design every page of every document you submit to FDA and EMA.

I always argue to get the "So what?" up front (I am practicing what I preach in this article).

“What planet are they from?”
On a client call last week, I was challenged with this query: “Well Greg, your model of reading behavior is interesting. But how do you know it applies to people working at FDA?”

That question triggered a quiet “WTF?” for me—not because the question was rude, but because of what sat underneath the query. I hear variations of the same line in my workshops:

  • “What planet is this reviewer from?”
  • “I don’t read like that.”
  • “But I know a person at FDA who says they do…”

My answer never changes: “Trust me, all regulatory agency reviewers are carbon-based life forms like you and me. They are not from some exoplanet identified by the Kepler telescope.”

The deeper problem is not skepticism about the reader behavior model I present. The deeper problem is a persistent myth: the FDA reviewer is an alien whose behavior sits outside normal human reading constraints, so we cannot predict how they will react.

This myth feels convenient. Mythology helps development teams avoid harder questions:
  • Did we design this document for how the busy, highly selective, decision-making reader actually reads under pressure?
  • Or did we design it for how we wish they read if they had unlimited time and attention?

The myth of the alien FDA reviewer
Many teams carry unspoken mental models of the regulatory reader:
  • A perfect logician who reads line-by-line.
  • A hostile outsider with strange expectations who just cannot appreciate our science.
  • A black box entity whose behavior cannot be predicted.

These models create comfort stories:
  • “They just didn’t read it properly.”
  • “The issue came from their wrong interpretation, not our data or analysis.”
  • “They just didn’t want to work hard enough. They get no pity points from me.”

The deflecting exhortations protect the development team from added discomfort. In reality, your FDA reviewer:
  • Faces high-volume reading across multiple projects.
  • Works inside a clock and a meeting calendar, not a reading retreat nestled in the Catoctin Mountains of Maryland.
  • Carries personal accountability for public health decisions.
In other words: a human with limited working memory, limited time, and real stakes. Blaming “lack of time to read properly” misses the point. The real issue often looks like this: The document was never written for the way humans must read when time and risk collide.

What my reading model really describes
The model I use to explain how expert readers move through complex documents under pressure grew out of the reading research community and the evidence we have collected formally and anecdotally in conversations with regulatory agency reviewers.

The observed and anecdotal behavior is clear:
  • Readers jump across content
  • Readers triage: What must I understand now? What can wait?
  • Readers trace claims
  • Readers conserve cognitive effort
  • Readers optimize for decision risk

The model I use is a description of high-stakes human reading: Triage → Navigate → Sample → Trace → Cross-check → Decide.

If that sounds familiar to your own behavior as a clinical lead, safety physician, regulatory writer, or statistician, then I call it good. This is the point.

Why our reading model applies to FDA reviewers
Back to the question: “How do you know your model applies to people working at FDA?”

Here is the short answer: nothing about the FDA reviewer exempts them from human cognition or organizational pressure. Four reasons matter.

Shared cognitive hardware

FDA reviewers are expert clinicians, statisticians, and pharmacologists. They are not endowed with extra RAM. They still:

  • Hold only a few chunks of information in working memory at once
  • Experience overload when faced with dense, undifferentiated text
  • Rely on cues—headings, topic sentences, structure—to build a mental map

Keep in mind:
Professional training refines judgment and process. Training does not rewrite the basic limits of attention and memory.

Structural pressures of the role
Consider the environment:
  • Multiple applications and supplements in play.
  • Internal meetings, advisory prep, safety signal reviews, emerging data, and those dreaded Type B Meeting briefing books.
  • Formal review clocks and deadlines.
  • Public and political scrutiny as well as legal accountability.

These pressures force triage and selective reading. No one in such an environment reads every paragraph with equal depth. The reading behavior model we use represents a rational coping strategy used by reviewers at FDA and EMA.

Consistency across contexts
What I argue as reading behavior appears in:
  • Drug sponsors
  • Ethics committee reviews
  • HTA and payer evaluations

When the context looks the same (complex evidence, compressed time, meaningful risk), reading behavior converges. FDA reviewers are not the exception. They are the most visible example.

If the reading model I describe does not apply to FDA reviewers, then this would imply the "agency" discovered a way around human cognition that the rest of us lack. No evidence supports that belief. People at FDA are indeed carbon-based life forms from good ol' Planet Earth.

“How should we structure this document if our success depends on a busy human understanding the core message in one pass?” This is the question to be addressed in every document planning meeting.

A final thought: If your organization wants to stress-test your current draft documents against a robust reading and decision-efficiency model, then reach out to me.



From Text Density to Decision Clarity

12/2/2025


 
How Decision Efficiency Redefines Quality
Regulatory writing has evolved more in the past five years than in the previous 23, dating back to when the first eCTD guidance reshaped expectations for how sponsors structure and submit information. The move to electronic documents led to wholesale “infobesity” in submission documents.

Once submissions shifted from binders to electronic modules, the constraints that once forced concision vanished. Sprawling narratives, dense paragraphs, and an overwhelming volume of “just-in-case” content could be created at virtually no direct cost. In turn, regulatory and clinical research readers did not gain clarity—they gained cognitive load. This shift exposed a deeper truth: regulatory and medical writing was not suffering from a lack of information. Rather the writing was suffering from a lack of decision-oriented design.

Our industry has been evolving ever since, moving from text density toward lean writing and now toward decision clarity.

Text Density — When Volume Masqueraded as Rigor
The early years of eCTD adoption created a quiet but powerful shift: once page limits disappeared (or were ignored), volume became a proxy for thoroughness. Teams got into the “storytelling” business and filled sections with extensive background, descriptions, and narratives about development choices. Exhaustive restatements of data already visible in tables appeared in text whenever the opportunity arose.

Text density produced the illusion of rigor. A 200-page Clinical Overview or a 450-page Summary of Clinical Safety looked complete, but regulatory readers had to work hard to locate relevance, reconstruct logic, and discover answers to their questions. The writing provided information, but not orientation. Reviewers encountered paragraphs thick with detail yet thin on meaning.

Text density also obscured reasoning. Critical comparisons, implications, and interpretive cues were often buried deep within paragraphs, leaving regulatory readers responsible for making connections the sponsor should have made explicit. This created a systematic drag on regulatory review: more content meant more friction, not more understanding.

This era revealed a foundational problem that still affects submissions today: information without structure does not support decisions.

Lean Writing — Necessary, but Not Sufficient
The industry’s first corrective action to address text density was and still is “lean writing.” Teams recognized that dense passages and exhaustive narrative created reader friction, so the solution became minimalism (a topic I have written about here on multiple occasions). The shift has helped, but only at the surface level.

Lean writing reduced noise, but it did not facilitate understanding.

Many documents now look cleaner but still lack decision orientation. A protocol may present short, well-edited sentences yet hide the operational logic reviewers and sites need. A Clinical Overview may be concise yet still require the reviewer to infer why a finding matters or what the sponsor means when it uses the term “clinically meaningful” 23 times in the document. A Summary of Clinical Safety may use lean prose but avoid interpreting the patterns revealed in its own tables.

Lean writing is a valuable improvement—no question. But the gains remain shallow when the deeper reasoning structure stays unchanged. Lean prose without interpretive clarity still forces the reader to supply missing logic, rebuild context, or hunt for warrants behind claims.

This era revealed a second truth: reduced text volume does not guarantee reduced cognitive burden. Lean writing improves readability, but not necessarily regulatory decision-making.

Decision Clarity — The New Standard
The next stage in the evolution of regulatory writing is not shorter text—it is supporting clearer decisions. Decision clarity shifts the writer’s goal from reducing words to reducing the reviewer’s cognitive load. The measure of quality becomes simple: How efficiently can a regulatory reader reach a defensible conclusion?

Decision clarity begins where lean writing ends. Once excess words are removed, the real work starts: structuring the logic behind claims, interpreting evidence directly, and guiding the reviewer through the reasoning that connects data to decisions. Decision clarity requires writers to treat every paragraph as an opportunity to make meaning visible.

Documents that achieve decision clarity share three characteristics:
  • Lead with the conclusion. Sections begin with the “So what?”—the decision-relevant point—followed by the reasoning and evidence that justify the position. Reviewers never have to hunt for the point the sponsor is trying to make.
  • Make reasoning explicit. Interpretation is not implied or buried. Writers connect effect size to clinical relevance, exposure to safety implications, uncertainty to risk boundaries, and subgroup findings to generalizability. The reviewer is never forced to supply missing logic.
  • Create visible decision cues. Reviewers see the logic trail: purpose → evidence → interpretation → implication. Headings signal meaning, not merely topics. Tables are interpreted directly. Comparisons are explicit. Warrants behind key messages and claims are stated, not assumed.

Decision clarity transforms submission documents from passive repositories of information into decision tools. Reviewers gain momentum instead of friction. The thinking of the writing team is traceable.

This era reveals the third and most important truth: clarity of reasoning—not speed to final draft, not minimalism—is the core determinant of regulatory decision efficiency.

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

We solved the wrong problem for 20+ years.
eCTD made it easy to add content—but not to create clarity.
The next leap isn’t faster drafting or cleaner sentences. It’s decision clarity: writing that reduces friction and accelerates reviewer reasoning.

Check out my new article in the Decision Efficiency series: www.linkedin.com/pulse/from-text-density-decision-clarity-gregory-cuppan-bvwvc

    Author

    Gregory Cuppan is the Managing Principal of McCulley/Cuppan Inc., a group he co-founded. Mr. Cuppan has spent 30+ years working in the life sciences with 20+ years providing consulting and training services to pharmaceutical and medical device companies and other life science enterprises.


