
Decision-Enabling Design: How Briefing Books Should Think Like Regulatory Reviewers

11/11/2025

The real performance of a regulatory briefing book is not measured by page count or polish. It is measured by how efficiently a regulatory reader can follow the logic, test it, and reach a defensible position on the sponsor’s ask.

Unfortunately, many briefing books still make the reviewer work too hard. They are organized by how teams built the data—not by how regulators make decisions. That misalignment sounds minor, but it has profound cognitive consequences.

The Hidden Cost of Misalignment
When a document mirrors internal development logic instead of regulatory reasoning, the reader must reverse-engineer the argument. The regulatory reader sifts through design history (“the largest study ever planned in this patient population”) to locate decision relevance. They assemble fragments of detail and data to reconstruct the sponsor’s logic trail. Each mental step adds friction—time, effort, and uncertainty.

The concept of friction is not to be underestimated. Legal scholar Brett Frischmann describes friction as resistance or drag built into documents used in legal and regulatory decision processes. Cass Sunstein and Richard Thaler refer to unnecessary complexity in documents as sludge—“bad friction” that drains cognitive energy and delays decisions.

In regulated markets, economists have quantified these effects. Studies in The Quarterly Journal of
Economics and the American Economic Review show that even modest “choice frictions” in Medicare decisions significantly slow and distort judgments. The parallel for regulatory writing is direct: when
reasoning is hard to trace, decision friction rises. 
I often tell clients that their briefing book delivers information but fails to deliver understanding.

Common Patterns of Friction
  • Redundant background that keeps reappearing in position statements and appendices.
  • Chronological sequencing that buries the current issue under layers of past work and recycled context.
  • Functional silos that separate safety, efficacy, and exposure narratives instead of integrating them around the regulatory question.
  • Tables without interpretive anchors that leave reviewers to construct meaning.
  • Massive appendices meant to prove thoroughness rather than aid reasoning.
Each of these choices forces reviewers into reconstruction mode. They also trigger what cognitive-load theorists call the split-attention effect—when information that must be mentally integrated is physically separated across pages, tables, or sections. John Sweller’s research in applied cognitive psychology shows that split attention increases working-memory load, slows comprehension, and makes readers less confident in their conclusions. In briefing books, the effect appears when a data table sits ten pages from its interpretation or when related safety and efficacy evidence live in separate silos. The reader’s mind does the integration work the document should have done.

And when reviewers must reconstruct logic, decision friction increases and confidence declines. Reviewers start to question not only the data, but the sponsor’s grasp of its own argument. A document that cannot show its reasoning clearly invites more information requests, longer review cycles, and less trust in sponsor judgment.
The Decision Pathway
Decision-enabling design exists to solve this exact problem. My vision of information design reorganizes briefing content around the decision pathway—the same mental sequence reviewers follow when testing an argument:

Issue → Evidence → Interpretation → Position → Request

This sequence is not arbitrary. It mirrors both the FDA Benefit–Risk Framework (issues → evidence → appraisal → conclusion) and the structure used in legal briefs (Issue → Rule → Application → Conclusion). Across disciplines, this logic flow has proven to reduce cognitive friction and increase confidence in reasoning.

When that sequence is built into the structure, reviewers stop searching for logic and start evaluating reasoning. That shift—from retrieval to evaluation—is the defining mark of a well-designed briefing book.
The Shift: From Information Display to Decision Design
Teams often assume that completeness equals clarity. It doesn’t. Regulatory readers don’t reward density—they reward traceable reasoning.

A decision-enabling design builds its logic trail around the reviewer’s task. Each section of the book should answer one implicit question: What decision does this content support, and what evidence makes it reasonable?

When that sequence is visible, reviewers stay oriented. When it’s hidden, they reconstruct it—usually with lower confidence and more questions.
The Hallmarks of Decision-Enabling Design
A. Purpose-Anchored Structure. Each major section begins with the So what?—why the issue matters and what decision is needed now. This tells reviewers not just what they’re reading, but why it matters.

B. Progressive Disclosure. Information flows from summary → key evidence → traceable detail. Reviewers can engage at any level of depth without losing sight of the reasoning line.

C. Decision Cues at the Point of Use. Tables and figures are placed where the decision occurs—not buried in appendices. Each display answers a single regulatory question and closes the loop with interpretation, not adjectives.
The Result: Cognitive Efficiency, Not Rhetorical Flourish
  • When design mirrors the decision process, cognitive load drops.
  • Readers no longer have to infer your logic; they can inspect it.
  • Review becomes faster, more confident, and less adversarial.
That is the quiet power of decision-enabling design: the approach turns information into defensible reasoning—and does so on the regulatory reader’s terms.
The Takeaway
A decision-enabling briefing book doesn’t argue harder. It argues smarter. It lets structure do the heavy lifting and keeps reasoning in full view. When regulatory teams master this design discipline, meetings change. Regulatory readers spend less time asking “Where does this fit?” and more time discussing “What does this mean for the development program, the patient, and the label?” That’s the point—clarity that enables decisions.

Join the conversation on LinkedIn: https://www.linkedin.com/pulse/decision-enabling-design-how-briefing-books-should-think-cuppan-3pa7c

Decision Efficiency: The Missing Metric in Regulatory Writing

11/11/2025

Regulatory documents are judged not only by what they say, but by how efficiently they help regulatory readers decide. Let’s keep in mind that every document you produce that ends up on the computer screen of a regulatory agency reader is intended to be an advisory document: a document intended to aid the reader in making decisions.

However, my experience suggests that many development teams do not fully appreciate this point. Teams tell me they are in the business of reporting. Teams measure quality by format compliance, data completeness, or adherence to templates — not by decision efficiency. You are not in the reporting business. You are in the business of advising.

This judgment gap on “what documents do” explains why many submissions are technically correct but cognitively exhausting to read. Most regulatory documents fail not because they are incomplete, but because they are opaque to reasoning. Regulatory readers are not struggling to find data — they are struggling to follow the thinking behind the data. Every table, paragraph, and conclusion should serve one purpose: to let the reviewer see how evidence informed our thought processes and contributed to decision-making.

Regulatory reviewers do not need more information. They need better reasoning access — clear logic, visible connections, and writing that mirrors how decisions are made.

Decision efficiency measures how well a document enables a regulatory reader to:
  • Locate what matters. Key messages and comparisons must surface immediately through structure, not search. A reader should know within seconds what question is being answered or where to find the answer to their newly constructed question.
  • Interpret evidence in context. Data gain meaning only when tied to design intent, patient population, and control comparisons. Decision-efficient documents always keep this context in view, preventing cognitive drift.
  • Reach a defensible conclusion with minimal friction. The reviewer’s reasoning path should feel inevitable, not effortful — each paragraph leading logically to the next decision point.

Keep in mind that regulatory readers are auditing reasoning associated with study design, conduct, and data sets. When insight into sponsor reasoning pathways is nonexistent or blocked, the cost is real. This is where friction comes into play—the reviewer should not have to backtrack, infer, or guess.

Reviewers spend time reconstructing logic that writers should have made transparent. Dense text, redundant tables, and poorly signposted arguments force readers to think about the document rather than through it. Each moment of confusion compounds uncertainty. Confidence in the sponsor’s reasoning drops, and the agency’s questions multiply.

This is not inefficiency of time — it is inefficiency of thinking.

The inefficiency of thinking is the far more dangerous dimension. It erodes trust, clouds the evidence trail, and converts clarity into doubt. In an environment where regulatory review cycles are compressed and AI tools assist human readers, the ability to think clearly through a document — not merely within it — is the new competitive advantage.

Decision efficiency evaluates the clarity and logic density of a document — not the volume of its words.
At McCulley Cuppan, we define decision efficiency through three observable dimensions:
  • Logic Trail Quality — How clearly does the document link Purpose → Evidence → Interpretation → Decision?
  • Decision-Cue Density — How easy is it for the reviewer to find and recognize the signals that guide judgment (topic sentences, So what? statements, structured comparisons)?
  • Regulatory Reviewer Workload Signal — How much cognitive effort is required to extract meaning, confirm traceability, or identify implications?
When these three align, a document becomes decision-ready — not just submission-ready.
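
To make these three dimensions concrete, here is a minimal sketch of how a team might record rubric scores. This is my own illustration, assuming a simple 1–5 scale per dimension; it is not a published McCulley Cuppan instrument.

    from dataclasses import dataclass

    @dataclass
    class DecisionEfficiencyScore:
        """Illustrative rubric: each dimension scored 1 (poor) to 5 (strong)."""
        logic_trail_quality: int    # Purpose -> Evidence -> Interpretation -> Decision linkage
        decision_cue_density: int   # topic sentences, "So what?" statements, comparisons
        reviewer_workload: int      # inverted scale: 5 = minimal effort to extract meaning

        def decision_ready(self, threshold: int = 4) -> bool:
            # A document is decision-ready only when all three dimensions align.
            return min(self.logic_trail_quality,
                       self.decision_cue_density,
                       self.reviewer_workload) >= threshold

    # Example: a strong logic trail cannot compensate for weak decision cues.
    draft = DecisionEfficiencyScore(5, 2, 4)
    assert not draft.decision_ready()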

Why This Metric Matters Now
Regulatory authorities worldwide are integrating AI-assisted review tools that rely on structured reasoning and clear metadata. Decision efficiency determines whether those tools (and human reviewers) can interpret content without manual intervention. In this context, poor writing is not just a style flaw — it is an information-access problem. A document with low decision efficiency hides logic behind noise.

Moving from Compliance to Cognition
Teams that focus only on compliance write to satisfy checklists. Teams that focus on decision efficiency write to support thinking.

This shift transforms how review rubrics, templates, and training are used:
  • Rubrics become diagnostic instruments, not scorecards.
  • Templates become logic frameworks, not containers.
  • Review becomes a test of reasoning fluency, not format and detail accuracy.

Efficiency is not producing documents faster — it is helping regulators decide faster, with confidence. This is the next frontier of quality in regulatory writing. Decision efficiency is emerging as the missing metric — the measure that predicts whether a document will accelerate or delay regulatory decision-making.

Future articles in this series will show how to design decision-efficient documents, measure them with structured rubrics, and visualize reasoning clarity through AI-enabled analysis dashboards.

Every Regulatory Submission Is an Argument in Disguise

10/23/2025

Behind every table, figure, and p-value lies one purpose: persuading regulatory readers that your interpretations are built on logical, evidence-based analyses. In the workshops I facilitate, participants always hear me invoke: “regulatory writing is not neutral—it is strategic.” Each justification is an argument that the data are reliable, the analyses reproducible, the quality consistent, and the benefit–risk balance acceptable.

Persuasion in this context is not simply rhetoric. It is “confidence engineering”—helping regulatory readers reach well-supported decisions quickly and with trust.

Why Persuasion Matters in Regulatory Writing

Briefing books and Module 2 submission documents go beyond summarizing data. They exist to justify scientific and development choices—to explain why a development program, design, or conclusion deserves confidence.

Regulatory reviewers approach every document with professional skepticism. They must confirm that claims are supported, methods are sound, and limitations are acknowledged. Writers who anticipate these needs—by shaping information to mirror how reviewers think—make decision-making easier.

The goal is not to impress regulators with volume (I have clients who still want to “bulk up” documents). Rather, it is to enable clear judgment through structure, logic, and transparency.

Start Where the Reader Starts — Lead with the Conclusion
Regulatory readers read for certainty. Lead with your conclusion, then show how the evidence earns it. Use a top-down flow:
  • State the conclusion upfront. Present the “So what?” in your first sentence.
  • Support with key evidence. Follow immediately with high-impact data.
  • Provide reasoning. Explain why the evidence supports the conclusion, addressing likely counterarguments.
This deductive approach respects how reviewers process information under time pressure—fast, selective, and purpose-driven.

Signal Your Logic, Don’t Bury It
Regulators look for explicit markers of reasoning. Framing phrases act as cognitive signposts:
  • “This approach is supported by…”
  • “The data demonstrate that…”
  • “These findings justify the proposed…”
These cues tell the reader where justification begins and how it progresses. They reduce cognitive load and reinforce transparency—a hallmark of credibility.

Comparison Is the Language of Persuasion
Regulatory readers judge claims in context, not isolation. Comparative framing strengthens justification by positioning your evidence within a known landscape.
  • Benchmark against standards: “This response rate exceeds historical controls by 30%, suggesting improved clinical benefit.”
  • Position within the landscape: “Unlike standard chemotherapy, this mechanism directly targets tumor pathology, reducing off-target toxicity.”
  • Differentiate from alternatives: “Compared with Drug X, this regimen achieves similar efficacy with a more favorable safety profile.”
Comparison gives reviewers a cognitive reference point—a mental ruler to gauge significance and relevance.

Readers Follow Logic, Not Chronology
A strong justification follows a predictable rhythm: Why → How → What.
  • Why does this matter? Relevance to the regulatory decision.
  • How was it determined? Brief summary of the evidence or rationale.
  • What does it mean? Implications for design, approval, or labeling.
This structure converts data into reasoning. It organizes thought rather than chronology and minimizes interpretive burden.

Regulatory writing, done well, aligns with how reviewers think, process, and decide.
That is the science of persuasion: clarity as method, structure as reasoning, and trust as outcome.

eCTD v4.0 Is More Than a Format Change: A Shift Toward Digital, Decision-Ready Submissions

9/28/2025

For years, eCTD v3.2.2 has been the backbone of global regulatory submissions. It gave structure, consistency, and a common language for sponsors and regulators. But today, our products and the data behind them have outgrown the rigid folder hierarchy of this eCTD platform. eCTD 4.0 is not just a technical upgrade. It is a shift toward data-centric, decision-ready submissions designed to match how regulators actually read and decide.

Why eCTD 4.0—Built for modern product complexity
In eCTD v3.2.2, the backbone of a submission is a fixed folder tree. Each document must be placed in a rigid location with a pre-defined name, a concept that works well for small molecules and many biologic submissions with predictable sections. But this rigidity creates friction when you add new data streams. Emerging therapies like CAR-T cell products and CRISPR-based gene editing make this challenge sharper. These products generate new evidence categories, such as chain-of-identity and chain-of-custody records for each patient’s cells, highly specialized potency assays, and genomic off-target analysis for edited cells. Under eCTD v3.2.2, sponsors often wedge these data sets into ill-fitting sections or invent new folder labels.

The result is added complexity for the regulatory reviewer. Instead of following an established mental map, regulators must pause to interpret where the information sits, why it’s there, and potentially how it links to other evidence. That extra interpretive work most likely increases cognitive load—the mental effort needed to find and integrate meaning. As I have discussed in other articles, high cognitive load slows review, raises the risk of misinterpretation, and invites clarifying questions or requests for additional information.

eCTD 4.0 reduces these problems with a metadata-first model. Instead of forcing content into a physical place, each document or data set is described by rich metadata. The metadata provides terms for data type, data source, and regulatory intent. These metadata elements become the “coordinates” for regulators to find what they need.

On top of metadata sits “Context-of-Use” tagging. This is a way to declare why the document exists and how it is meant to be used in the dossier review. For example, a potency assay file might carry context tags such as:
  • Supports product comparability across manufacturing sites
  • Establishes control strategy for viral vector potency
This context is an advance organizer that tells reviewers the intended purpose of evidence without requiring a narrative workaround.
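
To make the model concrete, here is a rough sketch of a metadata-first record in code. The field names are my own illustration, not the ICH-specified vocabulary.

    from dataclasses import dataclass, field

    @dataclass
    class SubmissionUnit:
        """Illustrative metadata-first record; field names are not the ICH vocabulary."""
        document_id: str          # stable identifier that persists across the lifecycle
        data_type: str            # one "coordinate" for locating content by what it is
        data_source: str          # where the evidence comes from
        regulatory_intent: str    # why the content is in the dossier
        context_of_use: list = field(default_factory=list)  # declared purpose tags

    # The potency assay example above, found by what it does rather than where it sits:
    potency_assay = SubmissionUnit(
        document_id="DOC-000123",
        data_type="potency assay",
        data_source="viral vector release testing",
        regulatory_intent="quality",
        context_of_use=[
            "Supports product comparability across manufacturing sites",
            "Establishes control strategy for viral vector potency",
        ],
    )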

In information design, as I teach it in my workshops, an advance organizer is a signal placed before complex material to help the reader build a mental map. Context-of-Use tags work in the same way: by explicitly stating why a document or data set exists and how it will be used in decision-making, they prepare reviewers to integrate what they are about to read, reduce search time, and anchor interpretation. Instead of scrolling between sections or inferring meaning from file names, the reviewer sees at a glance the analytical or regulatory task the content supports.

This is a cognitive service.

Over the course of a 200,000+ page submission, reducing even small interpretive friction compounds into meaningful efficiency and fewer avoidable queries.

The result: a scalable submission structure. You can introduce new modalities and data types without breaking the CTD map. Reviewers can filter and navigate by function and relevance, not just by where you tucked the file. And you can maintain submission integrity over time — updates to metadata or context can clarify meaning without re-lifecycling the entire file.

Instead of duplicating entire documents across IND, NDA, BLA, and global variations, eCTD 4.0 uses unique document identifiers and machine-readable metadata to let the same content unit be updated and referenced throughout a product’s lifecycle. This reduces redundant authoring, helps maintain consistency across submissions, and supports automation for publishing and review.

Lastly, richer metadata and stable document identity are now critical as regulatory agencies pilot AI tools for comparison, safety signal detection, and labeling review. A well-tagged eCTD 4.0 dossier will be easier for both humans and machines to navigate.

Global Status
  • FDA began accepting eCTD 4.0 submissions in September 2024.
  • Japan is piloting and aiming for mandatory use in 2026.
  • The EU and other regulators are preparing pilots.
  • A mixed-mode period is coming — many sponsors will maintain both v3.2.2 and v4.0 workflows for the next several years.
The Opportunity
For organizations ready to adapt, eCTD 4.0 isn’t just compliance work. It’s a chance to modernize regulatory writing, improve clarity and traceability, and align better with how reviewers — and soon their AI assistants — read and analyze submissions.

Further Reading
  • FDA & ICH — eCTD v4.0 Implementation Guide
    Official specification covering metadata-first structure, submission-unit messaging, and context-of-use tagging. Download PDF
  • EMA eSubmission — Business Cases & Advantages of eCTD 4.0
    European perspective on business drivers, reuse, and harmonization of metadata-driven submissions. Download slides
  • Veeva — Plan for Submission Success with eCTD 4.0
    Practical guidance on context-of-use, document reuse, and preparing publishing systems and teams. Read here

Designing Document Links for Humans and Machines: Getting It Right

9/22/2025

So what? The way regulatory and medical writers design document hyperlinks now influences not only how efficiently human reviewers read but also how effectively AI tools, like FDA’s Elsa, parse, cross-reference, and retrieve content. Link design is no longer simply a matter of information economy, as defined within lean writing precepts. Link design directly shapes comprehension for humans and interpretability for machines.
The Human Reader Challenge
Many biopharma document users must read under time pressure. They work within and across long documents with dense appendices, moving between summary sections, data, methods, and supporting detail. Poorly structured links force them to jump back and forth with little context. That jump is not harmless.

Cognitive psychology highlights two distinct challenges:
  • Split-attention effect. Readers must hold partial, spatially separated information in working memory while navigating elsewhere for the missing pieces. The extra load strains memory capacity, leaving less mental energy for comprehension.
  • Misdirected attention effect. In-text hyperlinks require constant micro-decisions about whether to click (what I refer to as the “should I stay or go now” phenomenon), increasing cognitive load and diverting attention from the main text. Poorly conceived links send readers into irrelevant or low-value detail. When they return, context is lost and momentum broken. Attention has been wasted on material that does not advance the argument.
For example:
  • A phrase such as “see appendix for details” without previewing what is in that appendix increases cognitive load.
  • A chain of multiple links (from main text → appendix → sub-appendix → external document) fragments attention further.
  • A sentence that presents three or more hyperlinks in sequence—such as “refer to 5.3.2, 6.4.3.1, 8.4.5.3”—forces readers to scatter attention across several targets at once, with no clear priority. The reader must also decide: Do I need to check all three? In what order? After reviewing one section, do I return to the original sentence before moving to the next? Each of these decisions adds unnecessary cognitive load and fragments comprehension.
Link formatting is not neutral—it influences what stands out and what gets lost. The Von Restorff effect shows that distinctive items are more likely to be noticed and remembered. Purposeful formatting of critical links can therefore guide reviewers’ attention to essential evidence. But overuse or inconsistent styling dilutes the effect, turning the page into visual noise. Instead of helping reviewers prioritize what matters, the links compete for attention and distract from the argument.

The result: reviewers not only lose the argument thread but also misinterpret the evidence when links overshoot or underserve their purpose. Poor link design magnifies two distinct risks—split attention and misdirected attention—each undermining comprehension in different ways.

The AI Reader Challenge
Hyperlinks can both improve and challenge Natural Language Processing (NLP) parsing in large technical documents by providing additional context and structure while also introducing noise and formatting complexities. For modern Large Language Models (LLMs), hyperlinks can be invaluable resources to enhance the quality of text-based applications like Retrieval-Augmented Generation (RAG).

AI tools such as FDA’s Elsa approach links differently. Machines do not skim, infer, or guess. They parse hierarchies and rely on structure. A vague cross-reference like “see above” or “refer to Appendix 1” leaves a machine with no anchor point.
For AI:
  • Consistency matters. A link must follow a stable format—such as “Appendix 3.2.S.4.1 Dissolution Data”—not shorthand like “see Table in Appendix.”
  • Persistence matters. Anchors must point to stable IDs or tags, not text that may shift during drafting.
  • Context matters. Machines need metadata around the link to understand relationships. For example, is the link pointing to supporting evidence, regulatory precedent, or comparative data?

The anchor text of a hyperlink—the clickable words—often provides a concise, semantically meaningful summary of the linked content. An NLP model can use this information to better understand the linked document’s topic and relevance. Internal links that connect different sections of the same document can act as a roadmap, informing NLP models of the document’s structure, similar to how a table of contents functions.

But not all anchor text is descriptive. Hyperlinks with generic text like “refer to Section 6.4.2” or “see Appendix 1” introduce noise for NLP systems, as they provide no semantic information. Another factor: technical documents in various formats, such as PDFs or legacy file types, may lack consistently marked-up hyperlinks—posing a major challenge for accurate hyperlink extraction.

Without consistent structure and metadata, AI parsing has constrained value. Tools may mis-index or mis-categorize evidence, leading to gaps in automated review or flawed analytics.

The Dual Design Challenge
The challenge for regulatory writers is designing links that serve two audiences at once.
Human-centered design:
  • Preview what the reader will find on the other side of the link.
  • Integrate the link into the sentence logically, so readers don’t lose context.
  • Minimize unnecessary toggling by including summaries or excerpts before the link.
Machine-centered design:
  • Use structured patterns—consistent numbering, explicit section identifiers.
  • Anchor to stable, persistent IDs rather than vague references.
  • Provide metadata or descriptive labels that help AI categorize the relationship.

These principles are complementary, not competing. Links designed well reduce cognitive load for people and improve interpretability for machines.
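
As a concrete illustration of these paired principles, here is a minimal sketch of one dual-audience link expressed as structured data. The field names and relationship labels are my own illustration, not a published standard.

    from dataclasses import dataclass

    @dataclass
    class CrossReference:
        """One link designed for both audiences; labels are illustrative."""
        target_id: str      # stable, persistent anchor (machine-centered)
        anchor_text: str    # descriptive, semantically meaningful text (both audiences)
        relationship: str   # metadata label: supporting-evidence, methods, precedent
        preview: str        # one-line summary shown before the jump (human-centered)

    link = CrossReference(
        target_id="app-3.2.S.4.1",
        anchor_text="Appendix 3.2.S.4.1 Dissolution Data",
        relationship="supporting-evidence",
        preview="Dissolution results supporting the comparability claim.",
    )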

Why This Matters Now
Two shifts make link design urgent:
  • Human workload is rising. Regulatory reviewers face expanding data volumes. Poor navigation multiplies their effort and increases the risk of oversight.
  • AI oversight is accelerating. Tools like Elsa are entering mainstream regulatory review. Documents that lack structured, machine-readable links may slow automated checks or create mistrust in sponsor submissions.
In short, link design is a risk management decision. If links distract the human or confuse the machine, the regulatory argument weakens.

Design Principles Going Forward
  • Preview before you link. Give readers context so they know why they are leaving the page. This reduces split attention by keeping meaning in view and prevents misdirected attention by signaling whether the link is relevant.
  • Reduce toggling. Summarize supporting evidence inline before directing to an appendix. This keeps working memory free (limiting split attention) and ensures only essential links are followed (limiting misdirected attention).
  • Think metadata. Add descriptive tags or labels clarifying the link’s purpose—whether it supports evidence, methods, or regulatory precedent. This helps AI parse relationships and signals to human readers whether the detail is worth following.
  • Audit your links. Review for vague references, irrelevant detail, and inconsistent styles. Each unchecked problem adds either memory strain (split attention) or wasted effort (misdirected attention).
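
As a rough starting point for that audit, even a short script can flag the vague reference patterns described above. The pattern list below is illustrative and would need tuning to your own templates.

    import re

    # Vague reference patterns that force readers to guess the target (illustrative).
    VAGUE_PATTERNS = [
        r"\bsee above\b",
        r"\bsee below\b",
        r"\bsee appendix\b(?!\s+\d)",                       # "see appendix" with no identifier
        r"\brefer to (?:the )?(?:section|table|figure|appendix)\b(?!\s+\d)",
    ]

    def audit_links(text: str) -> list:
        """Return (line number, offending line) pairs for manual review."""
        findings = []
        for lineno, line in enumerate(text.splitlines(), start=1):
            if any(re.search(p, line, flags=re.IGNORECASE) for p in VAGUE_PATTERNS):
                findings.append((lineno, line.strip()))
        return findings

    # Flags the vague reference, passes the explicit one.
    sample = "See appendix for details.\nSee Appendix 3.2.S.4.1 Dissolution Data."
    assert len(audit_links(sample)) == 1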
Bottom Line
Future-ready regulatory documents must support two modes of reading: fast, context-seeking human review and precise, structure-dependent AI parsing. Writers who treat links as part of information design—not just formatting—reduce cognitive load for reviewers today and build trust with AI systems tomorrow.

The real question is not whether your documents contain links. The real question is: Are your links designed for humans and machines?

https://www.linkedin.com/pulse/designing-document-links-humans-machines-getting-right-gregory-cuppan-veeec

Why Readability Is the Hidden Currency of Medical and Regulatory Writing

9/2/2025

Readability determines whether your document works—or fails—in practice. A protocol, briefing book, or other regulatory submission document may be perfectly accurate, but if readers struggle to navigate the document, they are likely to waste time, make mistakes, and lose confidence in the work.

Working definition: readability is the degree to which a document enables the intended readers to quickly find, understand, and apply the information with minimal cognitive effort.

In regulatory submission documents and protocols, readability is not optional—it is risk mitigation. High readability:
  • Reduces cognitive load for busy reviewers.
  • Prevents operational errors in study conduct.
  • Speeds decision-making by presenting information logically and without clutter.
When readability breaks down, interpretive space grows, and that gap between what’s written and what’s understood becomes dangerous.

The Hidden Cost of Poor Readability

Poorly designed documents:
  • Inflate reading timelines.
  • Trigger avoidable questions.
  • Increase site errors in clinical research protocols.
These costs often remain invisible until after the document is published and in use—when problems are hardest to fix.

Readability in technical and regulatory documents is not a cosmetic feature—it is a competitive advantage. As Saul Carliner observed, this is the second level of information design: enabling documents to perform reliably in real-world use. He also suggests that when your documents reduce cognitive strain, you build trust with readers.

The most successful submission documents I’ve reviewed are not only scientifically rigorous, they are designed to be read.

The Writer’s Responsibility
Patricia Wright stresses the author’s role in designing readable documents: “The message is that the onus for achieving successful communication cannot be safely left to the reader. Writers need to see themselves as catalysts for the strategies that their readers adopt; and they need to be aware of the design features that promote the selection of particular strategies.”

Wright’s insight shifts accountability squarely onto the author’s shoulders. Too often, medical and regulatory writers assume that expert readers will “figure it out” even if a passage is dense or disorganized. That assumption is dangerous in regulatory contexts, where readers work under time constraints, juggle multiple documents, and must reach reliable conclusions.

Readable writing is not about lowering standards—it is about following document design standards as an act of responsibility. Authors shape the strategies readers use, and they create conditions that promote consistent, accurate, and rapid comprehension.

The Hidden Cognitive Cost of Hyperlinks in High-Stakes Documents

9/2/2025

Hyperlinks are everywhere—in protocols, briefing books, submission documents, SOPs, policy manuals, and training guides. Their siren song promises speed, efficiency, and instant navigation.

In Greek mythology, the Sirens’ enchanting melodies lured sailors off course and onto rocky shores. Hyperlinks can work the same way: they invite you to click, to leave the safe harbor of your main discussion or argument in search of something interesting, only to risk losing your place, your context, and sometimes the point entirely.

In clinical research protocols, they may link eligibility criteria to lab thresholds, dosing schedules to product handling instructions, or definitions to appendices. For AI and other automated tools, these links are gold: they can map relationships between sections, create a machine-readable network of content, and cross-check for consistency.

For human readers, however, hyperlinks come with a trade-off: every link is a micro-decision. When you see a hyperlink, your brain has to ask:
  • Should I stay or should I go now? (to borrow a line from the music group The Clash)
  • Will I lose my place if I follow it?
  • Will it answer my question, or lead me somewhere irrelevant?
That momentary pause—repeated dozens of times in a dense protocol or Module 2 summary—splits attention, interrupts flow, and increases the risk of losing context. Cognitive science calls this the split-attention effect: when a reader must divide focus between two information sources, the effort of mentally integrating them increases cognitive load and reduces comprehension. In high-stakes documents, that extra mental friction can mean slower decisions, missed connections, or misinterpretation of the intended message.

In operational documents like clinical trial protocols, GxP SOPs, or emergency response manuals, these interruptions are more than an inconvenience. They can delay decisions, increase errors, and erode compliance.

In regulatory briefing books and Module 2 documents, hyperlink interruptions can undermine both efficiency and precision. Each click risks pulling reviewers away from the main argument, breaking the logical chain that supports a decision. Re-orienting after navigating to annexes, study reports, or external references slows the evaluation process and can erode the clarity of your case. Strategic hyperlinking should serve the narrative—pointing to critical evidence only when it truly strengthens comprehension—rather than scattering attention across disconnected content.

Another trap in protocol and submission writing is embedding full source links directly into the main narrative, as if the link itself proves transparency or credibility. In practice, this type of linkage clutters the discussion, distracts the reader, and breaks the flow. Demonstrating a link to source is essential—but the primary text should focus on instruction, reasoning, interpretation, or conclusion.

Make the appearance of secondary and tertiary links as subordinate as their intention. Use superscript notation to link to footnotes, references, or appendices. The goal is to make these supportive connections accessible without becoming visual speed bumps for your reader.

Like the sailors of myth, regulatory and medical writers must recognize when the Sirens are singing. Hyperlinks can be useful guides, but if they tempt the reader away from the main course of the discussion or argument, the hyperlink risks wrecking clarity on the rocky shores of distraction. The safest passage is not to silence the Sirens, but to decide when their song strengthens the voyage—and when it should be left unheard.

The Dangerous Habit of Letting Draft 1 Do Your Thinking

7/29/2025

Why the “Do” Must Precede the Draft in Regulatory Writing
Why do regulatory teams treat Draft 1 like a mirror—something to react to—rather than a blueprint for strategic thinking? Because reacting feels easier than planning. But that shortcut costs time, clarity, and purpose. It’s easier to critique a sentence than to commit to a message. Yet regulatory writing demands something more deliberate: the “do.”

The “do” is the document’s core function—what you want the regulatory reader to understand, agree with, or act on. If that function isn’t clear, you’re not writing strategically.

You’re just filling pages.

When Draft 1 drives the thinking, documents lose purpose—and teams lose time. Real thinking often begins only after Draft 1 is written. By then, the window for deliberate planning has already narrowed.

I raised this point with my consulting partner, Stephen Bernhardt. We discussed creating a heuristic tool to guide early drafting. I pushed back: “That assumes teams are willing to think hard before they write.” In my experience, they are not. Most treat Draft 1 as the start of thinking—not the result of it.

Medical writers routinely build shells—basic outlines mistakenly called “prototypes” (a separate issue I’ll explore in a future article). These pseudo-prototypes often get ignored.

Why?
  • They feel incomplete: Skeleton drafts with headings or placeholder bullets don’t trigger urgency. Teams say, “This feels like an outline—we’ll get to it later.”
  • They feel premature: Teams often conflate writing with thinking. They wait to engage until the draft sounds polished. Prototypes, by contrast, demand early decisions.
  • They feel threatening: A good prototype raises hard questions: What’s the message? What’s the structure? What are we trying to prove? These questions shift stakeholders from passive reviewers to active contributors—before they feel ready.
Prototype reviews often fail because no one makes a formal ask. No guided questions. No consequences for silence. The path of least resistance is to do nothing: “I’ll wait for the full draft.”

Many teams also delay message planning until the full data set is locked. But by then, it's too late to shape the narrative—only to react to it.

Another problem: if leadership doesn’t engage, neither will the rest of the team. Reviewers take their cue from the top.

The real issue? Teams skip the pre-draft choices that matter most: defining the document’s purpose, aligning on its message, and designing its logic. Teams avoid thinking about information design—the deliberate structuring of content to make key messages discoverable, logically sequenced, and aligned with the reader’s task.

Instead of planning, teams treat the first draft as a mirror—using it to react rather than to think.

The Problem: Drafting Replaces Thinking
When Draft 1 stands in for strategy, documents suffer. People drop into surface-level edits. They tweak sentences, debate word choices, and micromanage formatting—without questioning the logic beneath it. Few ask: Should this section be saying these things in this way? Fewer still ask: What is this section supposed to do for the regulatory reader? Most cannot answer those questions. I know—because it’s the first thing I ask in every workshop I run.

In this dynamic, writing becomes the trigger for strategy, rather than the output of it. When teams treat the draft as a heuristic for meaning, it signals deeper problems:
  • Avoidance of upstream decisions: Editing words is easier than confronting uncertainty about message, logic, or evidence.
  • Lack of document function clarity: Teams assume documents are meant to “report.” Report what? Data? A document should argue, justify, or demonstrate—not simply display.
  • Mistaking writing for thinking: Drafting is execution. The strategy should come first.
This habit isn’t just inefficient—it’s risky. It leads to documents that:
  • Drift from purpose because none was defined
  • Respond to language rather than logic
  • Dilute the message under the illusion of refinement
By the time the team starts asking the right questions, the draft has already become an anchor. Once written, inertia sets in. The document looks finished—even though the thinking isn’t. That’s when teams start rearranging deck chairs on the Titanic—refining detail while ignoring structural failure.

A Better Way Forward
The fix is simple—but not easy: stop letting Draft 1 lead the thinking.
Instead:
  • Define what the document must do—not just what it must say. Start with function.
  • Plan the message before writing. What’s the claim? What’s the evidence? What’s the logic?
  • Use Draft 1 for execution—not exploration.
  • Tackle the real questions early.

Closing Thought

When documents are drafted before critical thinking is complete, teams end up managing words—rather than shaping arguments. Regulatory reviewers can feel the difference. I know this from years of feedback and firsthand conversations.

Before writing a single sentence, ask:
  • What do we want the regulatory reader to understand or do—because of this document?
If the answer isn’t clear, you’re not ready to draft.

Writing without purpose isn’t progress. It’s risk.

Why Regulatory Writers Need to Prototype Smarter

7/29/2025

Drug development regulatory submission documents are among the most structured documents in existence—and yet they are often developed in the most chaotic way. As I mentioned in my previous article, teams wait too long to think, treating Draft 1 as the beginning rather than the outcome of deliberate information design planning.

Instead of aligning early on logic, purpose, and reader needs, teams begin reacting to prose—often in the form of a partial draft or outline labeled as a “prototype.” The result? Wasted cycles, misaligned arguments, poor document flow, and late-stage rework that drains time and undermines confidence—especially in mission-critical documents like Clinical Study Reports and the eCTD Module 2.5 Clinical Overview.

I suggested in my previous article that development teams need to reconsider what they are generating as document prototypes. Most teams mistake an outline for a prototype. But there’s a key difference:
  • An outline shows you what sections and details to fill in.
  • A prototype shows you how the thinking should unfold.
Outlines help establish the physical architecture of a document. What really counts, however, is the establishment of the intellectual architecture of the document: the “So what?” and the “Why?” that are at the core of evaluating, justifying, and arguing.

This is the foundation of regulatory writing. After all, the purpose of a submission is not to describe—it is to evaluate, justify, and argue. Those three writing actions sit at the heart of how health authorities assess benefit–risk.

From “Primitive Forms” to Design Thinking
The word prototype comes from the Greek prototypon—meaning “primitive form” or “original model.” Historically, the concept of building rough models or mock-ups has existed for centuries. However, it was not until the late 1960s that prototyping emerged as a recognized approach in document information design.

In software, prototyping gained traction in the early 1970s as an explicit strategy to manage evolving requirements. Winston Royce—often miscredited with creating the Waterfall model—emphasized the importance of incorporating prototyping within code writing to reduce risk. By the 1990s, “document prototyping” became mainstream in many industries that demanded logic and traceability.

Prototyping is not about language and shell tables. It is about early validation of logic, flow, and alignment, a concept that is either poorly understood or undervalued in many Pharma houses.

Prototyping as Structured Thinking
A real prototype is not a template filled in with placeholder XX, Y%, and p-value of 00.0X. A prototype is the framework for judgment. A prototype helps teams clarify what each section must do—not just what it must include.

One powerful tool to use is a prototype planning matrix, which breaks down each section into five core elements:

[Figure: Heuristic Matrix]

This is the logic scaffold that the decision-making regulatory readers are looking for.

Let's walk through an example:

[Figure: worked example of the prototype planning matrix]

Keep in mind the “So what?”
The message is not a study endpoint. The message is a decision in sentence form. The data support that decision, but the message must guide how the data are presented and evaluated.

Teams tell me all the time, “it is all about the data.” I get it. It is indeed about the data. So prototyping is the time to make the data needs visible. Where do you need a table, figure, or reference? Too many teams wait until late in the document lifecycle to consider what tables or figures will help carry their argument in a Clinical Overview.

Writing Actions and Functional Thinking
In my approach, each part of the prototype needs an associated thinking task. What does this section “do” for the reader? It is not about the reporting; it is about the doing:
  • Analyzing variability, relationships, trends
  • Evaluating strength of evidence
  • Justifying choices and interpretations
  • Synthesizing findings across studies
  • Arguing a benefit–risk conclusion
A prototype needs to be centered on function—it is not about what the team wants to say. It’s about what the document must accomplish. That means designing—yes, designing—documents that are fit for function.

A strong prototype exposes reasoning, not just structure. It shows how the argument flows, where the evidence lands, and whether the logic holds—before prose locks your thinking into place.

Also published to LinkedIn: https://www.linkedin.com/pulse/why-regulatory-writers-need-prototype-smarter-gregory-cuppan-solmc

When Elegant Writing Masks Flawed Thinking in Regulatory Documents

7/2/2025

In regulatory writing, fluency is often praised. To a fault, I might add. Documents that “read well” are seen as polished, professional, and persuasive. But fluency can be deceptive. A troubling phenomenon is increasingly visible across high-stakes submission documents that I can access: writing that is elegant, but logically unsound.

The Problem: Elegant Nonsense
These are documents—often Clinical Overviews, Briefing and AdComm Books, or 2.7 Summaries—where the language flows, the sentences vary in rhythm and length, and the vocabulary sounds convincing. But when you interrogate the logic underneath, something’s missing. There is no connective reasoning. The claims float. The evidence is absent, circular, or only loosely linked.

I call this elegant nonsense—writing that sounds intelligent but lacks logical structure or evidentiary support.

Examples of Elegant but Hollow Writing
  • Circular Conclusions:
    “The results demonstrate strong efficacy, which is consistent with the robust outcomes observed.”
    (No new idea. The sentence loops.)
  • Vague Synthesis Disguised as Insight:
    “Taken together, the data provide a compelling case for a favorable benefit-risk profile.”
    (Taken together how? What trade-offs? What reasoning?)
  • Hedged Fluency:
    “The observed improvements, while preliminary, suggest a potentially meaningful clinical impact in selected populations.”
    (So… is there impact, or not? “Suggest” is a hedge added to another hedge, “potentially.” Is there any meat on the bone?)
  • Precision Without Purpose:
    “Statistically significant improvements were observed across all measured endpoints (p < 0.05).”
    (And the relevance is…? Statistical significance is not a conclusion—what does this observation mean clinically?)
  • Inference by Adjective:
    “In the largest study ever conducted in this patient population, these robust results demonstrate this novel therapeutic treatment offers a promising approach in a challenging disease setting.”
    (All the right words—none of the actual thinking. Largest study does not equate to an appropriate and well-controlled study. What makes the results ‘robust’? ‘Novel’ and ‘promising’ are evaluative fillers. What evidence supports this promise? What challenge is being addressed?)
  • Fluent Reporting without Interpretation
    “Over a median duration of follow-up for the primary endpoint of 4.7 and 4.5 years, respectively, a primary endpoint event occurred in 17.2% (705/4089) of patients in the icosapent ethyl group, as compared to 22.0% (901/4090) of patients in the placebo group over the median 4.9 year follow-up (HR of 0.752 [95% CI: 0.682 to 0.830; p=0.00000001]; RRR of 24.8%; absolute risk reduction [ARR] of 4.8%; and number needed to treat [NNT] of 21). Thus, the primary endpoint was met, demonstrating a substantial and statistically significant lower risk of major adverse CV events with icosapent ethyl than with placebo.”
    (The passage piles up quantitative data but never explains why these results matter or how to interpret the magnitude of benefit. “Substantial and statistically significant” is a stylistic conclusion. What does 'substantial' mean in this clinical context? Substantial compared to what? For whom? At what cost or risk?)
These examples are not rare—they are systemic. But what makes this kind of writing so persistent in high-stakes submission documents?

Why This Happens
  • Over-reliance on polished templates or legacy phrasing
  • Desire to “sound smart” in high-stakes documents
  • Misunderstanding fluency as a proxy for rigor
  • Fear of being too direct, especially with uncertain or marginal data
But the result is the same—writing that passes the eye test but fails the decision test.

Why It Is Dangerous
Regulatory readers are trained to look beyond the language. They must detect bias, weigh risk, evaluate strength of evidence, and make yes/no decisions under pressure. When documents rely on polished language to mask weak thinking, reviewers lose confidence—not just in the section, but in the entire argument. This is the affective level of information design, as suggested by Saul Carliner.

Carliner noted that when writing feels effusive (demonstrative, lavish) or evasive, then the writing erodes the reader’s trust—not only in what is stated, but in why it is stated that way. In regulatory contexts, that erosion is consequential: it casts doubt on the sponsor’s judgment and forces the reviewer to work harder to separate signal from gloss.

Elegant writing that lacks rigor is not just ineffective—it can be misleading.

How to Spot and Fix It
Interrogate every conclusion
– What is this based on? Where’s the data? Is the reasoning clear?

Replace vague synthesis with structured logic
– Instead of “Taken together,” show how the pieces fit.

Simplify to clarify
– If the sentence reads like corporate poetry, ask: is it hiding uncertainty?

Separate fluency from function
– Does this paragraph sound good, or does it do its job?

Consider the phrase: “This study is adequate and well-controlled.”
The phrase appears routinely in regulatory documents—concise, confident, and aligned with regulatory guidance language. But unless the concepts are substantiated, this sentence is nothing more than elegant shorthand.

Why It Sounds Smart
It mimics the regulatory lexicon (such as 21 CFR 314.126).
It conveys confidence in the study's design.

It’s often used to bridge to conclusions about efficacy or labeling claims.

Why It May Be Elegant Nonsense
If the surrounding text contains no analysis of the “whys” for:
  • control group selection
  • blinding or randomization method
  • endpoint appropriateness
  • sample size justification
  • protocol amendments
  • protocol deviations
  • participant heterogeneity
Then the phrase is merely a performative placeholder—an assertion dressed up as evidence. The writing must move from elegance to substance.

Another Example: “The Safety Profile Was Manageable”
This phrase shows up in nearly every Clinical Overview and Summary of Clinical Safety I read. The phrase suggests confidence, but what does it actually mean?

Why It Sounds Smart

It’s concise and optimistic.
It implies clinical actionability—something prescribers and regulators care about.

Why It May Be Elegant Nonsense
“Manageable” sidesteps reasoning. The term offers a verdict without evidence. I suggest that without explaining what was managed and how, this phrase is:
  • Subjective (manageable for whom? under what conditions?)
  • Empty (not tied to severity, reversibility, or impact)
  • Misleading (may mask dose holidays or reductions)
The phrase has become a narrative crutch, offering the appearance of interpretation while avoiding the informative work. “Manageable” carries a positive emotional valence (“feel good factor”) without conveying analytical substance. It is language designed to reassure, not inform.

Bottom Line
Polished phrases like “adequate and well-controlled” or “manageable safety profile” only serve the reader when tethered to evidence and logic. Elegant language that obscures complexity—or avoids specifics—undermines trust in the message. In regulatory writing, clarity must be earned. You make a claim, you better prove it.

A Better Standard
The goal of regulatory writing is not to sound polished. The goal is to communicate rationales for interpretations and decisions—clearly, truthfully, and logically.

Regulatory decisions shape public health. Regulatory reviewers must decide under pressure, with limited time. Our job as writers is not just to report and write elegant prose—but to demonstrate thinking on the page.

Writing that hides uncertainty or skips reasoning does not just fail the reader. It fails the process.
  • Let elegance serve clarity.
  • Let language serve logic.
  • Let writing serve decisions.

Also published to LinkedIn July 2, 2025. https://www.linkedin.com/pulse/when-elegant-writing-masks-flawed-thinking-regulatory-gregory-cuppan-qmmrc

    Author

    Gregory Cuppan is the Managing Principal of McCulley/Cuppan Inc., a group he co-founded. Mr. Cuppan has spent 30+ years working in the life sciences with 20+ years providing consulting and training services to pharmaceutical and medical device companies and other life science enterprises.

