MILITARY DOCTRINE APPLIED TO GTM KINDA SLAPS

Jan 23, 2025



Defense in Depth is a strategy of layered defense, with each layer meant to delay and slow the enemy's advance rather than stop it outright - it's based on the fact that advances lose momentum over time.


Basically you absorb until the advance can't sustain itself. Each layer covers for the layers before and after it, mitigating their failures. Think of it as covering your bases.


How does this apply to GTM? I've been working on a research and discovery process I'm now calling:


Discovery in Depth


I mean, sure, others might have similar approaches, but I haven't seen them, nor have I heard a cool name for one, so here it is lol


What is Discovery in Depth? It's an approach that:

Fully leverages the power of AI-driven web research while mitigating the risks of hallucinations, false positives, and false negatives (that last part is critical for producing actionable intel for GTM motions).

This is accomplished by:

  • establishing strong data provenance requirements so that provenance is built in from the start (data provenance is just the ability to trace back exactly where your data points came from and how they've been transformed at each step of a workflow)

  • constraining model research prompts to a specific step-by-step process and tightly constraining output to a specific JSON schema

  • requiring each data point to have a source URL, a confidence score, and an exact quote extracted from the source (a minimal schema sketch is below)
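
To make that concrete, here's a minimal sketch of what such a schema could look like (the field names and the use of the `jsonschema` library are illustrative, not a specific tool this post prescribes):

```python
# Minimal sketch: a JSON Schema for a single research finding.
# Field names (field, value, source_url, quote, confidence) are illustrative.
from jsonschema import validate  # pip install jsonschema

FINDING_SCHEMA = {
    "type": "object",
    "properties": {
        "field": {"type": "string"},        # what was researched, e.g. "headcount"
        "value": {"type": "string"},        # the extracted answer
        "source_url": {"type": "string"},   # where it came from (provenance)
        "quote": {"type": "string", "minLength": 1},  # exact quote backing the value
        "confidence": {"type": "number", "minimum": 0, "maximum": 1},
    },
    "required": ["field", "value", "source_url", "quote", "confidence"],
    "additionalProperties": False,
}

# Any model output missing provenance or a confidence score gets rejected outright.
finding = {
    "field": "headcount",
    "value": "200-500",
    "source_url": "https://example.com/about",
    "quote": "Our team of 350 people across three offices...",
    "confidence": 0.8,
}
validate(instance=finding, schema=FINDING_SCHEMA)  # raises ValidationError if malformed
```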


We need to do this because:

  • Separate from AI hallucination, you will never be able to pull in 100% perfect, error-free, 100%-confidence data

  • Too many workflows segment and score off a single data point or a handful of them, assuming exactly the opposite of the above. When you inevitably hit a data gap, or even worse bad data, your workflow is fucked

  • On top of all of this, LLMs still hallucinate - especially when you give them complex data with niche requirements that push up against edge cases


So the solution is an approach that:

  1. Mitigates false positives through multiple layers of validation/context

  2. Doesn't make any single data point mission-critical

  3. Builds a holistic "anthropological" picture through multiple contextual layers - making it so that even if a specific data point is wrong, the whole picture of your contact/account is directionally correct

  4. Uses source attribution and confidence scores to weight/discount data points

Each piece of intelligence/research is treated with appropriate skepticism and weighted based on its provenance - similar to how defense in depth is designed with the assumption that any single defensive position might fail.
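
One way to act on that weighting (a rough sketch, not a prescribed implementation - the signal names and numbers here are made up):

```python
# Sketch: confidence-weighted aggregation so no single finding is mission-critical.
from typing import Dict, List


def weighted_fit_score(findings: List[Dict]) -> float:
    """Blend many small signals into one directional score.

    Each finding carries its own confidence, so a wrong or missing data
    point only nudges the total instead of breaking the whole workflow.
    """
    numerator = 0.0
    denominator = 0.0
    for f in findings:
        signal = f.get("signal_strength", 0.0)  # 0..1, how strongly it suggests fit
        confidence = f.get("confidence", 0.0)   # 0..1, from the research step
        numerator += signal * confidence
        denominator += confidence
    return numerator / denominator if denominator else 0.0


score = weighted_fit_score([
    {"field": "hiring_sdrs", "signal_strength": 0.9, "confidence": 0.8},
    {"field": "uses_salesforce", "signal_strength": 0.6, "confidence": 0.4},
    {"field": "recent_funding", "signal_strength": 0.7, "confidence": 0.9},
])
print(round(score, 2))  # directionally correct even if one of these points is wrong
```

Low-confidence points still contribute, they just count for less - which is the whole weight/discount idea.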


So the end playbook for Discovery in Depth:

  1. Task/Process Constraints

  • Breaking research into highly specific, bounded tasks

  • Detailed step-by-step prompting

  • Enforced JSON schema for outputs

  • Replicable processes that can scale

  2. Data Attribution Layer

  • Each data point has provenance tracking

  • Confidence scoring is built into the schema

  • Creates clear paths for human verification when needed

  • Enables downstream weighting/processing based on confidence scores
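
Put together, here's one hypothetical way a single bounded task could look (the `call_model` function and the prompt wording are placeholders for whatever research backend you use, not a specific API):

```python
# Sketch of one bounded research task: step-by-step prompt, JSON-only output,
# schema validation, and a human-verification path for low-confidence results.
import json

from jsonschema import ValidationError, validate  # pip install jsonschema

PROMPT_TEMPLATE = """You are researching exactly one question about {company}.
Follow these steps in order:
1. Look only at the company's own website and recent press coverage.
2. Answer this question: {question}
3. Return ONLY a JSON object with keys: field, value, source_url, quote, confidence.
Do not add any commentary outside the JSON."""

FINDING_SCHEMA = {
    "type": "object",
    "required": ["field", "value", "source_url", "quote", "confidence"],
    "properties": {"confidence": {"type": "number", "minimum": 0, "maximum": 1}},
}


def call_model(prompt: str) -> str:
    """Placeholder for whatever LLM / web-research backend you plug in."""
    raise NotImplementedError


def run_research_task(company: str, question: str, min_confidence: float = 0.6) -> dict:
    raw = call_model(PROMPT_TEMPLATE.format(company=company, question=question))
    try:
        finding = json.loads(raw)
        validate(instance=finding, schema=FINDING_SCHEMA)
    except (json.JSONDecodeError, ValidationError):
        return {"status": "rejected", "reason": "output broke the schema"}
    if finding["confidence"] < min_confidence:
        return {"status": "needs_human_review", "finding": finding}  # clear verification path
    return {"status": "accepted", "finding": finding}
```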

It's like each research "control point" has:

  • Clear boundaries (what it can/cannot do)

  • Standard output format

  • Built-in provenance

  • Confidence metrics
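
If it helps to picture that as one object, here's a hypothetical way to bundle those four properties (the field names are illustrative, not from any specific framework):

```python
# Sketch: a research "control point" as a plain dataclass.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ControlPoint:
    name: str                   # e.g. "find_pricing_model"
    allowed_sources: List[str]  # clear boundaries: where this task may look
    output_schema: Dict = field(default_factory=dict)  # standard output format (JSON Schema)
    require_quote: bool = True  # built-in provenance: exact quote + source URL
    min_confidence: float = 0.6 # confidence gate before a finding is auto-accepted
```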
