Analysis Methodology

Reasoning frameworks used by the analysis system. Each skill defines a structured method that produces analysis outputs.

Fundamental Analysis


Purpose

Analyze the fundamental supply-demand picture for an asset by consuming event data and producing a structured factor decomposition with influence weights and directional synthesis.

Input Spec

| Event Type | Filter | Purpose |
| --- | --- | --- |
| `geopolitical` | Asset-relevant events | Supply/demand disruptions, risk premium |
| `policy` | Asset-relevant decisions | Production quotas, monetary policy, sanctions |
| `economic-release` | Macro indicators | Demand signals (GDP, PMI, employment) |
| `inventory` | Asset inventories | Physical supply-demand balance |
| `positioning` | Asset positioning | Sentiment, crowding risk |

Time range: Current date + lookback as needed for context (typically 1-4 weeks for active situations).

Asset filter: Load the asset skill from `assets/{asset}/SKILL.md` to understand which events are relevant.

Method

Step 1: Gather supply events → `supply-factors.md`

1. Read events from `geopolitical/`, `policy/`, `inventory/` that affect supply
2. For each, assess: what is the supply impact, how large, how long
3. Classify as: disruption, constraint, expansion, normalization
4. Write to `supply-factors.md` using the intermediate schema below

Step 2: Gather demand events → `demand-factors.md`

1. Read events from `economic-release/`, `policy/`, `geopolitical/` that affect demand
2. For each, assess: what is the demand impact, how large, how long
3. Classify as: growth, contraction, substitution, structural-shift
4. Write to `demand-factors.md` using the intermediate schema below

Step 3: Decompose key factors → `sub-factors.md`

1. For each high-significance supply or demand factor, decompose into verifiable sub-factors
2. Each sub-factor is a specific, checkable claim with a credibility type: `action` > `capability` > `deployment` > `constraint` > `precedent` > `intention`
3. Assess whether the parent factor's rating is supported by its sub-factors
4. Write to `sub-factors.md` using the intermediate schema below

Step 4: Weight influence → `influence-weights.md`

1. Assign influence weight (%) to each factor based on its current significance to price
2. Weights must sum to 100%
3. For each weight >= 10%, provide explicit rationale
4. Write to `influence-weights.md` using the intermediate schema below
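The two hard constraints in this step (weights sum to 100%, rationale required at >= 10%) are mechanical and worth checking before the file is written. A minimal sketch — the `factors` array shape and field names (`weightPct`, `rationale`) are illustrative assumptions, not part of the schema:

```javascript
// Sketch: validate an influence-weights table before writing
// influence-weights.md. Field names are illustrative assumptions.
function validateWeights(factors, tolerance = 0.5) {
  const total = factors.reduce((sum, f) => sum + f.weightPct, 0);
  const errors = [];
  if (Math.abs(total - 100) > tolerance) {
    errors.push(`weights sum to ${total}%, expected 100%`);
  }
  for (const f of factors) {
    if (f.weightPct >= 10 && !f.rationale) {
      errors.push(`factor "${f.name}" has weight >= 10% but no rationale`);
    }
  }
  return { ok: errors.length === 0, errors };
}
```

A failed check should send the analysis back to re-weight rather than silently normalizing, since the rationale requirement is about reasoning, not arithmetic.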

Step 5: Synthesize → `result.md`

1. Using only the supply-factors, demand-factors, sub-factors, and influence-weights produced above
2. Form a directional view: overall bias (bullish/bearish/neutral), confidence, time horizon
3. Define 2-3 scenarios (base/bull/bear) with probabilities summing to ~100%
4. Identify key risks that would change the view
5. List monitoring priorities
6. Write to `result.md` using the output schema below
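The "~100%" constraint on scenario probabilities can be sanity-checked the same way as the weights; a sketch, with the `probabilityPct` field name and 5-point tolerance as assumptions:

```javascript
// Sketch: check that base/bull/bear scenario probabilities sum to
// roughly 100% before writing result.md. Tolerance is an assumption.
function scenariosSumOk(scenarios, tolerance = 5) {
  const total = scenarios.reduce((s, x) => s + x.probabilityPct, 0);
  return Math.abs(total - 100) <= tolerance;
}
```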

Intermediate Schemas

supply-factors.md

# Supply Factors — {asset} — {date}

Columns:

demand-factors.md

# Demand Factors — {asset} — {date}

Columns:

sub-factors.md

# Sub-Factors — {asset} — {date}

Columns:

influence-weights.md

# Influence Weights — {asset} — {date}

Columns:

Output Schema

result.md

# Fundamental Analysis Result — {asset} — {date}

## Temporal Validity

| Field | Value |
| --- | --- |
| Created | {ISO 8601 datetime, e.g., 2026-03-11T14:30:00Z} |
| Last Validated | {ISO 8601 datetime — same as Created initially, updated on re-runs} |
| Valid From | {YYYY-MM-DD — analysis date} |
| Valid To | {YYYY-MM-DD — Valid From + shorter end of Time Horizon from Overall View} |
| Trading Days | {business days between Valid From and Valid To} |
| Calendar Days | {total days between Valid From and Valid To} |
| Data Window | {earliest event date consumed} to {latest event date consumed} |
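The Trading Days and Calendar Days fields can be derived mechanically from Valid From and Valid To. A sketch, counting Valid From inclusive up to (but excluding) Valid To and treating only weekends as non-trading days — holiday calendars are out of scope here:

```javascript
// Sketch: derive Calendar Days and Trading Days from Valid From /
// Valid To (YYYY-MM-DD strings). Weekend-only; no holiday calendar.
function calendarDays(from, to) {
  const MS_PER_DAY = 24 * 60 * 60 * 1000;
  return Math.round((new Date(to) - new Date(from)) / MS_PER_DAY);
}

function tradingDays(from, to) {
  let count = 0;
  const d = new Date(from);
  const end = new Date(to);
  while (d < end) {
    const day = d.getUTCDay();
    if (day !== 0 && day !== 6) count++; // skip Sunday (0) and Saturday (6)
    d.setUTCDate(d.getUTCDate() + 1);
  }
  return count;
}
```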

## Overall View

| Field | Value |
| --- | --- |
| Bias | bullish / bearish / neutral |
| Confidence | high / medium / low |
| Time Horizon | period this view applies to |
| Key Driver | most important factor |
| Key Risk | biggest threat to the view |

## Scenarios

## Risks to View

## Monitoring Priorities

## Change Log

Cross-references

Technical Analysis


Purpose

Analyze price action structure to identify who is trading, how price moves, and what patterns emerge. Operates purely on price-ohlcv event data.

Input Spec

| Event Type | Filter | Purpose |
| --- | --- | --- |
| `price-ohlcv` | Asset instrument, multiple timeframes | Raw price data |

Instruments: Use Oanda format (e.g., `BCO_USD`, `XAU_USD`). Timeframes: H1/H4 for intraday structure, D for swing, W for structural context. Time range: Typically 20-60 candles at each timeframe.

Method

Step 1: Analyze price velocity → `velocity.md`

1. Read price-ohlcv candles at H1 or H4 timeframe
2. Calculate velocity of each move (price change / time)
3. Classify moves: fast-up, slow-up, fast-down, slow-down
4. Calculate velocity ratio (avg upward velocity / avg downward velocity)
5. Determine dominant participant type from velocity signature:

6. Write to `velocity.md` using intermediate schema below
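The velocity arithmetic in steps 2-4 can be sketched over close-to-close moves. The candle shape (`{ close }`) and the `hoursPerCandle` parameter are assumptions for illustration; the spec does not fix how moves are segmented:

```javascript
// Sketch: velocity metrics from OHLCV candles (close-to-close moves).
// Candle shape and hoursPerCandle are assumptions, not spec.
function velocityMetrics(candles, hoursPerCandle) {
  const up = [];
  const down = [];
  for (let i = 1; i < candles.length; i++) {
    const v = (candles[i].close - candles[i - 1].close) / hoursPerCandle;
    if (v > 0) up.push(v);
    else if (v < 0) down.push(-v);
  }
  const avg = (xs) => (xs.length ? xs.reduce((a, b) => a + b, 0) / xs.length : 0);
  const avgUp = avg(up);
  const avgDown = avg(down);
  return {
    avgUpwardVelocity: avgUp,     // $/hr
    avgDownwardVelocity: avgDown, // $/hr
    velocityRatio: avgDown ? avgUp / avgDown : Infinity,
  };
}
```

A ratio well above 1 means upward moves travel faster than downward ones, which feeds the participant-type inference in step 5.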

Step 2: Infer participants → `participants.md`

1. Using velocity analysis from Step 1
2. Read positioning events from `events/positioning/` (if available, for cross-check only)
3. Identify participant cohorts: institutional, commercial hedger, speculative, retail
4. For each cohort: inferred position (long/short/flat), confidence, evidence
5. Write to `participants.md` using intermediate schema below

Step 3: Identify patterns → `patterns.md`

1. Read daily and weekly price-ohlcv candles
2. Identify key levels: support, resistance, breakout/breakdown levels
3. Identify price patterns: trend, range, consolidation, reversal
4. Assess pattern reliability and expected resolution
5. Write to `patterns.md` using intermediate schema below

Step 4: Synthesize → `result.md`

1. Combine velocity, participant, and pattern analysis
2. Form a structural view: who controls price, what pattern dominates, expected behavior
3. Define price targets from pattern analysis
4. Estimate convergence speed from velocity regime
5. Write to `result.md` using output schema below

Intermediate Schemas

velocity.md

# Velocity Analysis — {instrument} — {date}

## Parameters

| Field | Value |
| --- | --- |
| Instrument | {instrument} |
| Timeframe | {timeframe used} |
| Period | {date range} |
| Candles Analyzed | {count} |

## Velocity Metrics

| metric | value |
| --- | --- |
| avg_upward_velocity | $/hr or %/hr |
| avg_downward_velocity | $/hr or %/hr |
| velocity_ratio | decimal |
| fast_move_threshold | $/hr |
| slow_move_threshold | $/hr |
| fast_up_pct | % of up moves that are fast |
| slow_up_pct | % of up moves that are slow |
| fast_down_pct | % of down moves that are fast |
| slow_down_pct | % of down moves that are slow |

## Interpretation

{LLM narrative: what the velocity signature tells us about who is driving price}

participants.md

# Participant Inference — {instrument} — {date}

Columns:

patterns.md

# Price Patterns — {instrument} — {date}

## Temporal Validity

| Field | Value |
| --- | --- |
| Created | {ISO 8601 datetime} |
| Last Validated | {ISO 8601 datetime — same as Created initially} |
| Valid From | {YYYY-MM-DD — analysis date} |
| Valid To | {YYYY-MM-DD — Valid From + shorter end of shortest active pattern timeframe} |
| Trading Days | {business days between Valid From and Valid To} |
| Calendar Days | {total days between Valid From and Valid To} |
| Data Window | {earliest candle date consumed} to {latest candle date consumed} |

## Key Levels

## Patterns

Columns (Key Levels):

Columns (Patterns):

Output Schema

result.md

# Technical Analysis Result — {instrument} — {date}

## Temporal Validity

| Field | Value |
| --- | --- |
| Created | {ISO 8601 datetime, e.g., 2026-03-11T15:00:00Z} |
| Last Validated | {ISO 8601 datetime — same as Created initially, updated on re-runs} |
| Valid From | {YYYY-MM-DD — analysis date} |
| Valid To | {YYYY-MM-DD — Valid From + shorter end of convergence/target timeframes} |
| Trading Days | {business days between Valid From and Valid To} |
| Calendar Days | {total days between Valid From and Valid To} |
| Data Window | {earliest candle date consumed} to {latest candle date consumed} |

## Structural View

| Field | Value |
| --- | --- |
| Dominant Participant | who controls price action |
| Price Regime | trending / ranging / volatile / compressing |
| Velocity Regime | fast-trending / slow-trending / mean-reverting / choppy |
| Bias | bullish / bearish / neutral |
| Confidence | high / medium / low |

## Price Targets

## Convergence Estimate

| Field | Value |
| --- | --- |
| Current Price | |
| Target | |
| Estimated Time | |
| Velocity Regime | |
| Participant Phase | |

## Change Log

Cross-references

Regime Analysis


Purpose

Model product-specific, repeatable price behavior patterns (regimes) to generate forward-looking price paths and dynamically score them against actual price action. Unlike fundamental and technical analysis which are domain-general reasoning processes, regime analysis leverages asset-specific historical behavior archetypes to produce a regime-based price target for PF integration.

Input Spec

Required Intermediates

From `analyses/{asset}/{date}/`:

| File | Key Data Used |
| --- | --- |
| `fundamental/result.md` | Active fundamental drivers, scenarios, overall bias |
| `technical/velocity.md` | Current velocity regime, momentum metrics |
| `technical/patterns.md` | Key levels, support/resistance, active patterns |
| `technical/result.md` | Structural view, price targets |

Regime Archetypes

From `assets/{asset}/regimes/`:

Read ALL `.md` files in the directory. Each file describes a repeatable price behavior archetype with:

Asset Context

Read `assets/{asset}/SKILL.md` if it exists, to obtain:

Price Data

Fetch current price and recent candles from the Oanda API:

Archetype Generation Mode

When to use: Run this mode when `assets/{asset}/regimes/` does not exist or contains zero `.md` files. This generates initial regime archetypes from the asset's discovered market structure and fundamental context, enabling regime analysis to run on new assets.

Generation Inputs

| Source | File | Data Used |
| --- | --- | --- |
| Market structure | `assets/{asset}/fingerprint.md` | Phase library — the discovered behavioral vocabulary (phase names, scales, descriptions) |
| Market structure | `assets/{asset}/transitions.md` | Transition matrix — how phases sequence, common multi-phase paths, anomaly detection |
| Fundamental | `analyses/{asset}/{date}/fundamental/result.md` | Active drivers, scenarios, key risks — what moves this asset |
| Asset context | `assets/{asset}/SKILL.md` | Instrument, asset class, key facts, structural characteristics |

Generation Process

1. Read all 4 input files. The fingerprint provides the phase vocabulary (e.g., `supply-shock-rally`, `correction`, `crisis-crash`). The transitions provide the sequencing rules (which multi-phase paths actually occur). The fundamental result provides the driver context. The asset SKILL.md provides the asset class and structural info.

2. Identify 4 distinct regime narratives by combining:

3. For each archetype, generate the full specification:

a. Signature table — Trigger (fundamental/structural event that activates this regime), Velocity (fast/slow, symmetric/asymmetric, using the fingerprint's scale metrics), Duration (mapped from fingerprint's `avg_duration_months`), Frequency (how often this pattern occurs, from fingerprint instance counts and domain knowledge)

b. Phases table — Each row describes a narrative stage of the regime: numbered phase name (descriptive, e.g., "1. Trigger Shock", "2. Panic Buying"), description (asset-specific context), duration (informed by fingerprint's `avg_duration_months` for corresponding behavioral phases), price_action (specific to this regime context), key_signal (what confirms transition to next phase). A regime typically has 3-6 phases. Use the fingerprint's scale metrics (price_pct_scale, duration_scale) to calibrate velocity and duration, but phase names are narrative — they describe the regime's story, not the fingerprint's behavioral categories.

c. Historical Examples table — Use your domain knowledge to provide 2-4 real historical instances with: date_range, trigger event, entry price, peak/trough, resolution, duration. These should be genuine historical episodes — use real dates and approximate price levels from your training data.

d. Resolution Patterns — 2-4 paragraphs describing how this regime typically ends. Include transition signals that indicate the regime is concluding. Reference the transition matrix probabilities where relevant.

e. Participant Behavior — 3-5 paragraphs covering how different market participants behave during this regime: institutional/commercial, speculative/managed money, retail, options market. This is asset-class-specific domain knowledge.

4. Name each archetype with a descriptive slug (e.g., `supply-shock-breakout`, `rate-driven-selloff`, `safe-haven-flight`). The name should clearly convey the regime narrative.

5. Ensure coverage: The 4 archetypes should collectively cover the major behavioral modes of the asset:

6. Create the directory and write files:

   ```
   mkdir -p assets/{asset}/regimes/
   ```

   Write each archetype to `assets/{asset}/regimes/{archetype-slug}.md`

Archetype File Format

Each file follows this exact structure (matching existing hand-authored archetypes):

```

{Archetype Name}

Signature

| Field | Value |
| --- | --- |
| Trigger | {fundamental/structural event that activates this regime — be specific to the asset} |
| Velocity | {speed and asymmetry description, referencing fingerprint scale metrics} |
| Duration | {typical duration range, derived from fingerprint phase durations in the sequence} |
| Frequency | {how often, derived from fingerprint instance counts and domain knowledge} |

Phases

| phase | description | duration | price_action | key_signal |
| --- | --- | --- | --- | --- |
| 1. {narrative phase name} | {asset-specific description} | {duration} | {specific price behavior} | {confirmation signal} |

Historical Examples

| date_range | trigger | entry | peak/trough | resolution | duration |
| --- | --- | --- | --- | --- | --- |
| {real dates} | {what happened} | {price} | {price} | {how it ended} | {duration} |

Resolution Patterns

{2-4 paragraphs on how this regime ends, with transition signals}

Participant Behavior

{3-5 paragraphs covering institutional, speculative, retail, and options market behavior}
```

Critical Constraints

After generation, proceed to the normal regime analysis method (Steps 1-5) using the newly created archetypes.

---

Method

Step 1: Context Assessment → `context.md`

1. Fetch current price from Oanda API (M1 granularity, count=1)
2. Read recent price-ohlcv events from `events/price-ohlcv/{date}/` for H4 and D timeframes
3. Read `technical/velocity.md` for current velocity regime and momentum
4. Read `technical/patterns.md` for key levels (support, resistance) and active patterns
5. Read `fundamental/result.md` for active drivers, scenarios, and overall bias
6. Identify active triggers by mapping current conditions against archetype trigger criteria
7. Summarize the current market structure context

Output format — write to `analyses/{asset}/{date}/regime/context.md`:

```

Regime Context

Current Price

| Field | Value |
| --- | --- |
| Instrument | {instrument} |
| Price | {current price} |
| Timestamp | {ISO 8601} |
| 5-Day Change | {percent change} |
| 20-Day Change | {percent change} |

Velocity Regime

| Field | Value |
| --- | --- |
| Current Regime | {from velocity.md — trending/ranging/transitioning} |
| Momentum | {direction and strength} |
| Volatility | {high/medium/low relative to recent history} |

Key Levels

| level_type | price | significance | distance_from_current |
| --- | --- | --- | --- |
| resistance | {price} | {high/medium/low} | {+X.X%} |
| support | {price} | {high/medium/low} | {-X.X%} |

Active Fundamental Drivers

| driver | direction | weight | status |
| --- | --- | --- | --- |
| {from result.md key drivers} | {bullish/bearish} | {weight%} | {active/fading/emerging} |

Trigger Assessment

| trigger | present | evidence | relevant_archetypes |
| --- | --- | --- | --- |
| {trigger condition from archetypes} | {yes/no/partial} | {brief evidence} | {archetype names} |

```

Step 2: Archetype Matching → `archetypes.md`

1. Load all archetype files from `assets/{asset}/regimes/`
2. For each archetype, evaluate its trigger conditions against the context from Step 1

[Truncated — full method has 426 lines]


Market Structure Analysis

Purpose

Classify which phase a product's market is in, using product-specific phases discovered from historical price data. Each product has a unique set of phases — their number, characteristics, and transition patterns emerge from the data, not from a generic template.

The skill has two modes:

1. Discovery mode — run once per product to discover phases and build the fingerprint
2. Classification mode — run each analysis cycle to identify the current phase

Input Spec

| Data Source | Filter | Purpose |
| --- | --- | --- |
| `price-ohlcv` | Asset instrument, M/W/D/H4 timeframes | Historical price structure |
| `assets/{product}/fingerprint.md` | If exists | Previously discovered fingerprint (classification mode) |

Instruments: Use Oanda format (e.g., `BCO_USD`, `XAU_USD`). Discovery: Candles for the specified timeframe(s), then validate with adjacent timeframes. Classification: Recent 60 candles at daily + weekly timeframes.

Timeframe Reference

| TF Code | Oanda Granularity | Typical Candles | Typical History |
| --- | --- | --- | --- |
| M | M | 271 | 23 years |
| W | W | 1177 | 23 years |
| D | D | 5000 | 19 years |
| H4 | H4 | 5000 | 3 years |
| H1 | H1 | 5000 | 8 months |
| 30m | M30 | 5000 | 10 weeks |
| 15m | M15 | 5000 | 5 weeks |
| 1m | M1 | 5000 | 1 week |

Mode Selection

1. Check if `assets/{product}/fingerprint.md` exists
2. If NO → run Discovery Mode (Steps D1-D5)
3. If YES → run Classification Mode (Steps C1-C3)

---

Discovery Mode

Step D1: Fetch Historical Data

Input: `{tf}` — the timeframe to discover phases for (e.g., M, W, D, H4, H1, 30m, 15m, 1m).

1. Look up `{tf}` in the Timeframe Reference table to get the Oanda granularity and typical candle count
2. Fetch candles from Oanda API:

3. Save to `events/price-ohlcv/discovery/{instrument}-{tf}.md`
4. For validation (Step D3), also fetch the two adjacent timeframes (one higher, one lower) from the Timeframe Reference table:

Report: "Fetched {N} {tf} candles + adjacent TF data for {instrument}."

Step D2: Discover Phases from {TF} Data → `discovery-{tf}.md`

Using the `{tf}` candle data:

1. For each consecutive sequence of candles, compute:

2. Identify natural breakpoints where price behavior changes character:

3. Group similar segments by their (price%, duration) characteristics
4. Each group = one phase — name it descriptively from its behavior
5. Pick the most common phase as baseline (scale = 1.0 for both parameters)
6. Calculate relative scales for all other phases:

7. Build transition matrix: count how often each phase follows each other phase

Note: If only 2-3 phases are discoverable from the data, that is valid. Do not force more phases than the data supports. Shorter timeframes with less history will naturally find fewer phases.
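Step 7's transition matrix is just a count of ordered phase pairs over the labeled segment sequence. A sketch, assuming the segment labels arrive as an ordered array of phase names:

```javascript
// Sketch: build the phase transition matrix (Step D2.7) from an
// ordered list of segment labels. matrix[from][to] = count of times
// phase `to` directly followed phase `from`.
function transitionCounts(labels) {
  const matrix = {};
  for (let i = 1; i < labels.length; i++) {
    const from = labels[i - 1];
    const to = labels[i];
    matrix[from] = matrix[from] || {};
    matrix[from][to] = (matrix[from][to] || 0) + 1;
  }
  return matrix;
}
```

Dividing each row by its total turns these counts into the transition probabilities referenced later by the regime archetypes.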

Write intermediate file `analyses/{asset}/{date}/market-structure/discovery-{tf}.md`:

# Phase Discovery ({tf}) — {instrument} — {date}

## Parameters

| Field | Value |
| --- | --- |
| Instrument | {instrument} |
| Timeframe | {tf} |
| Period | {start_date} to {end_date} |
| Candles Analyzed | {count} |
| Phases Discovered | {N} |

## Segment Labels

## Phase Summary

## Transition Counts

## Reasoning

{Narrative explaining why these phases were identified and how breakpoints were chosen}

Step D3: Validate on Adjacent Timeframes → `validation-{tf}.md`

Validation uses the two adjacent timeframes from the Timeframe Reference table:

For each adjacent timeframe:

1. Label the adjacent-TF data using the `{tf}` fingerprint:

2. Compute the relative scales at this timeframe
3. Calculate ratio correlation between this timeframe's scales and the discovery timeframe's scales
4. Record results

Validation criteria:

If validation fails: The problem is in the discovery, not the timeframe. Go back to Step D2 and adjust (wider cluster tolerance, different phase count, re-examine breakpoints). Do NOT simply mark the fingerprint as "non-fractal" and move on.
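The ratio correlation in step 3 can be computed as a plain Pearson r over the two scale vectors (one entry per phase). A sketch — treating the scale vectors as aligned numeric arrays is an assumption about how the intermediates are stored:

```javascript
// Sketch: Pearson correlation between a timeframe's phase scales and
// the discovery timeframe's scales (Step D3.3).
function pearson(xs, ys) {
  const n = xs.length;
  const mean = (a) => a.reduce((s, v) => s + v, 0) / n;
  const mx = mean(xs);
  const my = mean(ys);
  let num = 0, dx = 0, dy = 0;
  for (let i = 0; i < n; i++) {
    num += (xs[i] - mx) * (ys[i] - my);
    dx += (xs[i] - mx) ** 2;
    dy += (ys[i] - my) ** 2;
  }
  return num / Math.sqrt(dx * dy);
}
```

An r near 1 means the adjacent timeframe preserves the relative phase proportions, which is what the validation table's `price_r`/`range_r`/`duration_r` columns record.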

Write intermediate file `analyses/{asset}/{date}/market-structure/validation-{tf}.md`:

# Cross-Timeframe Validation ({tf}) — {instrument} — {date}

## Validation Results

| timeframe | phases_found | discovery_match | price_r | range_r | duration_r | status | notes |
| --- | --- | --- | --- | --- | --- | --- | --- |
| {tf} (source) | {N} | baseline | baseline | | | | |
| {adjacent_higher} | {N} | {N/N} | r={X.XX} | r={X.XX} | r={X.XX} | pass/investigate | {notes} |
| {adjacent_lower} | {N} | {N/N} | r={X.XX} | r={X.XX} | r={X.XX} | pass/investigate | {notes} |

## Per-Timeframe Detail

### {adjacent_higher}

{Repeat for adjacent_lower}

## Diagnostic Notes

{If any timeframe failed: which phases were problematic, what was investigated, what was adjusted}

Step D4: Out-of-Sample Test → `out-of-sample-{tf}.md`

1. Using the `{tf}` data, hold out the most recent 20% of candles
2. Discover phases using only the first 80%
3. Classify the held-out 20% using the discovered fingerprint
4. Compare: do phase labels match? Do transition probabilities hold?
5. Calculate classification accuracy

Shallow TF exception: For H1 and below (H1, 30m, 15m, 1m), if total segments < 15, skip the out-of-sample test and note "insufficient data for holdout test — {N} segments found, minimum 15 required" in the output file.
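The split and accuracy arithmetic from the steps above can be sketched as follows; the phase-discovery and classification steps themselves stay with the LLM, so only the mechanical parts are shown:

```javascript
// Sketch: 80/20 chronological holdout (Step D4.1) and classification
// accuracy as a percentage (Step D4.5).
function holdoutSplit(candles, testFraction = 0.2) {
  const split = Math.floor(candles.length * (1 - testFraction));
  return { train: candles.slice(0, split), test: candles.slice(split) };
}

function classificationAccuracy(predicted, actual) {
  let hits = 0;
  for (let i = 0; i < actual.length; i++) {
    if (predicted[i] === actual[i]) hits++;
  }
  return (100 * hits) / actual.length;
}
```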

Write intermediate file `analyses/{asset}/{date}/market-structure/out-of-sample-{tf}.md`:

# Out-of-Sample Validation ({tf}) — {instrument} — {date}

| Field | Value |
| --- | --- |
| Timeframe | {tf} |
| Training Period | {start} to {split_date} |
| Test Period | {split_date} to {end} |
| Training Candles | {N} |
| Test Candles | {N} |
| Classification Accuracy | {X}% |

## Test Period Labels

## Assessment

{Does the fingerprint generalize? Any drift or new behaviors in recent data?}

Step D5: Save Fingerprint and Transitions

If validation passes (≥2 of 3 metrics ≥80% per adjacent timeframe, out-of-sample ≥65% accuracy or skipped for shallow TFs):

File naming:

[Truncated — full method has 485 lines]


Anti-Kelly Analysis

Purpose

Model how a composite operator would engineer price action to reach targets identified by other analysis tracks, maximizing retail pain and minimizing time at favorable prices. Based on Wyckoff composite operator theory.

The composite operator is a useful mental model — not a literal single entity. It represents the aggregate behavior of informed capital (market makers, institutional flow, algorithmic liquidity providers) whose structural advantages (speed, size, information) create predictable manipulation patterns against retail participants.

Pipeline Position

Anti-kelly is a synthesis track that reads from all other analysis tracks (fundamental, technical, regime, market structure) but does NOT feed into any of them. It sits alongside the standard OPP as an adversarial lens.

The trader compares anti-kelly paths against the standard OPP to understand:

Anti-kelly does NOT replace the OPP. It pressure-tests it. If the OPP entry aligns with where smart money is accumulating, conviction increases. If the OPP entry sits at a retail trap zone, the trader adjusts timing or skips the tier.

Input Spec

The anti-kelly script (`scripts/anti-kelly.js`) preprocesses all analysis track outputs into a single `_llm_input.json`. The LLM receives:

| Source | Data | Purpose |
| --- | --- | --- |
| `_llm_input.json` current_price | bid/ask/mid | Reference point for all analysis |
| `_llm_input.json` all_targets | Price targets from all tracks | Destinations smart money drives toward |
| `_llm_input.json` liquidity_pools | Stop clusters computed from S/R levels | Pools smart money must sweep |
| `_llm_input.json` retail_traps | Trap zones from patterns/S/R | Where retail predictably enters |
| `_llm_input.json` wyckoff_templates | 5 manipulation sequence templates | Structure for path construction |
| `_llm_input.json` target_template_map | Which templates apply to each target | Pre-filtered for direction |
| `_llm_input.json` fundamental_summary | Scenarios, overall view | "Fair value" targets |
| `_llm_input.json` technical_summary | S/R, patterns, velocity, participants, targets | Price structure context |
| `_llm_input.json` regime_summary | Paths, result, alignment | Active regime context |
| `_llm_input.json` market_structure_summary | TF stack, escalation | Structural divergences |

Method — Generate

Step 1: Primitive Selection

Given the landscape data, select which manipulation primitives are currently relevant. For each primitive, provide:

Criteria for selection:

Step 2: Path Construction

For each viable target, construct the optimal manipulation sequence. Rules:

For each step in the sequence, specify:

| Field | Description |
| --- | --- |
| Price level | Where this step occurs |
| Duration estimate | From velocity data — how long this phase takes |
| Retail experience | What retail thinks is happening (narrative) |
| Smart money action | What informed capital is actually doing (narrative) |
Example step: "Price drops to $82.50 (2 days). Retail sees breakdown confirmation, sells. Smart money absorbs selling, accumulates at structural support."

Step 3: Probability & Time Profile

Assign probability to each path based on:

Individual path probabilities must be 0-100%. Paths are independent scenarios — total may exceed 100%.

Time profile: Calculate what percentage of total duration is retail-favorable vs smart-money-favorable. In a well-executed manipulation, retail-favorable time (where retail can enter at good prices) is < 20% of total duration. Smart money spends 60-80% of time in accumulation/distribution (appearing directionless) and 20-40% in markup/markdown (fast, decisive moves).

Efficiency: Calculate as `|target_price - current_price| / total_duration_in_days`. Higher efficiency means smart money reaches the target faster per unit time. Compare across paths — the most efficient path is often the most likely, since informed capital minimizes time exposure.
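The efficiency formula as stated is a one-liner; only the function name is an addition:

```javascript
// Efficiency per the spec: |target - current| / total duration in days.
// Higher values mean the path covers more distance per day.
function pathEfficiency(targetPrice, currentPrice, totalDurationDays) {
  return Math.abs(targetPrice - currentPrice) / totalDurationDays;
}
```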

Step 4: Initial Alignment & Result

Assess where current price sits in each path's sequence:

| Criterion | What to evaluate |
| --- | --- |
| Price | Is price at the level predicted for this phase? |
| Velocity | Does current velocity match expected phase velocity (e.g., slow grinding = accumulation)? |
| Volume | Does volume profile match (e.g., decreasing volume in accumulation, spike on spring)? |
| Retail behavior | Are retail participants behaving as predicted (e.g., exiting on shakeout, entering on false breakout)? |

Produce the result summary:

Method — Update

Step U1: Review Mechanical Scores

Review mechanical alignment scores from `_update_input.json`. The script has already computed price/velocity/duration scores for each path by comparing predicted vs actual values.

Step U2: Override Mechanical Scores

Override scores where LLM judgment differs from mechanical calculation. The mechanical score may miss nuances — for example, a price deviation that actually confirms the path rather than diverging from it (a spring that goes deeper than predicted is still a spring, and may indicate even stronger accumulation).

Provide reasoning for each override.

Step U3: Update Probabilities

Update probabilities based on how well price tracked the predicted sequence:

Step U4: Mark Consumed Primitives

Mark consumed primitives — liquidity pools that have been swept, retail traps that have been sprung. Update landscape status:

Step U5: Generate New Paths (if needed)

If price action suggests a manipulation pattern not in existing paths, generate new path(s). This is rare but important for capturing emerging patterns. Triggers for new path generation:

Method — Review

The review flow scores expired anti-kelly analyses against actual price outcomes, extracting product-specific lessons for future analyses.

Step R1: Score Path Outcomes

For each predicted manipulation path, compare against actual H4 price data:

Override mechanical scores where LLM context adds value — e.g., a path that "missed" mechanically but captured the correct market dynamic is still a partial success.

Step R2: Assess Primitive Consumption

For each predicted manipulation primitive:

Step R3: Evaluate Trap Effectiveness

For each predicted retail trap:

Step R4: Extract Product-Specific Insights

Synthesize path, primitive, and trap outcomes into recurring patterns:

Step R5: Update Knowledge Base

Generate structured knowledge updates for merging into the knowledge file:

Method — Correlated Analysis

When historical knowledge exists (from prior reviews), the generate flow includes correlated analysis that adjusts standard output based on accumulated lessons.

Step C1: Historical Pattern Matching

Compare the current landscape (targets, pools, traps, market structure) against the manipulation fingerprint from the knowledge file:

Step C2: Probability Calibration

Adjust path probabilities based on historical hit rates:

[Truncated — full method has 244 lines]

Opportunity Analysis


Purpose

Identify trading opportunities by finding price points at the extremes of the probability distribution — prices unlikely to sustain — using the fundamental range as bounds, technical dynamics for precision, and regime nature for distribution shape. Outputs actionable entry/exit zones with JS-expressible trigger conditions for automated monitoring.

Input Spec

Required Intermediates

From `analyses/{asset}/{date}/`:

FileKey Data Used
`fundamental/result.md`Scenarios with probabilities + price ranges
`fundamental/influence-weights.md`Factor weights for conviction assessment
`technical/result.md`Price targets, bias, key levels
`technical/velocity.md`Current velocity regime, momentum
`technical/patterns.md`S/R levels, active patterns, pattern targets
`technical/participants.md`Dominant participants, positioning

Optional Intermediates

| File | Key Data Used |
| --- | --- |
| `regime/result.md` | Active regime, distribution shape, price target |
| `regime/paths.md` | Active price paths with probabilities |
| `regime/alignment.md` | Path tracking status |

Current Price

Fetch from Oanda API:

```
GET https://api-fxpractice.oanda.com/v3/instruments/{instrument}/candles?granularity=M1&count=1&price=M
Authorization: Bearer {OANDA_API_KEY}
```
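A sketch of that request in code, reading the key from an environment variable rather than hardcoding it. The `currentPrice`/`midClose` names and the use of Node's global `fetch` are assumptions; the endpoint and response shape follow Oanda's v3 candles API:

```javascript
// Sketch: fetch the latest M1 mid price from the Oanda candles
// endpoint above. Key comes from the environment, never from source.
async function currentPrice(instrument) {
  const url =
    `https://api-fxpractice.oanda.com/v3/instruments/${instrument}/candles` +
    `?granularity=M1&count=1&price=M`;
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${process.env.OANDA_API_KEY}` },
  });
  if (!res.ok) throw new Error(`Oanda request failed: ${res.status}`);
  return midClose(await res.json());
}

// Extract the mid close of the latest candle from the response body.
function midClose(body) {
  return Number(body.candles[0].mid.c);
}
```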

Asset Context

Read `assets/{asset}/SKILL.md` for instrument code, currency, unit.

Method

Step 1: Range Bounds → `range-bounds.md`

Extract and probability-weight the fundamental price range.

1. Read `fundamental/result.md` — extract the scenarios table
2. For each scenario: extract probability, price_low, price_high, midpoint
3. Calculate:

4. Divide the range into probability zones:

5. Assess current price position: which zone is it in?
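The formulas behind step 3's "Calculate:" are elided in the spec, so the following is one plausible reading, offered as a sketch: the fair value center as the probability-weighted scenario midpoint, and the tradeable range as the union of scenario ranges. Field names are assumptions:

```javascript
// Sketch: probability-weighted range bounds from the scenarios table.
// The exact formulas are elided in the spec; this weighted-midpoint
// reading is an assumption.
function rangeBounds(scenarios) {
  // scenarios: [{ probability (0-1), priceLow, priceHigh }]
  const mid = (s) => (s.priceLow + s.priceHigh) / 2;
  const center = scenarios.reduce((sum, s) => sum + s.probability * mid(s), 0);
  return {
    low: Math.min(...scenarios.map((s) => s.priceLow)),
    high: Math.max(...scenarios.map((s) => s.priceHigh)),
    fairValueCenter: center,
  };
}
```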

Output format — write to `trades/{asset-slug}/opportunities/OPP-{timestamp}/range-bounds.md`:

```markdown

Range Bounds

| Field | Value |
| --- | --- |
| Tradeable Range Low | ${low} |
| Tradeable Range High | ${high} |
| Fair Value Center | ${center} |
| Tail Low | ${tail_low} |
| Tail High | ${tail_high} |
| Current Price | ${current} |
| Current Zone | {core/extended-low/extended-high/tail-low/tail-high} |

Probability Zones

| zone | low | high | probability | description |
| --- | --- | --- | --- | --- |
| tail-low | ${tail_low} | ${ext_low_boundary} | {prob}% | Extreme undervaluation — high-conviction entry |
| extended-low | ${ext_low_boundary} | ${core_low} | {prob}% | Below fair value — moderate entry |
| core | ${core_low} | ${core_high} | {prob}% | Fair value range — no edge |
| extended-high | ${core_high} | ${ext_high_boundary} | {prob}% | Above fair value — moderate exit |
| tail-high | ${ext_high_boundary} | ${tail_high} | {prob}% | Extreme overvaluation — high-conviction exit |

Scenario Mapping

| scenario | probability | low | high | midpoint |
| --- | --- | --- | --- | --- |
| {scenario name} | {prob}% | ${low} | ${high} | ${mid} |

```

Step 2: Price Dynamics → `price-dynamics.md`

Map technical structure within the fundamental range.

1. Read `technical/patterns.md` — extract S/R levels, active patterns, targets
2. Read `technical/velocity.md` — current velocity regime, momentum direction
3. Read `technical/participants.md` — dominant participant, phase (accumulation/markup/distribution/markdown)
4. Read `technical/result.md` — technical price targets
5. Map S/R levels onto the probability zones from Step 1:

6. Identify overshoot zones:

7. Assess velocity context:

Output format — write to `trades/{asset-slug}/opportunities/OPP-{timestamp}/price-dynamics.md`:

```markdown

Price Dynamics

| Field | Value |
| --- | --- |
| Velocity Regime | {trending/ranging/transitioning} |
| Momentum | {direction and strength} |
| Dominant Participant | {cohort} |
| Participant Phase | {accumulation/markup/distribution/markdown} |

Key Levels Within Range

| level | price | type | zone_alignment | strength | overshoot_potential |
| --- | --- | --- | --- | --- | --- |
| {name} | ${price} | {support/resistance} | {which probability zone} | {strong/moderate/weak} | {high/medium/low} |

Overshoot Zones

| zone | entry_price | revert_target | expected_duration | probability | rationale |
| --- | --- | --- | --- | --- | --- |
| {name} | ${price} | ${revert_to} | {timeframe} | {prob}% | {why this overshoot is likely to revert} |

Participant Context

{2-3 sentences on how dominant participant positioning affects entry/exit timing}
```

Step 3: Regime Adjustment → `regime-adjustment.md`

Adjust the probability distribution based on regime behavior.

1. Check if `regime/result.md` exists
2. If regime exists:

3. If no regime analysis exists:

4. Adjust time windows:
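One way to implement a distribution adjustment such as `skew-bullish` is to move probability mass from one tail to the other while keeping the total at 100%. A sketch — the 5-point transfer size and the helper name are our assumptions, not values prescribed by the method:

```javascript
// Sketch: shift probability mass between tail zones for a regime skew.
function skewZones(zones, direction, shift = 5) {
  const adjusted = zones.map((z) => ({ ...z })); // leave originals untouched
  const from = direction === "skew-bullish" ? "tail-low" : "tail-high";
  const to = direction === "skew-bullish" ? "tail-high" : "tail-low";
  const src = adjusted.find((z) => z.zone === from);
  const dst = adjusted.find((z) => z.zone === to);
  const moved = Math.min(shift, src.prob); // never push a probability negative
  src.prob -= moved;
  dst.prob += moved;
  return adjusted;
}

const base = [
  { zone: "tail-low", prob: 5 },
  { zone: "extended-low", prob: 20 },
  { zone: "core", prob: 50 },
  { zone: "extended-high", prob: 20 },
  { zone: "tail-high", prob: 5 },
];
const skewed = skewZones(base, "skew-bullish");
console.log(skewed.map((z) => `${z.zone}: ${z.prob}%`).join(", "));
```

The adjusted values feed the `adjusted_prob` column of the Adjusted Zones table.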

Output format — write to `trades/{asset-slug}/opportunities/OPP-{timestamp}/regime-adjustment.md`:

```markdown

Regime Adjustment

| Field | Value |
|---|---|
| Active Regime | {regime name or "None"} |
| Regime Confidence | {high/medium/low or "N/A"} |
| Distribution Adjustment | {skew-bullish/skew-bearish/tighten/widen/none} |
| Time Window Adjustment | {extend/compress/none} |

Adjusted Zones

| zone | original_low | original_high | adjusted_low | adjusted_high | adjusted_prob | adjustment_rationale |
|---|---|---|---|---|---|---|
| tail-low | ${orig} | ${orig} | ${adj} | ${adj} | {prob}% | {why adjusted} |
| extended-low | ... | ... | ... | ... | ... | ... |
| core | ... | ... | ... | ... | ... | ... |
| extended-high | ... | ... | ... | ... | ... | ... |
| tail-high | ... | ... | ... | ... | ... | ... |

Regime Path Integration

| path | probability | target | status | implication_for_entry |
|---|---|---|---|---|
| {PATH ID} | {prob}% | ${target} | {tracking/diverging} | {how this path affects entry/exit zones} |

Time Windows

| action_type | base_window | adjusted_window | rationale |
|---|---|---|---|
| Entry hold | {base} | {adjusted} | {regime effect on hold time} |
| Exit target | {base} | {adjusted} | {regime effect on target time} |
| Stop review | {base} | {adjusted} | {regime effect on stop monitoring} |

```

[Truncated — full method has 744 lines]

Point In Time Analysis

Point-in-Time Analysis

Purpose

Produce timestamped PAT/PF/PD snapshots that decompose the current price into weighted factor contributions, generate multi-track forecasts, and (if a prior snapshot exists) explain price changes between snapshots. Reads existing fundamental + technical + regime intermediates — never re-runs them.

Input Spec

Required Intermediates

From `analyses/{asset}/{date}/`:

| File | Key Data Used |
|---|---|
| `fundamental/influence-weights.md` | Factor names + weight_pct (sums to 100%) |
| `fundamental/result.md` | Scenarios with probabilities + price targets |
| `technical/patterns.md` | Key levels + pattern targets + reliability |
| `technical/participants.md` | Cohort positions (institutional/commercial/speculative/retail) |
| `technical/result.md` | Price targets with timeframes |
| `regime/result.md` | Active regime, best path, price target, confidence (optional — used if regime analysis exists) |

Asset Context

Read `assets/{asset}/SKILL.md` if it exists, to obtain:

Prior Snapshots

Check `analyses/{asset}/{date}/snapshots/` for existing PAT files to:

Method

Step 1: Fetch Current Price

Call the Oanda API for the asset's instrument:

```
GET https://api-fxpractice.oanda.com/v3/instruments/{instrument}/candles?granularity=M1&count=1&price=M
Authorization: Bearer {OANDA_API_TOKEN}
```

Substitute `{OANDA_API_TOKEN}` with the account's practice API token; never embed a live token in documentation.

Extract the latest mid close price. Record the timestamp in ISO 8601 format (e.g., `2026-03-08T14:00:00Z`).
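Extracting the mid close from the candles response is a small parsing step. A sketch against the Oanda v20 candle shape — the sample payload is trimmed to the fields used, and the instrument shown is only an example:

```javascript
// Sketch: pull the latest mid close price and ISO 8601 timestamp out of an
// Oanda candles response (granularity=M1, count=1, price=M).
function latestMidClose(candlesResponse) {
  const candle = candlesResponse.candles[candlesResponse.candles.length - 1];
  return {
    price: parseFloat(candle.mid.c), // mid close as a number
    timestamp: candle.time,          // RFC 3339 / ISO 8601
  };
}

// Example payload, trimmed to the fields we use.
const sample = {
  instrument: "BCO_USD",
  candles: [
    {
      time: "2026-03-08T14:00:00.000000000Z",
      mid: { o: "70.10", h: "70.25", l: "70.05", c: "70.21" },
      complete: true,
    },
  ],
};
console.log(latestMidClose(sample));
```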

Step 2: Determine IDs

Scan `analyses/{asset}/{date}/snapshots/` for existing files:

For PF IDs, check both the snapshots directory AND the consolidated file `analyses/{asset-slug}-{date}.md` (if it exists) for the highest existing PF ID. The consolidated file may contain analysis-generated PFs (PF001, PF002) from the web-consolidation skill. The next snapshot PF must use the highest existing PF number + 1 to avoid ID collisions.

If no snapshots directory exists and no consolidated file exists, start at PAT001, PF001, PD001.
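The "highest existing ID + 1" rule can be sketched as a regex scan over whatever text sources hold IDs (snapshot filenames, the consolidated file). Passing in raw strings rather than reading the filesystem keeps the example small; the helper name is ours:

```javascript
// Sketch: derive the next snapshot ID (e.g. PF005) from all IDs already in
// use, so snapshot PFs never collide with analysis-generated PFs.
function nextId(prefix, ...texts) {
  let max = 0;
  const re = new RegExp(`${prefix}(\\d{3})`, "g");
  for (const text of texts) {
    for (const match of text.matchAll(re)) {
      max = Math.max(max, parseInt(match[1], 10));
    }
  }
  return `${prefix}${String(max + 1).padStart(3, "0")}`;
}

const snapshotText = "PF-20260307.md contains PF001\nPF-20260308.md contains PF003";
const consolidatedText = "analysis-generated forecasts: PF002, PF004";
console.log(nextId("PF", snapshotText, consolidatedText)); // PF005
```

With no prior sources, `nextId("PAT", "")` falls back to `PAT001`, matching the rule above.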

Step 3: Generate PAT (Price Attribution)

Map each factor from `fundamental/influence-weights.md` to a price component:

1. Read influence-weights.md — each row has: factor, weight_pct, rationale
2. For each factor:

3. Sum all components
4. Residual = current_price - sum_of_components (should be small; format as `$X.XX/unit`)
5. Validation note = `Components sum to $X.XX vs actual $Y.YY`

Assign each component an ID: PA001, PA002, etc.
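The residual and validation note follow mechanically once components are priced (how each factor is priced is defined in the elided per-factor step). A sketch with illustrative component values:

```javascript
// Sketch: compute the Residual and Validation lines from priced components.
function patSummary(currentPrice, components) {
  const sum = +components.reduce((s, c) => s + c.value, 0).toFixed(2);
  const residual = +(currentPrice - sum).toFixed(2);
  return {
    residual: `$${residual.toFixed(2)}/unit`,
    validation: `Components sum to $${sum.toFixed(2)} vs actual $${currentPrice.toFixed(2)}`,
  };
}

// Example component values (illustrative, not from a real attribution).
const components = [
  { id: "PA001", component: "OPEC supply constraint", value: 28.5 },
  { id: "PA002", component: "Demand growth", value: 41.2 },
];
console.log(patSummary(70.21, components));
```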

Output format — write to `analyses/{asset}/{date}/snapshots/PAT-{timestamp}.md`:

```markdown

### PAT{NNN}

| Field | Value |
|---|---|
| Instrument | {instrument} |
| Price | ${price}/{unit} |
| Currency | {currency} |
| Unit | {unit} |
| Timestamp | {ISO 8601 timestamp} |
| Trigger | Scheduled point-in-time snapshot |
| Trigger Ref | - |

| id | component | category | value | percent | basis | trend | confidence | references |
|---|---|---|---|---|---|---|---|---|
| PA001 | {factor name} | {category} | {+/-$X.XX} | {weight}% | {rationale} | {trend} | {confidence} | - |
| PA002 | ... | ... | ... | ... | ... | ... | ... | - |

Residual: {$X.XX/unit}
Validation: Components sum to ${sum} vs actual ${price}
```

IMPORTANT: The file contains a single `### PAT{NNN}` section. The key-value table comes first, then the components table, then the Residual and Validation lines. This format matches what the web parser expects.

Step 4: Generate PF (Price Forecast)

Build a multi-track forecast combining fundamental, participant, and pattern tracks:

Fundamental track:

Participant track:

Pattern track:

Regime track (if regime analysis exists):

Track weights:

If regime analysis exists (4 tracks):

If no regime analysis (3 tracks, original weights):

Composite calculation:
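The composite details above are elided, but given the track weights described later in this step (40/30/30 for three tracks, 35/25/20/20 when a regime track exists), a composite price could be computed as a weighted average. A sketch with illustrative track prices:

```javascript
// Sketch: weighted composite across forecast tracks, switching weight sets
// on the presence of a regime track.
function composite(tracks) {
  const hasRegime = tracks.some((t) => t.track === "regime");
  const weights = hasRegime
    ? { fundamental: 35, participant: 25, pattern: 20, regime: 20 }
    : { fundamental: 40, participant: 30, pattern: 30 };
  let price = 0;
  for (const t of tracks) {
    price += (weights[t.track] / 100) * t.predictedPrice;
  }
  return +price.toFixed(2);
}

const threeTrack = [
  { track: "fundamental", predictedPrice: 72.0 },
  { track: "participant", predictedPrice: 71.0 },
  { track: "pattern", predictedPrice: 70.0 },
];
console.log(composite(threeTrack)); // 71.1
```

The same weighting applied to `predictedLow`/`predictedHigh` would yield `compositeLow`/`compositeHigh`.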

Output format — write to `analyses/{asset}/{date}/snapshots/PF-{timestamp}.md`:

```markdown

Temporal Validity

| Field | Value |
|---|---|
| Created | {ISO 8601 datetime from snapshot timestamp} |
| Last Validated | {ISO 8601 datetime — same as Created initially} |
| Valid From | {forecastDate from PF table} |
| Valid To | {targetDate from PF table} |
| Trading Days | {business days between Valid From and Valid To} |
| Calendar Days | {total days between Valid From and Valid To} |
| Data Window | {forecastDate} to {forecastDate} |

## Price Forecasts

| id | instrument | forecastDate | targetDate | targetTimeframe | compositePrice | compositeLow | compositeHigh | compositeConfidence | status | actualClose | error | errorPercent | references |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| PF{NNN} | {instrument} | {date} | {target_date} | 2-4 trading days | {composite} | {low} | {high} | {confidence} | active | - | - | - | PAT{NNN} |

### PF{NNN}

| track | method | predictedPrice | predictedLow | predictedHigh | confidence | weight | reasoning | references |
|---|---|---|---|---|---|---|---|---|
| fundamental | scenario-weighted | {price} | {low} | {high} | {conf} | 40 | {reasoning} | - |
| participant | positioning-flow | {price} | {low} | {high} | {conf} | 30 | {reasoning} | - |
| pattern | technical-level | {price} | {low} | {high} | {conf} | 30 | {reasoning} | - |
| regime | historical-analog | {price} | {low} | {high} | {conf} | 20 | {reasoning} | - |

```

Note: The regime track row is only included when regime analysis exists. When present, use the 4-track weights (35/25/20/20). When absent, use the 3-track weights (40/30/30) and omit the regime row.

IMPORTANT: The PF file contains both the summary table under `## Price Forecasts` and the track detail under `### PF{NNN}`. The `references` column in the summary table links to the associated PAT snapshot ID.

Step 5: Generate PD (Price Delta) — Only If Prior PAT Exists

If a prior PAT snapshot exists in `analyses/{asset}/{date}/snapshots/`:

1. Read the most recent prior PAT file
2. For each component in the current PAT, find the matching component in the prior PAT (by component name)
3. Calculate delta = current_value - prior_value for each component
4. Determine status: `increased`, `decreased`, `unchanged`, `new` (no prior match), `removed` (in prior but not current)
5. Explained = sum of all component deltas
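The match-and-delta steps can be sketched as follows. Component names and values are illustrative, and treating a `new` row's delta as its full value and a `removed` row's delta as the negated prior value is our assumption, since the method only defines delta for matched pairs:

```javascript
// Sketch: component-by-component delta between two PAT snapshots,
// matched by component name.
function priceDelta(prior, current) {
  const rows = [];
  const priorByName = new Map(prior.map((c) => [c.component, c]));
  for (const c of current) {
    const p = priorByName.get(c.component);
    if (!p) {
      rows.push({ component: c.component, delta: c.value, status: "new" });
    } else {
      const delta = +(c.value - p.value).toFixed(2);
      rows.push({
        component: c.component,
        delta,
        status: delta > 0 ? "increased" : delta < 0 ? "decreased" : "unchanged",
      });
      priorByName.delete(c.component);
    }
  }
  for (const p of priorByName.values()) {
    rows.push({ component: p.component, delta: -p.value, status: "removed" });
  }
  const explained = +rows.reduce((s, r) => s + r.delta, 0).toFixed(2);
  return { rows, explained };
}

const prior = [
  { component: "OPEC supply constraint", value: 28.5 },
  { component: "Demand growth", value: 41.2 },
];
const current = [
  { component: "OPEC supply constraint", value: 29.0 },
  { component: "Risk premium", value: 2.0 },
];
console.log(priceDelta(prior, current));
```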

[Truncated — full method has 244 lines]

Cross Product Analysis

Cross-Product Analysis

Purpose

Discover inter-product relationships and produce spread/pair trading opportunities by analyzing relative price dynamics between products. Unlike directional analysis tracks, this track reasons about relative price changes — predicting spread widening/narrowing rather than up/down.

Pipeline Position

Input Spec

| Input | Source | Purpose |
|---|---|---|
| Fundamental results | `analyses/{asset}/{date}/fundamental/result.md` | Directional bias, key drivers, supply/demand factors |
| Technical results | `analyses/{asset}/{date}/technical/result.md` | Price velocity, participant behavior, patterns |
| Regime results | `analyses/{asset}/{date}/regime/result.md` | Product-specific behavior context |
| Raw events | `events/{type}/{date}/*.md` | Cross-product co-occurrence scanning |
| Asset skills | `assets/{asset}/SKILL.md` | Product names, aliases, tickers for keyword matching |
| Knowledge library | `assets/cross-product/relationships.md` | Persistent confirmed relationships |
| Discovered instruments | `assets/cross-product/instruments.md` | Oanda tradability for untracked products |
| Price data | Oanda REST API | Spread history, correlation matrices |

Method

Step 1: Product Graph Construction → `graph/nodes.md`

1. Run `node scripts/cross-product.js prepare-graph` — parses all per-asset analyses, extracts prices/velocity/bias, scans events for product keyword co-occurrences, computes price correlation matrix
2. Script outputs `xp-graph-state.json` + `xp-graph-llm-input.json` to `analyses/.tmp/`
3. LLM reads `xp-graph-llm-input.json` and produces edge judgments (next step)

Step 2: Relationship Discovery → `graph/edges.md`

Script/LLM boundary:

Edge types: `substitution`, `supply_displacement`, `macro_cascade`, `input_cost`, `correlation_break`, `flow_rotation`

Prune edges with importance < 30.

1. LLM fills edge judgment JSON with: product_a, product_b, relationship_type, direction, importance (0-100), evidence, timeframe_days
2. Run `node scripts/cross-product.js assemble-graph` — graph traversal, chain tracing, pruning
3. Script outputs `nodes.md`, `edges.md`, `chains.md` to `analyses/_cross-product/{date}/graph/`

Step 3: Chain Tracing → `graph/chains.md`

Handled by `assemble-graph` script:

Step 4: Event Generation → `events/cross-product/{date}/`

For each significant relationship or chain:

1. LLM generates structured cross-product events using schema from `events/cross-product/_schema.md`
2. Columns: timestamp, source_event, products, relationship_type, direction, importance, timeframe, evidence
3. LLM also identifies information gaps → `source-recommendations.md`

Step 5: Spread Analysis → `spreads/{pair}/`

1. Run `node scripts/cross-product.js prepare-spreads` — fetches spread price history from Oanda, computes stats (mean, stddev, z-score, percentiles), lookback scaled by relationship type
2. Script outputs `xp-spread-state.json` + `xp-spread-llm-input.json` to `analyses/.tmp/`
3. LLM fills spread judgment JSON with: win_probability, target_spread, stop_spread, entry prices, leg directions, confidence, narrative, invalidation
4. Run `node scripts/cross-product.js assemble-spreads` — Kelly calculation, markdown assembly
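The statistics `prepare-spreads` computes are standard; its actual internals are not shown in this document, but the z-score math it describes looks like this (sample values are illustrative):

```javascript
// Sketch: spread statistics of the kind prepare-spreads reports — mean,
// population standard deviation, and z-score of the current spread.
function spreadStats(history, currentSpread) {
  const mean = history.reduce((s, x) => s + x, 0) / history.length;
  const variance =
    history.reduce((s, x) => s + (x - mean) ** 2, 0) / history.length;
  const stddev = Math.sqrt(variance);
  return {
    mean: +mean.toFixed(2),
    stddev: +stddev.toFixed(2),
    zScore: +((currentSpread - mean) / stddev).toFixed(2),
  };
}

const history = [4.0, 4.2, 3.8, 4.1, 3.9]; // e.g. recent daily spread closes
console.log(spreadStats(history, 4.3));
```

A large absolute z-score flags a spread stretched relative to its lookback, which is what the LLM's win_probability and target_spread judgments then interpret.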

Per-pair output:

Step 6: Synthesis → `result.md`

Handled by `assemble-spreads` script:

Output Files

| File | Location | Temporal Validity |
|---|---|---|
| `nodes.md` | `_cross-product/{date}/graph/` | Snapshot day (refreshed daily) |
| `edges.md` | `_cross-product/{date}/graph/` | Until driving event expires |
| `chains.md` | `_cross-product/{date}/graph/` | Bounded by weakest link |
| `relationship.md` | `_cross-product/{date}/spreads/{pair}/` | Adaptive per relationship type |
| `spread-analysis.md` | `_cross-product/{date}/spreads/{pair}/` | Matches relationship timeframe |
| `result.md` | `_cross-product/{date}/` | Union of all spread windows |
| `source-recommendations.md` | `_cross-product/{date}/` | Informational, no expiry |
| Cross-product events | `events/cross-product/{date}/` | Per event type defaults |
| Consolidated | `analyses/cross-product-{date}.md` | Matches result.md |

Cross-references

Event Rollup

Event Roll-up Analysis

Purpose

Roll-up produces higher-level summary events from lower-level events. Each roll-up level is itself an event, stored in the appropriate timeframe file. The LLM decides what matters at each level.

Input Spec

Method

Step 1: Read the schema

Read `events/{type}/_schema.md`. Check the `Roll-up` section:

Step 2: Template roll-up (for structured types)

For template-based types (price-ohlcv, economic-release, inventory, positioning):

1. Read all lower-level event files for the roll-up period
2. Apply the aggregation rule defined in the schema:

3. Write the summary using the schema's summary template
4. Save to the appropriate timeframe file
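As one concrete example of a template aggregation rule, a daily-to-weekly roll-up for the price-ohlcv type might look like this. The canonical rule for each type lives in its `_schema.md`; this sketch shows the shape, not the authoritative rules:

```javascript
// Sketch: aggregate daily OHLCV rows into one higher-timeframe row.
function rollupOhlcv(dailyRows) {
  return {
    open: dailyRows[0].open,                          // first period's open
    high: Math.max(...dailyRows.map((r) => r.high)),  // period maximum
    low: Math.min(...dailyRows.map((r) => r.low)),    // period minimum
    close: dailyRows[dailyRows.length - 1].close,     // last period's close
    volume: dailyRows.reduce((s, r) => s + r.volume, 0), // summed volume
  };
}

const week = [
  { open: 70.0, high: 71.2, low: 69.5, close: 70.8, volume: 1200 },
  { open: 70.8, high: 72.0, low: 70.1, close: 71.5, volume: 1500 },
];
console.log(rollupOhlcv(week));
```

The resulting row is what gets written into the higher timeframe file per step 4.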

Step 3: LLM roll-up (for complex types)

For LLM-based types (geopolitical, policy):

1. Read all lower-level event files for the roll-up period
2. Follow the roll-up instruction in the schema
3. Produce a structured summary that:

4. Save to the appropriate timeframe file

Step 4: Store the roll-up

Save the summary event in the correct location:

Output Schema

The output is a new event file following the same type schema, but at a higher timeframe. For template types, it's a table row. For LLM types, it's a summary section plus a table row.

LLM Roll-up Output Format

```markdown

# {Type} {Period} Summary — {date range}

## Summary

{LLM-generated narrative summary}

## Events

| {same columns as source schema} |
| ... rolled-up/summarized rows ... |

```

Intermediate Files

None — roll-up is a single-step transformation.

Cross-references

None — roll-up operates within a single event type.