Analyze the fundamental supply-demand picture for an asset by consuming event data and producing a structured factor decomposition with influence weights and directional synthesis.
| Event Type | Filter | Purpose |
|---|---|---|
| `geopolitical` | Asset-relevant events | Supply/demand disruptions, risk premium |
| `policy` | Asset-relevant decisions | Production quotas, monetary policy, sanctions |
| `economic-release` | Macro indicators | Demand signals (GDP, PMI, employment) |
| `inventory` | Asset inventories | Physical supply-demand balance |
| `positioning` | Asset positioning | Sentiment, crowding risk |
Time range: Current date + lookback as needed for context (typically 1-4 weeks for active situations).
Asset filter: Load the asset skill from `assets/{asset}/SKILL.md` to understand which events are relevant.
1. Read events from `geopolitical/`, `policy/`, `inventory/` that affect supply
2. For each, assess: what is the supply impact, how large, how long
3. Classify as: disruption, constraint, expansion, normalization
4. Write to `supply-factors.md` using the intermediate schema below
1. Read events from `economic-release/`, `policy/`, `geopolitical/` that affect demand
2. For each, assess: what is the demand impact, how large, how long
3. Classify as: growth, contraction, substitution, structural-shift
4. Write to `demand-factors.md` using the intermediate schema below
1. For each high-significance supply or demand factor, decompose into verifiable sub-factors
2. Each sub-factor is a specific, checkable claim with a credibility type: `action` > `capability` > `deployment` > `constraint` > `precedent` > `intention`
3. Assess whether the parent factor's rating is supported by its sub-factors
4. Write to `sub-factors.md` using the intermediate schema below
1. Assign an influence weight (%) to each factor based on its current significance to price
2. Weights must sum to 100%
3. For each weight >= 10%, provide explicit rationale
4. Write to `influence-weights.md` using the intermediate schema below
1. Use only the supply-factors, demand-factors, sub-factors, and influence-weights produced above
2. Form a directional view: overall bias (bullish/bearish/neutral), confidence, time horizon
3. Define 2-3 scenarios (base/bull/bear) with probabilities summing to ~100%
4. Identify key risks that would change the view
5. List monitoring priorities
6. Write to `result.md` using the output schema below
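The weight constraints above can be sketched as a minimal pre-write check. The factor objects and field names (`weightPct`, `rationale`, `factor`) are illustrative assumptions, not the schema's actual column names:

```javascript
// Sketch: validate influence weights before writing influence-weights.md.
// Field names here are hypothetical placeholders.
function validateWeights(weights, tolerance = 0.01) {
  // Weights must sum to 100%.
  const total = weights.reduce((sum, w) => sum + w.weightPct, 0);
  if (Math.abs(total - 100) > tolerance) {
    throw new Error(`Influence weights sum to ${total}%, expected 100%`);
  }
  // Every weight >= 10% must carry an explicit rationale.
  const missing = weights.filter(w => w.weightPct >= 10 && !w.rationale);
  if (missing.length > 0) {
    throw new Error(`Missing rationale for: ${missing.map(w => w.factor).join(', ')}`);
  }
  return true;
}
```

The same pattern applies to scenario probabilities, with a looser tolerance since they only need to sum to ~100%.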
# Supply Factors — {asset} — {date}
Columns:
# Demand Factors — {asset} — {date}
Columns:
# Sub-Factors — {asset} — {date}
Columns:
# Influence Weights — {asset} — {date}
Columns:
# Fundamental Analysis Result — {asset} — {date}
## Temporal Validity
| Field | Value |
|---|---|
| Created | {ISO 8601 datetime, e.g., 2026-03-11T14:30:00Z} |
| Last Validated | {ISO 8601 datetime — same as Created initially, updated on re-runs} |
| Valid From | {YYYY-MM-DD — analysis date} |
| Valid To | {YYYY-MM-DD — Valid From + shorter end of Time Horizon from Overall View} |
| Trading Days | {business days between Valid From and Valid To} |
| Calendar Days | {total days between Valid From and Valid To} |
| Data Window | {earliest event date consumed} to {latest event date consumed} |
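A minimal sketch of the Trading Days / Calendar Days calculation from the table above, assuming weekends are the only non-trading days (holiday calendars are ignored):

```javascript
// Sketch: days between Valid From (inclusive) and Valid To (exclusive).
// Trading days exclude Saturdays and Sundays; holidays are not handled.
function validityDays(validFrom, validTo) {
  const from = new Date(validFrom + 'T00:00:00Z');
  const to = new Date(validTo + 'T00:00:00Z');
  const msPerDay = 86400000;
  const calendarDays = Math.round((to - from) / msPerDay);
  let tradingDays = 0;
  for (let d = new Date(from); d < to; d.setUTCDate(d.getUTCDate() + 1)) {
    const day = d.getUTCDay();
    if (day !== 0 && day !== 6) tradingDays++; // skip Sunday (0) and Saturday (6)
  }
  return { tradingDays, calendarDays };
}
```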
## Overall View
| Field | Value |
|---|---|
| Bias | bullish / bearish / neutral |
| Confidence | high / medium / low |
| Time Horizon | period this view applies to |
| Key Driver | most important factor |
| Key Risk | biggest threat to the view |
## Scenarios
## Risks to View
## Monitoring Priorities
## Change Log
Analyze price action structure to identify who is trading, how price moves, and what patterns emerge. Operates purely on price-ohlcv event data.
| Event Type | Filter | Purpose |
|---|---|---|
| `price-ohlcv` | Asset instrument, multiple timeframes | Raw price data |
Instruments: Use Oanda format (e.g., `BCO_USD`, `XAU_USD`). Timeframes: H1/H4 for intraday structure, D for swing, W for structural context. Time range: Typically 20-60 candles at each timeframe.
1. Read price-ohlcv candles at H1 or H4 timeframe
2. Calculate velocity of each move (price change / time)
3. Classify moves: fast-up, slow-up, fast-down, slow-down
4. Calculate velocity ratio (avg upward velocity / avg downward velocity)
5. Determine dominant participant type from velocity signature:
6. Write to `velocity.md` using intermediate schema below
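Steps 2-4 can be sketched as follows. The candle shape (`{close, hours}`) is a simplified assumption; real `price-ohlcv` events carry full OHLCV fields:

```javascript
// Sketch: per-move velocity ($/hr) and the velocity ratio.
// Zero-change moves are counted as upward here, an arbitrary assumption.
function velocityMetrics(candles) {
  const up = [], down = [];
  for (let i = 1; i < candles.length; i++) {
    const delta = candles[i].close - candles[i - 1].close;
    const v = Math.abs(delta) / candles[i].hours; // velocity in $/hr
    (delta >= 0 ? up : down).push(v);
  }
  const avg = xs => xs.length ? xs.reduce((a, b) => a + b, 0) / xs.length : 0;
  const avgUp = avg(up), avgDown = avg(down);
  return {
    avgUpwardVelocity: avgUp,
    avgDownwardVelocity: avgDown,
    velocityRatio: avgDown > 0 ? avgUp / avgDown : Infinity,
  };
}
```

A ratio well above 1 indicates faster upward than downward moves, which feeds the participant-type inference in Step 5.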
1. Using velocity analysis from Step 1
2. Read positioning events from `events/positioning/` (if available, for cross-check only)
3. Identify participant cohorts: institutional, commercial hedger, speculative, retail
4. For each cohort: inferred position (long/short/flat), confidence, evidence
5. Write to `participants.md` using intermediate schema below
1. Read daily and weekly price-ohlcv candles
2. Identify key levels: support, resistance, breakout/breakdown levels
3. Identify price patterns: trend, range, consolidation, reversal
4. Assess pattern reliability and expected resolution
5. Write to `patterns.md` using intermediate schema below
1. Combine velocity, participant, and pattern analysis
2. Form a structural view: who controls price, what pattern dominates, expected behavior
3. Define price targets from pattern analysis
4. Estimate convergence speed from velocity regime
5. Write to `result.md` using output schema below
# Velocity Analysis — {instrument} — {date}
## Parameters
| Field | Value |
|---|---|
| Instrument | {instrument} |
| Timeframe | {timeframe used} |
| Period | {date range} |
| Candles Analyzed | {count} |
## Velocity Metrics
| metric | value |
|---|---|
| avg_upward_velocity | $/hr or %/hr |
| avg_downward_velocity | $/hr or %/hr |
| velocity_ratio | decimal |
| fast_move_threshold | $/hr |
| slow_move_threshold | $/hr |
| fast_up_pct | % of up moves that are fast |
| slow_up_pct | % of up moves that are slow |
| fast_down_pct | % of down moves that are fast |
| slow_down_pct | % of down moves that are slow |
## Interpretation
{LLM narrative: what the velocity signature tells us about who is driving price}
# Participant Inference — {instrument} — {date}
Columns:
# Price Patterns — {instrument} — {date}
## Temporal Validity
| Field | Value |
|---|---|
| Created | {ISO 8601 datetime} |
| Last Validated | {ISO 8601 datetime — same as Created initially} |
| Valid From | {YYYY-MM-DD — analysis date} |
| Valid To | {YYYY-MM-DD — Valid From + shorter end of shortest active pattern timeframe} |
| Trading Days | {business days between Valid From and Valid To} |
| Calendar Days | {total days between Valid From and Valid To} |
| Data Window | {earliest candle date consumed} to {latest candle date consumed} |
## Key Levels
## Patterns
Columns (Key Levels):
Columns (Patterns):
# Technical Analysis Result — {instrument} — {date}
## Temporal Validity
| Field | Value |
|---|---|
| Created | {ISO 8601 datetime, e.g., 2026-03-11T15:00:00Z} |
| Last Validated | {ISO 8601 datetime — same as Created initially, updated on re-runs} |
| Valid From | {YYYY-MM-DD — analysis date} |
| Valid To | {YYYY-MM-DD — Valid From + shorter end of convergence/target timeframes} |
| Trading Days | {business days between Valid From and Valid To} |
| Calendar Days | {total days between Valid From and Valid To} |
| Data Window | {earliest candle date consumed} to {latest candle date consumed} |
## Structural View
| Field | Value |
|---|---|
| Dominant Participant | who controls price action |
| Price Regime | trending / ranging / volatile / compressing |
| Velocity Regime | fast-trending / slow-trending / mean-reverting / choppy |
| Bias | bullish / bearish / neutral |
| Confidence | high / medium / low |
## Price Targets
## Convergence Estimate
| Field | Value |
|---|---|
| Current Price | |
| Target | |
| Estimated Time | |
| Velocity Regime | |
| Participant Phase | |
## Change Log
Model product-specific, repeatable price behavior patterns (regimes) to generate forward-looking price paths and dynamically score them against actual price action. Unlike fundamental and technical analysis, which are domain-general reasoning processes, regime analysis leverages asset-specific historical behavior archetypes to produce a regime-based price target for PF integration.
From `analyses/{asset}/{date}/`:
| File | Key Data Used |
|---|---|
| `fundamental/result.md` | Active fundamental drivers, scenarios, overall bias |
| `technical/velocity.md` | Current velocity regime, momentum metrics |
| `technical/patterns.md` | Key levels, support/resistance, active patterns |
| `technical/result.md` | Structural view, price targets |
From `assets/{asset}/regimes/`:
Read ALL `.md` files in the directory. Each file describes a repeatable price behavior archetype with:
Read `assets/{asset}/SKILL.md` if it exists, to obtain:
Fetch current price and recent candles from the Oanda API:
When to use: Run this mode when `assets/{asset}/regimes/` does not exist or contains zero `.md` files. This generates initial regime archetypes from the asset's discovered market structure and fundamental context, enabling regime analysis to run on new assets.
| Source | File | Data Used |
|---|---|---|
| Market structure | `assets/{asset}/fingerprint.md` | Phase library — the discovered behavioral vocabulary (phase names, scales, descriptions) |
| Market structure | `assets/{asset}/transitions.md` | Transition matrix — how phases sequence, common multi-phase paths, anomaly detection |
| Fundamental | `analyses/{asset}/{date}/fundamental/result.md` | Active drivers, scenarios, key risks — what moves this asset |
| Asset context | `assets/{asset}/SKILL.md` | Instrument, asset class, key facts, structural characteristics |
1. Read all 4 input files. The fingerprint provides the phase vocabulary (e.g., `supply-shock-rally`, `correction`, `crisis-crash`). The transitions provide the sequencing rules (which multi-phase paths actually occur). The fundamental result provides the driver context. The asset SKILL.md provides the asset class and structural info.
2. Identify 4 distinct regime narratives by combining:
3. For each archetype, generate the full specification:
a. Signature table — Trigger (fundamental/structural event that activates this regime), Velocity (fast/slow, symmetric/asymmetric, using the fingerprint's scale metrics), Duration (mapped from fingerprint's `avg_duration_months`), Frequency (how often this pattern occurs, from fingerprint instance counts and domain knowledge)
b. Phases table — Each row describes a narrative stage of the regime: numbered phase name (descriptive, e.g., "1. Trigger Shock", "2. Panic Buying"), description (asset-specific context), duration (informed by fingerprint's `avg_duration_months` for corresponding behavioral phases), price_action (specific to this regime context), key_signal (what confirms transition to next phase). A regime typically has 3-6 phases. Use the fingerprint's scale metrics (price_pct_scale, duration_scale) to calibrate velocity and duration, but phase names are narrative — they describe the regime's story, not the fingerprint's behavioral categories.
c. Historical Examples table — Use your domain knowledge to provide 2-4 real historical instances with: date_range, trigger event, entry price, peak/trough, resolution, duration. These should be genuine historical episodes — use real dates and approximate price levels from your training data.
d. Resolution Patterns — 2-4 paragraphs describing how this regime typically ends. Include transition signals that indicate the regime is concluding. Reference the transition matrix probabilities where relevant.
e. Participant Behavior — 3-5 paragraphs covering how different market participants behave during this regime: institutional/commercial, speculative/managed money, retail, options market. This is asset-class-specific domain knowledge.
4. Name each archetype with a descriptive slug (e.g., `supply-shock-breakout`, `rate-driven-selloff`, `safe-haven-flight`). The name should clearly convey the regime narrative.
5. Ensure coverage: The 4 archetypes should collectively cover the major behavioral modes of the asset:
6. Create the directory and write files:
```
mkdir -p assets/{asset}/regimes/
```
Write each archetype to `assets/{asset}/regimes/{archetype-slug}.md`
Each file follows this exact structure (matching existing hand-authored archetypes):
```
| Field | Value |
|---|---|
| Trigger | {fundamental/structural event that activates this regime — be specific to the asset} |
| Velocity | {speed and asymmetry description, referencing fingerprint scale metrics} |
| Duration | {typical duration range, derived from fingerprint phase durations in the sequence} |
| Frequency | {how often, derived from fingerprint instance counts and domain knowledge} |
| phase | description | duration | price_action | key_signal |
|---|---|---|---|---|
| 1. {narrative phase name} | {asset-specific description} | {duration} | {specific price behavior} | {confirmation signal} |
| date_range | trigger | entry | peak/trough | resolution | duration |
|---|---|---|---|---|---|
| {real dates} | {what happened} | {price} | {price} | {how it ended} | {duration} |
{2-4 paragraphs on how this regime ends, with transition signals}
{3-5 paragraphs covering institutional, speculative, retail, and options market behavior}
```
After generation, proceed to the normal regime analysis method (Steps 1-5) using the newly created archetypes.
---
1. Fetch current price from Oanda API (M1 granularity, count=1)
2. Read recent price-ohlcv events from `events/price-ohlcv/{date}/` for H4 and D timeframes
3. Read `technical/velocity.md` for current velocity regime and momentum
4. Read `technical/patterns.md` for key levels (support, resistance) and active patterns
5. Read `fundamental/result.md` for active drivers, scenarios, and overall bias
6. Identify active triggers by mapping current conditions against archetype trigger criteria
7. Summarize the current market structure context
Output format — write to `analyses/{asset}/{date}/regime/context.md`:
```
| Field | Value |
|---|---|
| Instrument | {instrument} |
| Price | {current price} |
| Timestamp | {ISO 8601} |
| 5-Day Change | {percent change} |
| 20-Day Change | {percent change} |
| Field | Value |
|---|---|
| Current Regime | {from velocity.md — trending/ranging/transitioning} |
| Momentum | {direction and strength} |
| Volatility | {high/medium/low relative to recent history} |
| level_type | price | significance | distance_from_current |
|---|---|---|---|
| resistance | {price} | {high/medium/low} | {+X.X%} |
| support | {price} | {high/medium/low} | {-X.X%} |
| driver | direction | weight | status |
|---|---|---|---|
| {from result.md key drivers} | {bullish/bearish} | {weight%} | {active/fading/emerging} |
| trigger | present | evidence | relevant_archetypes |
|---|---|---|---|
| {trigger condition from archetypes} | {yes/no/partial} | {brief evidence} | {archetype names} |
```
1. Load all archetype files from `assets/{asset}/regimes/`
2. For each archetype, evaluate its trigger conditions against the context from Step 1
[Truncated — full method has 426 lines]
Classify which phase a product's market is in, using product-specific phases discovered from historical price data. Each product has a unique set of phases — their number, characteristics, and transition patterns emerge from the data, not from a generic template.
The skill has two modes:
1. Discovery mode — run once per product to discover phases and build the fingerprint
2. Classification mode — run each analysis cycle to identify the current phase
| Data Source | Filter | Purpose |
|---|---|---|
| `price-ohlcv` | Asset instrument, M/W/D/H4 timeframes | Historical price structure |
| `assets/{product}/fingerprint.md` | If exists | Previously discovered fingerprint (classification mode) |
Instruments: Use Oanda format (e.g., `BCO_USD`, `XAU_USD`). Discovery: Candles for the specified timeframe(s), then validate with adjacent timeframes. Classification: Recent 60 candles at daily + weekly timeframes.
| TF Code | Oanda Granularity | Typical Candles | Typical History |
|---|---|---|---|
| M | M | 271 | 23 years |
| W | W | 1177 | 23 years |
| D | D | 5000 | 19 years |
| H4 | H4 | 5000 | 3 years |
| H1 | H1 | 5000 | 8 months |
| 30m | M30 | 5000 | 10 weeks |
| 15m | M15 | 5000 | 5 weeks |
| 1m | M1 | 5000 | 1 week |
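The reference table above can be expressed as a lookup for the Step D1 fetch. The counts are copied from the table; the constant name itself is illustrative:

```javascript
// Sketch: TF code -> Oanda granularity + typical candle count,
// mirroring the Timeframe Reference table.
const TF_REFERENCE = {
  M:     { granularity: 'M',   count: 271 },
  W:     { granularity: 'W',   count: 1177 },
  D:     { granularity: 'D',   count: 5000 },
  H4:    { granularity: 'H4',  count: 5000 },
  H1:    { granularity: 'H1',  count: 5000 },
  '30m': { granularity: 'M30', count: 5000 },
  '15m': { granularity: 'M15', count: 5000 },
  '1m':  { granularity: 'M1',  count: 5000 },
};
```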
1. Check if `assets/{product}/fingerprint.md` exists
2. If NO → run Discovery Mode (Steps D1-D5)
3. If YES → run Classification Mode (Steps C1-C3)
---
Input: `{tf}` — the timeframe to discover phases for (e.g., M, W, D, H4, H1, 30m, 15m, 1m).
1. Look up `{tf}` in the Timeframe Reference table to get the Oanda granularity and typical candle count
2. Fetch candles from Oanda API:
3. Save to `events/price-ohlcv/discovery/{instrument}-{tf}.md`
4. For validation (Step D3), also fetch the two adjacent timeframes (one higher, one lower) from the Timeframe Reference table:
Report: "Fetched {N} {tf} candles + adjacent TF data for {instrument}."
Using the `{tf}` candle data:
1. For each consecutive sequence of candles, compute:
2. Identify natural breakpoints where price behavior changes character:
3. Group similar segments by their (price%, duration) characteristics
4. Each group = one phase — name it descriptively from its behavior
5. Pick the most common phase as baseline (scale = 1.0 for both parameters)
6. Calculate relative scales for all other phases:
7. Build transition matrix: count how often each phase follows each other phase
Note: If only 2-3 phases are discoverable from the data, that is valid. Do not force more phases than the data supports. Shorter timeframes with less history will naturally find fewer phases.
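Step 7's transition matrix can be sketched as a pairwise count over the ordered segment labels. The phase names are illustrative, not from a real fingerprint:

```javascript
// Sketch: count how often each phase follows each other phase,
// given segment labels in chronological order.
function transitionMatrix(labels) {
  const counts = {};
  for (let i = 1; i < labels.length; i++) {
    const from = labels[i - 1], to = labels[i];
    counts[from] = counts[from] || {};
    counts[from][to] = (counts[from][to] || 0) + 1;
  }
  return counts;
}
```

Dividing each row by its total converts counts into the transition probabilities used later for anomaly detection.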
Write intermediate file `analyses/{asset}/{date}/market-structure/discovery-{tf}.md`:
# Phase Discovery ({tf}) — {instrument} — {date}
## Parameters
| Field | Value |
|---|---|
| Instrument | {instrument} |
| Timeframe | {tf} |
| Period | {start_date} to {end_date} |
| Candles Analyzed | {count} |
| Phases Discovered | {N} |
## Segment Labels
## Phase Summary
## Transition Counts
## Reasoning {Narrative explaining why these phases were identified and how breakpoints were chosen}
Validation uses the two adjacent timeframes from the Timeframe Reference table:
For each adjacent timeframe:
1. Label the adjacent-TF data using the `{tf}` fingerprint:
2. Compute the relative scales at this timeframe
3. Calculate ratio correlation between this timeframe's scales and the discovery timeframe's scales
4. Record results
Validation criteria:
If validation fails: The problem is in the discovery, not the timeframe. Go back to Step D2 and adjust (wider cluster tolerance, different phase count, re-examine breakpoints). Do NOT simply mark the fingerprint as "non-fractal" and move on.
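The pass/fail gate can be sketched as a minimal check, assuming the threshold stated in the finalization step (at least 2 of the 3 scale correlations at or above 0.80 per adjacent timeframe):

```javascript
// Sketch: per-adjacent-timeframe validation gate.
// Assumes the >=80% criterion applies to each correlation as r >= 0.80.
function validationPasses(priceR, rangeR, durationR) {
  const passing = [priceR, rangeR, durationR].filter(r => r >= 0.8).length;
  return passing >= 2; // at least 2 of 3 metrics must pass
}
```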
Write intermediate file `analyses/{asset}/{date}/market-structure/validation-{tf}.md`:
# Cross-Timeframe Validation ({tf}) — {instrument} — {date}
## Validation Results
| timeframe | phases_found | discovery_match | price_r | range_r | duration_r | status | notes |
|---|---|---|---|---|---|---|---|
| {tf} (source) | {N} | baseline | — | — | — | baseline | — |
| {adjacent_higher} | {N} | {N/N} | r={X.XX} | r={X.XX} | r={X.XX} | pass/investigate | {notes} |
| {adjacent_lower} | {N} | {N/N} | r={X.XX} | r={X.XX} | r={X.XX} | pass/investigate | {notes} |
## Per-Timeframe Detail
### {adjacent_higher}
{Repeat for adjacent_lower}
## Diagnostic Notes {If any timeframe failed: which phases were problematic, what was investigated, what was adjusted}
1. Using the `{tf}` data, hold out the most recent 20% of candles
2. Discover phases using only the first 80%
3. Classify the held-out 20% using the discovered fingerprint
4. Compare: do phase labels match? Do transition probabilities hold?
5. Calculate classification accuracy
Shallow TF exception: For H1 and below (H1, 30m, 15m, 1m), if total segments < 15, skip the out-of-sample test and note "insufficient data for holdout test — {N} segments found, minimum 15 required" in the output file.
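The holdout split and the accuracy score from the steps above can be sketched as follows; the segment labels are placeholders:

```javascript
// Sketch: time-ordered 80/20 split of segments for the holdout test.
function splitHoldout(segments, testFrac = 0.2) {
  const cut = Math.floor(segments.length * (1 - testFrac));
  return { train: segments.slice(0, cut), test: segments.slice(cut) };
}

// Sketch: classification accuracy (%) on the held-out labels.
function holdoutAccuracy(actual, predicted) {
  if (actual.length !== predicted.length) throw new Error('length mismatch');
  const hits = actual.filter((label, i) => label === predicted[i]).length;
  return (100 * hits) / actual.length;
}
```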
Write intermediate file `analyses/{asset}/{date}/market-structure/out-of-sample-{tf}.md`:
# Out-of-Sample Validation ({tf}) — {instrument} — {date}
| Field | Value |
|---|---|
| Timeframe | {tf} |
| Training Period | {start} to {split_date} |
| Test Period | {split_date} to {end} |
| Training Candles | {N} |
| Test Candles | {N} |
| Classification Accuracy | {X}% |
## Test Period Labels
## Assessment {Does the fingerprint generalize? Any drift or new behaviors in recent data?}
If validation passes (≥2 of 3 metrics ≥80% per adjacent timeframe, out-of-sample ≥65% accuracy or skipped for shallow TFs):
File naming:
[Truncated — full method has 485 lines]
Model how a composite operator would engineer price action to reach targets identified by other analysis tracks, maximizing retail pain and minimizing time at favorable prices. Based on Wyckoff composite operator theory.
The composite operator is a useful mental model — not a literal single entity. It represents the aggregate behavior of informed capital (market makers, institutional flow, algorithmic liquidity providers) whose structural advantages (speed, size, information) create predictable manipulation patterns against retail participants.
Anti-kelly is a synthesis track that reads from all other analysis tracks (fundamental, technical, regime, market structure) but does NOT feed into any of them. It sits alongside the standard OPP as an adversarial lens.
The trader compares anti-kelly paths against the standard OPP to understand:
Anti-kelly does NOT replace the OPP. It pressure-tests it. If the OPP entry aligns with where smart money is accumulating, conviction increases. If the OPP entry sits at a retail trap zone, the trader adjusts timing or skips the tier.
The anti-kelly script (`scripts/anti-kelly.js`) preprocesses all analysis track outputs into a single `_llm_input.json`. The LLM receives:
| Source | Data | Purpose |
|---|---|---|
| `_llm_input.json` current_price | bid/ask/mid | Reference point for all analysis |
| `_llm_input.json` all_targets | Price targets from all tracks | Destinations smart money drives toward |
| `_llm_input.json` liquidity_pools | Stop clusters computed from S/R levels | Pools smart money must sweep |
| `_llm_input.json` retail_traps | Trap zones from patterns/S/R | Where retail predictably enters |
| `_llm_input.json` wyckoff_templates | 5 manipulation sequence templates | Structure for path construction |
| `_llm_input.json` target_template_map | Which templates apply to each target | Pre-filtered for direction |
| `_llm_input.json` fundamental_summary | Scenarios, overall view | "Fair value" targets |
| `_llm_input.json` technical_summary | S/R, patterns, velocity, participants, targets | Price structure context |
| `_llm_input.json` regime_summary | Paths, result, alignment | Active regime context |
| `_llm_input.json` market_structure_summary | TF stack, escalation | Structural divergences |
Given the landscape data, select which manipulation primitives are currently relevant. For each primitive, provide:
Criteria for selection:
For each viable target, construct the optimal manipulation sequence. Rules:
For each step in the sequence, specify:
| Field | Description |
|---|---|
| Price level | Where this step occurs |
| Duration estimate | From velocity data — how long this phase takes |
| Retail experience | What retail thinks is happening (narrative) |
| Smart money action | What informed capital is actually doing (narrative) |
Example step: "Price drops to $82.50 (2 days). Retail sees breakdown confirmation, sells. Smart money absorbs selling, accumulates at structural support."
Assign probability to each path based on:
Individual path probabilities must be 0-100%. Paths are independent scenarios — total may exceed 100%.
Time profile: Calculate what percentage of total duration is retail-favorable vs smart-money-favorable. In a well-executed manipulation, retail-favorable time (where retail can enter at good prices) is < 20% of total duration. Smart money spends 60-80% of time in accumulation/distribution (appearing directionless) and 20-40% in markup/markdown (fast, decisive moves).
Efficiency: Calculate as `|target_price - current_price| / total_duration_in_days`. Higher efficiency means smart money covers more price distance per unit time. Compare across paths — the most efficient path is often the most likely, since informed capital minimizes time exposure.
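Both metrics can be sketched together. The step fields (`days`, `retailFavorable`) are assumed names, not the schema's actual columns:

```javascript
// Sketch: path efficiency ($/day) and retail-favorable time share (%)
// for a manipulation path made of sequential steps.
function pathMetrics(currentPrice, targetPrice, steps) {
  const totalDays = steps.reduce((s, st) => s + st.days, 0);
  const favorableDays = steps
    .filter(st => st.retailFavorable)
    .reduce((s, st) => s + st.days, 0);
  return {
    efficiency: Math.abs(targetPrice - currentPrice) / totalDays,
    retailFavorablePct: (100 * favorableDays) / totalDays,
  };
}
```

Under the time-profile rule above, a well-executed manipulation path should show `retailFavorablePct` below 20.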
Assess where current price sits in each path's sequence:
| Criterion | What to evaluate |
|---|---|
| Price | Is price at the level predicted for this phase? |
| Velocity | Does current velocity match expected phase velocity (e.g., slow grinding = accumulation)? |
| Volume | Does volume profile match (e.g., decreasing volume in accumulation, spike on spring)? |
| Retail behavior | Are retail participants behaving as predicted (e.g., exiting on shakeout, entering on false breakout)? |
Produce the result summary:
Review mechanical alignment scores from `_update_input.json`. The script has already computed price/velocity/duration scores for each path by comparing predicted vs actual values.
Override scores where LLM judgment differs from mechanical calculation. The mechanical score may miss nuances — for example, a price deviation that actually confirms the path rather than diverging from it (a spring that goes deeper than predicted is still a spring, and may indicate even stronger accumulation).
Provide reasoning for each override.
Update probabilities based on how well price tracked the predicted sequence:
Mark consumed primitives — liquidity pools that have been swept, retail traps that have been sprung. Update landscape status:
If price action suggests a manipulation pattern not in existing paths, generate new path(s). This is rare but important for capturing emerging patterns. Triggers for new path generation:
Score expired anti-kelly analyses against actual price outcomes, extracting product-specific lessons for future analyses.
For each predicted manipulation path, compare against actual H4 price data:
Override mechanical scores where LLM context adds value — e.g., a path that "missed" mechanically but captured the correct market dynamic is still a partial success.
For each predicted manipulation primitive:
For each predicted retail trap:
Synthesize path, primitive, and trap outcomes into recurring patterns:
Generate structured knowledge updates for merging into the knowledge file:
When historical knowledge exists (from prior reviews), the generate flow includes correlated analysis that adjusts standard output based on accumulated lessons.
Compare the current landscape (targets, pools, traps, market structure) against the manipulation fingerprint from the knowledge file:
Adjust path probabilities based on historical hit rates:
[Truncated — full method has 244 lines]
Identify trading opportunities by finding price points at the extremes of the probability distribution — prices unlikely to sustain — using the fundamental range as bounds, technical dynamics for precision, and regime nature for distribution shape. Outputs actionable entry/exit zones with JS-expressible trigger conditions for automated monitoring.
From `analyses/{asset}/{date}/`:
| File | Key Data Used |
|---|---|
| `fundamental/result.md` | Scenarios with probabilities + price ranges |
| `fundamental/influence-weights.md` | Factor weights for conviction assessment |
| `technical/result.md` | Price targets, bias, key levels |
| `technical/velocity.md` | Current velocity regime, momentum |
| `technical/patterns.md` | S/R levels, active patterns, pattern targets |
| `technical/participants.md` | Dominant participants, positioning |
| File | Key Data Used |
|---|---|
| `regime/result.md` | Active regime, distribution shape, price target |
| `regime/paths.md` | Active price paths with probabilities |
| `regime/alignment.md` | Path tracking status |
Fetch from Oanda API:
```
GET https://api-fxpractice.oanda.com/v3/instruments/{instrument}/candles?granularity=M1&count=1&price=M
Authorization: Bearer 618551d36d05948f75c12143303ccec4-9a77b49c0f8c42f1f90271f6022a3676
```
Read `assets/{asset}/SKILL.md` for instrument code, currency, unit.
Extract and probability-weight the fundamental price range.
1. Read `fundamental/result.md` — extract the scenarios table
2. For each scenario: extract probability, price_low, price_high, midpoint
3. Calculate:
4. Divide the range into probability zones:
5. Assess current price position: which zone is it in?
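The probability-weighted center from the steps above can be sketched as follows; the scenario field names are simplified assumptions:

```javascript
// Sketch: fair value center as the probability-weighted mean of
// scenario midpoints from the fundamental scenarios table.
function fairValueCenter(scenarios) {
  const totalProb = scenarios.reduce((s, sc) => s + sc.probability, 0);
  const weighted = scenarios.reduce(
    (s, sc) => s + sc.probability * (sc.priceLow + sc.priceHigh) / 2, 0);
  return weighted / totalProb; // normalize in case probabilities sum to ~100
}
```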
Output format — write to `trades/{asset-slug}/opportunities/OPP-{timestamp}/range-bounds.md`:
```markdown
| Field | Value |
|---|---|
| Tradeable Range Low | ${low} |
| Tradeable Range High | ${high} |
| Fair Value Center | ${center} |
| Tail Low | ${tail_low} |
| Tail High | ${tail_high} |
| Current Price | ${current} |
| Current Zone | {core/extended-low/extended-high/tail-low/tail-high} |
| zone | low | high | probability | description |
|---|---|---|---|---|
| tail-low | ${tail_low} | ${ext_low_boundary} | {prob}% | Extreme undervaluation — high-conviction entry |
| extended-low | ${ext_low_boundary} | ${core_low} | {prob}% | Below fair value — moderate entry |
| core | ${core_low} | ${core_high} | {prob}% | Fair value range — no edge |
| extended-high | ${core_high} | ${ext_high_boundary} | {prob}% | Above fair value — moderate exit |
| tail-high | ${ext_high_boundary} | ${tail_high} | {prob}% | Extreme overvaluation — high-conviction exit |
| scenario | probability | low | high | midpoint |
|---|---|---|---|---|
| {scenario name} | {prob}% | ${low} | ${high} | ${mid} |
```
Map technical structure within the fundamental range.
1. Read `technical/patterns.md` — extract S/R levels, active patterns, targets
2. Read `technical/velocity.md` — current velocity regime, momentum direction
3. Read `technical/participants.md` — dominant participant, phase (accumulation/markup/distribution/markdown)
4. Read `technical/result.md` — technical price targets
5. Map S/R levels onto the probability zones from Step 1:
6. Identify overshoot zones:
7. Assess velocity context:
Output format — write to `trades/{asset-slug}/opportunities/OPP-{timestamp}/price-dynamics.md`:
```markdown
| Field | Value |
|---|---|
| Velocity Regime | {trending/ranging/transitioning} |
| Momentum | {direction and strength} |
| Dominant Participant | {cohort} |
| Participant Phase | {accumulation/markup/distribution/markdown} |
| level | price | type | zone_alignment | strength | overshoot_potential |
|---|---|---|---|---|---|
| {name} | ${price} | {support/resistance} | {which probability zone} | {strong/moderate/weak} | {high/medium/low} |
| zone | entry_price | revert_target | expected_duration | probability | rationale |
|---|---|---|---|---|---|
| {name} | ${price} | ${revert_to} | {timeframe} | {prob}% | {why this overshoot is likely to revert} |
{2-3 sentences on how dominant participant positioning affects entry/exit timing}
```
Adjust the probability distribution based on regime behavior.
1. Check if `regime/result.md` exists
2. If regime exists:
3. If no regime analysis exists:
4. Adjust time windows:
Output format — write to `trades/{asset-slug}/opportunities/OPP-{timestamp}/regime-adjustment.md`:
```markdown
| Field | Value |
|---|---|
| Active Regime | {regime name or "None"} |
| Regime Confidence | {high/medium/low or "N/A"} |
| Distribution Adjustment | {skew-bullish/skew-bearish/tighten/widen/none} |
| Time Window Adjustment | {extend/compress/none} |

| zone | original_low | original_high | adjusted_low | adjusted_high | adjusted_prob | adjustment_rationale |
|---|---|---|---|---|---|---|
| tail-low | ${orig} | ${orig} | ${adj} | ${adj} | {prob}% | {why adjusted} |
| extended-low | ... | ... | ... | ... | ... | ... |
| core | ... | ... | ... | ... | ... | ... |
| extended-high | ... | ... | ... | ... | ... | ... |
| tail-high | ... | ... | ... | ... | ... | ... |

| path | probability | target | status | implication_for_entry |
|---|---|---|---|---|
| {PATH ID} | {prob}% | ${target} | {tracking/diverging} | {how this path affects entry/exit zones} |

| action_type | base_window | adjusted_window | rationale |
|---|---|---|---|
| Entry hold | {base} | {adjusted} | {regime effect on hold time} |
| Exit target | {base} | {adjusted} | {regime effect on target time} |
| Stop review | {base} | {adjusted} | {regime effect on stop monitoring} |
```
[Truncated — full method has 744 lines]
Produce timestamped PAT/PF/PD snapshots that decompose the current price into weighted factor contributions, generate multi-track forecasts, and (if a prior snapshot exists) explain price changes between snapshots. Reads existing fundamental + technical + regime intermediates — never re-runs them.
From `analyses/{asset}/{date}/`:
| File | Key Data Used |
|---|---|
| `fundamental/influence-weights.md` | Factor names + weight_pct (sums to 100%) |
| `fundamental/result.md` | Scenarios with probabilities + price targets |
| `technical/patterns.md` | Key levels + pattern targets + reliability |
| `technical/participants.md` | Cohort positions (institutional/commercial/speculative/retail) |
| `technical/result.md` | Price targets with timeframes |
| `regime/result.md` | Active regime, best path, price target, confidence (optional — used if regime analysis exists) |
Read `assets/{asset}/SKILL.md` if it exists, to obtain:
Check `analyses/{asset}/{date}/snapshots/` for existing PAT files to:
Call the Oanda API for the asset's instrument:
```
GET https://api-fxpractice.oanda.com/v3/instruments/{instrument}/candles?granularity=M1&count=1&price=M
Authorization: Bearer {OANDA_API_TOKEN}
```
Extract the latest mid close price. Record the timestamp in ISO 8601 format (e.g., `2026-03-08T14:00:00Z`).
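Assuming the documented Oanda v3 candles response shape (`candles[].time` in RFC 3339 with fractional seconds, mid prices delivered as strings), the extraction might look like:

```javascript
// Pull the latest mid close and a second-precision ISO 8601 timestamp
// from an Oanda v3 candles response:
// { candles: [{ time, complete, mid: { o, h, l, c } }] }
function latestMidClose(response) {
  const candles = response.candles;
  if (!candles || candles.length === 0) {
    throw new Error('No candles in response');
  }
  const last = candles[candles.length - 1];
  return {
    price: Number(last.mid.c),                   // mid prices arrive as strings
    timestamp: last.time.replace(/\.\d+Z$/, 'Z'), // drop nanoseconds, keep ISO 8601
  };
}
```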
Scan `analyses/{asset}/{date}/snapshots/` for existing files:
For PF IDs, check both the snapshots directory AND the consolidated file `analyses/{asset-slug}-{date}.md` (if it exists) for the highest existing PF ID. The consolidated file may contain analysis-generated PFs (PF001, PF002) from the web-consolidation skill. The next snapshot PF must use the highest existing PF number + 1 to avoid ID collisions.
If no snapshots directory exists and no consolidated file exists, start at PAT001, PF001, PD001.
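The collision-free ID allocation can be sketched with a hypothetical helper that scans raw text (snapshot file names plus the consolidated file contents) for existing `{prefix}{NNN}` markers and returns the next free ID:

```javascript
// Next free snapshot ID: highest existing {prefix}{NNN} across all
// supplied text blobs, plus one. Works for PAT, PF, and PD alike.
function nextId(prefix, texts) {
  const pattern = new RegExp(`${prefix}(\\d{3})`, 'g');
  let highest = 0;
  for (const text of texts) {
    for (const match of text.matchAll(pattern)) {
      highest = Math.max(highest, Number(match[1]));
    }
  }
  return `${prefix}${String(highest + 1).padStart(3, '0')}`;
}
```

With no prior snapshots or consolidated file, the helper naturally starts at `{prefix}001`.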
Map each factor from `fundamental/influence-weights.md` to a price component:
1. Read `influence-weights.md` — each row has: factor, weight_pct, rationale
2. For each factor:
3. Sum all components
4. Residual = current_price - sum_of_components (should be small; format as `$X.XX/unit`)
5. Validation note = `Components sum to $X.XX vs actual $Y.YY`
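Steps 3-5 can be sketched as follows, with the output strings formatted to match the schema text above (the function and field names are illustrative):

```javascript
// Sum component values, compute the residual against the actual price,
// and emit the Residual / Validation strings from the PAT schema.
function validateDecomposition(components, currentPrice, unit) {
  const sum = components.reduce((total, c) => total + c.value, 0);
  const residual = currentPrice - sum; // may be negative; should be small
  return {
    residual: `$${residual.toFixed(2)}/${unit}`,
    validation: `Components sum to $${sum.toFixed(2)} vs actual $${currentPrice.toFixed(2)}`,
  };
}
```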
Assign each component an ID: PA001, PA002, etc.
Output format — write to `analyses/{asset}/{date}/snapshots/PAT-{timestamp}.md`:
```markdown
| Field | Value |
|---|---|
| Instrument | {instrument} |
| Price | ${price}/{unit} |
| Currency | {currency} |
| Unit | {unit} |
| Timestamp | {ISO 8601 timestamp} |
| Trigger | Scheduled point-in-time snapshot |
| Trigger Ref | - |

| id | component | category | value | percent | basis | trend | confidence | references |
|---|---|---|---|---|---|---|---|---|
| PA001 | {factor name} | {category} | {+/-$X.XX} | {weight}% | {rationale} | {trend} | {confidence} | - |
| PA002 | ... | ... | ... | ... | ... | ... | ... | - |

Residual: {$X.XX/unit}
Validation: Components sum to ${sum} vs actual ${price}
```
IMPORTANT: The file contains a single `### PAT{NNN}` section. The key-value table comes first, then the components table, then the Residual and Validation lines. This format matches what the web parser expects.
Build a multi-track forecast combining fundamental, participant, and pattern tracks:
Fundamental track:
Participant track:
Pattern track:
Regime track (if regime analysis exists):
Track weights:
If regime analysis exists (4 tracks):
If no regime analysis (3 tracks, original weights):
Composite calculation:
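The composite details are truncated here; one plausible reading, weighting each track's point estimate and band by the weights stated later in this section (40/30/30 without a regime track, 35/25/20/20 with one), is:

```javascript
// Weighted composite across forecast tracks. Weight sets follow the
// note in this section: 35/25/20/20 with regime, 40/30/30 without.
function compositeForecast(tracks) {
  const hasRegime = tracks.some((t) => t.track === 'regime');
  const weights = hasRegime
    ? { fundamental: 35, participant: 25, pattern: 20, regime: 20 }
    : { fundamental: 40, participant: 30, pattern: 30 };
  let price = 0, low = 0, high = 0;
  for (const t of tracks) {
    const w = weights[t.track] / 100;
    price += w * t.predictedPrice;
    low += w * t.predictedLow;
    high += w * t.predictedHigh;
  }
  return { price, low, high };
}
```

Whether the composite band should be a weighted average of track bands (as here) or an envelope of them is not specified; this sketch assumes the former.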
Output format — write to `analyses/{asset}/{date}/snapshots/PF-{timestamp}.md`:
```markdown
| Field | Value |
|---|---|
| Created | {ISO 8601 datetime from snapshot timestamp} |
| Last Validated | {ISO 8601 datetime — same as Created initially} |
| Valid From | {forecastDate from PF table} |
| Valid To | {targetDate from PF table} |
| Trading Days | {business days between Valid From and Valid To} |
| Calendar Days | {total days between Valid From and Valid To} |
| Data Window | {forecastDate} to {forecastDate} |

| id | instrument | forecastDate | targetDate | targetTimeframe | compositePrice | compositeLow | compositeHigh | compositeConfidence | status | actualClose | error | errorPercent | references |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| PF{NNN} | {instrument} | {date} | {target_date} | 2-4 trading days | {composite} | {low} | {high} | {confidence} | active | - | - | - | PAT{NNN} |

| track | method | predictedPrice | predictedLow | predictedHigh | confidence | weight | reasoning | references |
|---|---|---|---|---|---|---|---|---|
| fundamental | scenario-weighted | {price} | {low} | {high} | {conf} | 35 | {reasoning} | - |
| participant | positioning-flow | {price} | {low} | {high} | {conf} | 25 | {reasoning} | - |
| pattern | technical-level | {price} | {low} | {high} | {conf} | 20 | {reasoning} | - |
| regime | historical-analog | {price} | {low} | {high} | {conf} | 20 | {reasoning} | - |
```
Note: The regime track row is only included when regime analysis exists. When present, use the 4-track weights (35/25/20/20). When absent, use the 3-track weights (40/30/30) and omit the regime row.
IMPORTANT: The PF file contains both the summary table under `## Price Forecasts` and the track detail under `### PF{NNN}`. The `references` column in the summary table links to the associated PAT snapshot ID.
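The Trading Days and Calendar Days fields can be derived from Valid From / Valid To. This sketch counts Mon-Fri only (exchange holidays ignored) and treats the window as from-inclusive, to-exclusive; both are assumptions, since the spec does not pin down endpoint handling:

```javascript
// Business and calendar day counts between two ISO dates, for the
// Trading Days / Calendar Days fields. Holidays are not handled.
function dayCounts(fromIso, toIso) {
  const from = new Date(fromIso);
  const to = new Date(toIso);
  const msPerDay = 24 * 60 * 60 * 1000;
  const calendarDays = Math.round((to - from) / msPerDay);
  let tradingDays = 0;
  for (let d = new Date(from); d < to; d.setUTCDate(d.getUTCDate() + 1)) {
    const dow = d.getUTCDay();
    if (dow !== 0 && dow !== 6) tradingDays += 1; // skip Sun (0) and Sat (6)
  }
  return { tradingDays, calendarDays };
}
```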
If a prior PAT snapshot exists in `analyses/{asset}/{date}/snapshots/`:
1. Read the most recent prior PAT file
2. For each component in the current PAT, find the matching component in the prior PAT (by component name)
3. Calculate delta = current_value - prior_value for each component
4. Determine status: `increased`, `decreased`, `unchanged`, `new` (no prior match), `removed` (in prior but not current)
5. Explained = sum of all component deltas
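Steps 2-5 above can be sketched as a name-keyed diff. Treating a new component's delta as its full value and a removed component's as its negation is an assumption, chosen so that "Explained = sum of all component deltas" stays internally consistent:

```javascript
// Diff two PAT component sets by name, producing per-component deltas
// with the statuses from step 4 and the explained total from step 5.
function diffComponents(prior, current) {
  const priorByName = new Map(prior.map((c) => [c.component, c.value]));
  const rows = [];
  let explained = 0;
  for (const c of current) {
    if (!priorByName.has(c.component)) {
      rows.push({ component: c.component, delta: c.value, status: 'new' });
      explained += c.value;
      continue;
    }
    const delta = c.value - priorByName.get(c.component);
    const status = delta > 0 ? 'increased' : delta < 0 ? 'decreased' : 'unchanged';
    rows.push({ component: c.component, delta, status });
    explained += delta;
    priorByName.delete(c.component); // leftovers are 'removed'
  }
  for (const [component, value] of priorByName) {
    rows.push({ component, delta: -value, status: 'removed' });
    explained -= value;
  }
  return { rows, explained };
}
```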
[Truncated — full method has 244 lines]
Discover inter-product relationships and produce spread/pair trading opportunities by analyzing relative price dynamics between products. Unlike directional analysis tracks, this track reasons about relative price changes — predicting spread widening/narrowing rather than up/down.
| Input | Source | Purpose |
|---|---|---|
| Fundamental results | `analyses/{asset}/{date}/fundamental/result.md` | Directional bias, key drivers, supply/demand factors |
| Technical results | `analyses/{asset}/{date}/technical/result.md` | Price velocity, participant behavior, patterns |
| Regime results | `analyses/{asset}/{date}/regime/result.md` | Product-specific behavior context |
| Raw events | `events/{type}/{date}/*.md` | Cross-product co-occurrence scanning |
| Asset skills | `assets/{asset}/SKILL.md` | Product names, aliases, tickers for keyword matching |
| Knowledge library | `assets/cross-product/relationships.md` | Persistent confirmed relationships |
| Discovered instruments | `assets/cross-product/instruments.md` | Oanda tradability for untracked products |
| Price data | Oanda REST API | Spread history, correlation matrices |
1. Run `node scripts/cross-product.js prepare-graph` — parses all per-asset analyses, extracts prices/velocity/bias, scans events for product keyword co-occurrences, computes price correlation matrix
2. Script outputs `xp-graph-state.json` + `xp-graph-llm-input.json` to `analyses/.tmp/`
3. LLM reads `xp-graph-llm-input.json` and produces edge judgments (next step)
Script/LLM boundary:
Edge types: `substitution`, `supply_displacement`, `macro_cascade`, `input_cost`, `correlation_break`, `flow_rotation`
Prune edges with importance < 30.
1. LLM fills edge judgment JSON with: product_a, product_b, relationship_type, direction, importance (0-100), evidence, timeframe_days
2. Run `node scripts/cross-product.js assemble-graph` — graph traversal, chain tracing, pruning
3. Script outputs `nodes.md`, `edges.md`, `chains.md` to `analyses/_cross-product/{date}/graph/`
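A minimal sketch of the edge-judgment records and the importance pruning described above (product names, evidence, and values are placeholders; the field names come from step 1 and the threshold from the pruning rule):

```javascript
// Edge judgments as the LLM fills them, then the importance < 30
// pruning applied before graph assembly.
const edges = [
  { product_a: 'wti', product_b: 'brent', relationship_type: 'substitution',
    direction: 'convergent', importance: 85, evidence: '...', timeframe_days: 10 },
  { product_a: 'wti', product_b: 'copper', relationship_type: 'macro_cascade',
    direction: 'positive', importance: 20, evidence: '...', timeframe_days: 30 },
];

const kept = edges.filter((e) => e.importance >= 30); // prune importance < 30
```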
Handled by `assemble-graph` script:
For each significant relationship or chain:
1. LLM generates structured cross-product events using schema from `events/cross-product/_schema.md`
2. Columns: timestamp, source_event, products, relationship_type, direction, importance, timeframe, evidence
3. LLM also identifies information gaps → `source-recommendations.md`
1. Run `node scripts/cross-product.js prepare-spreads` — fetches spread price history from Oanda, computes stats (mean, stddev, z-score, percentiles), lookback scaled by relationship type
2. Script outputs `xp-spread-state.json` + `xp-spread-llm-input.json` to `analyses/.tmp/`
3. LLM fills spread judgment JSON with: win_probability, target_spread, stop_spread, entry prices, leg directions, confidence, narrative, invalidation
4. Run `node scripts/cross-product.js assemble-spreads` — Kelly calculation, markdown assembly
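The spread statistics from step 1 can be sketched as follows; a population standard deviation is assumed, since the script's exact convention is not stated:

```javascript
// Spread statistics over a lookback window: mean, population stddev,
// and the z-score of the latest spread relative to the window.
function spreadStats(spreads) {
  const n = spreads.length;
  const mean = spreads.reduce((a, b) => a + b, 0) / n;
  const variance = spreads.reduce((a, s) => a + (s - mean) ** 2, 0) / n;
  const stddev = Math.sqrt(variance);
  const latest = spreads[n - 1];
  return { mean, stddev, zScore: stddev === 0 ? 0 : (latest - mean) / stddev };
}
```

A large absolute z-score flags a stretched spread, which is what the LLM's win_probability and revert-target judgments in step 3 key off.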
Per-pair output:
Handled by `assemble-spreads` script:
| File | Location | Temporal Validity |
|---|---|---|
| `nodes.md` | `_cross-product/{date}/graph/` | Snapshot day (refreshed daily) |
| `edges.md` | `_cross-product/{date}/graph/` | Until driving event expires |
| `chains.md` | `_cross-product/{date}/graph/` | Bounded by weakest link |
| `relationship.md` | `_cross-product/{date}/spreads/{pair}/` | Adaptive per relationship type |
| `spread-analysis.md` | `_cross-product/{date}/spreads/{pair}/` | Matches relationship timeframe |
| `result.md` | `_cross-product/{date}/` | Union of all spread windows |
| `source-recommendations.md` | `_cross-product/{date}/` | Informational, no expiry |
| Cross-product events | `events/cross-product/{date}/` | Per event type defaults |
| Consolidated | `analyses/cross-product-{date}.md` | Matches result.md |
Roll-up produces higher-level summary events from lower-level events. Each roll-up level is itself an event, stored in the appropriate timeframe file. The LLM decides what matters at each level.
Read `events/{type}/_schema.md`. Check the `Roll-up` section:
For template-based types (price-ohlcv, economic-release, inventory, positioning):
1. Read all lower-level event files for the roll-up period
2. Apply the aggregation rule defined in the schema:
3. Write the summary using the schema's summary template
4. Save to the appropriate timeframe file
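For the price-ohlcv type, a typical aggregation rule is first open, max high, min low, last close, summed volume. The actual rule lives in the type's `_schema.md`; this sketch assumes the conventional one:

```javascript
// Roll lower-timeframe OHLCV rows up into a single higher-timeframe
// row: open of first bar, extreme high/low, close of last, total volume.
function rollUpOhlcv(rows) {
  return {
    open: rows[0].open,
    high: Math.max(...rows.map((r) => r.high)),
    low: Math.min(...rows.map((r) => r.low)),
    close: rows[rows.length - 1].close,
    volume: rows.reduce((sum, r) => sum + r.volume, 0),
  };
}
```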
For LLM-based types (geopolitical, policy):
1. Read all lower-level event files for the roll-up period
2. Follow the roll-up instruction in the schema
3. Produce a structured summary that:
4. Save to the appropriate timeframe file
Save the summary event in the correct location:
The output is a new event file following the same type schema, but at a higher timeframe. For template types, it's a table row. For LLM types, it's a summary section plus a table row.
```markdown
# {Type} {Period} Summary — {date range}

## Summary

{LLM-generated narrative summary}

## Events

| {same columns as source schema} |
| ... rolled-up/summarized rows ... |
```
None — roll-up is a single-step transformation.
None — roll-up operates within a single event type.