About Almanac
We're cartographers, not prophets. We map the territory of possible futures with no stake in which one arrives.
What This Is
Almanac is an AI-powered forecasting system that tracks how software engineering will change over the next 3-30 years. Every day, our pipeline scrapes 17+ data sources, filters hundreds of signals through tiered AI models, and produces a confidence-scored forecast. We track 55 specific, falsifiable predictions and update them daily using Bayesian likelihood ratio analysis with a 10-persona forecaster panel.
What This Is Not
We are not an oracle. We are not financial advisors. Our predictions are probabilistic estimates based on available evidence, and they are frequently wrong. The value isn't in any single prediction being correct — it's in the discipline of tracking, updating, and honestly reporting our accuracy over time.
How We're Different
Almanac
- 55 SWE-specific predictions
- Daily automated Bayesian updates
- 10-persona panel with evidence trails
- Free, open methodology

Community forecasting platforms
- Broad topics (not SWE-focused)
- Crowd-sourced human forecasters
- Large resolved question library
- Free, community-driven

Financial prediction markets
- Real money at stake
- Mostly politics/crypto/events
- Not SWE career-focused
Methodology
Data Collection
Every day at 06:00 UTC, automated scrapers collect data from 17+ sources: Hacker News, arXiv, 5 RSS feeds (TechCrunch, Ars Technica, MIT Tech Review, The Verge, Wired), GitHub Trending, Stack Overflow, Semantic Scholar, FRED economic data, SEC EDGAR filings, Reddit, Lobsters, investor relations feeds, and X/Twitter. This yields 500+ raw items per day.
Signal Filtering
Tiered AI models evaluate each item for relevance to the future of software engineering (3-30 year horizon). Items whose recency-adjusted relevance score falls below 0.5 are discarded. Typically 20-30 signals survive filtering. Each signal is classified by type: AI coding, job market, skills, regulation, tooling, or paradigm.
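The threshold step can be sketched in a few lines. This is a minimal illustration, not the pipeline's code: the `Signal` fields and the multiplicative recency adjustment are our assumptions, and in the real pipeline the scores come from the AI models.

```python
from dataclasses import dataclass

RELEVANCE_THRESHOLD = 0.5  # items below this recency-adjusted score are discarded


@dataclass
class Signal:
    title: str
    relevance: float       # model-assigned relevance, 0..1
    recency_weight: float  # 1.0 for fresh items, decaying with age (assumed form)
    signal_type: str       # e.g. "AI coding", "job market", "skills", ...


def filter_signals(items: list[Signal]) -> list[Signal]:
    """Keep only items whose recency-adjusted relevance clears the 0.5 bar."""
    return [s for s in items if s.relevance * s.recency_weight >= RELEVANCE_THRESHOLD]
```

A fresh, highly relevant item (0.8 relevance, 0.9 recency weight) survives; an older, middling one (0.6 relevance, 0.5 recency weight) does not.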
Narrative Synthesis
The top signals are synthesized into a coherent daily narrative. The AI is provided with yesterday's executive summary for continuity. It identifies the day's most significant developments and explains their 3-30 year implications.
Prediction Update
Each of our 55 standing predictions is evaluated against the day's evidence via Bayesian likelihood ratio analysis. A 10-persona forecaster panel votes independently. Confidence can move up to +/-5 percentage points per day. All movements are logged with evidence trails.
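A minimal sketch of one day's update, assuming confidence is stored as a percentage and each persona emits one likelihood ratio. The function and parameter names are ours, not the pipeline's; only the odds-times-LR step, the median, and the +/-5-point cap come from the description above.

```python
import statistics


def update_confidence(prior_pct: float, persona_lrs: list[float],
                      max_move_pct: float = 5.0) -> float:
    """One day's Bayesian likelihood-ratio update (sketch).

    prior_pct: current confidence as a percentage, strictly between 0 and 100.
    persona_lrs: one likelihood ratio per forecaster persona.
    """
    lr = statistics.median(persona_lrs)            # median: no single persona dominates
    prior_odds = prior_pct / (100.0 - prior_pct)   # probability -> odds
    posterior_odds = prior_odds * lr               # Bayes: posterior odds = prior odds x LR
    posterior_pct = 100.0 * posterior_odds / (1.0 + posterior_odds)
    # Daily movement is capped at +/- max_move_pct percentage points
    delta = max(-max_move_pct, min(max_move_pct, posterior_pct - prior_pct))
    return prior_pct + delta
```

Ten personas at LR=1.0 leave a 50% prediction at 50%, and a lone outlier at LR=5.0 among nine LR=1.0 votes is ignored by the median; even unanimous extreme evidence moves the prediction by at most 5 points in a day.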
Publication
The report is committed to our private repository (with prediction deltas in the commit message), then pushed to the public site. The entire pipeline runs in under 10 minutes. Every report is timestamped and immutable.
The 10 Forecaster Personas
Each prediction is independently evaluated by 10 AI personas with different worldviews. The median likelihood ratio is used — no single perspective dominates.
Techno-Optimist
Believes AI will accelerate everything. Weights positive adoption signals heavily.
Labor Economist
Focuses on employment data, wage trends, historical automation parallels.
Security Hawk
Highlights risks, vulnerabilities, and regulatory responses to AI-generated code.
Contrarian
Systematically challenges consensus. Asks 'what if the opposite happens?'
Base Rate Empiricist
Anchors on historical base rates. Skeptical of 'this time is different' narratives.
Startup Founder
Bullish on disruption speed. Weights funding/valuation signals. Believes incumbents are slow.
Enterprise Architect
Conservative on adoption timelines. Knows procurement cycles take 18-24 months. Demo ≠ deployed.
Open Source Advocate
Bullish on community-driven development. Believes open-source always wins long-term.
Regulatory Watcher
Believes regulation is coming faster than industry expects. EU AI Act, liability trends.
Developer Educator
Focused on skills, bootcamps, CS enrollment. Believes talent pipeline adapts faster than pessimists predict.
Design Decisions
The numbers behind our methodology aren't arbitrary — here's why we chose them.
Damping factor (0.3)
At 1.0, a single day's evidence from a noisy source causes multi-percentage-point swings that reverse the next day. At 0.3, it takes consistent evidence across several days to meaningfully shift a prediction, which matches how real-world trends actually move.

Daily movement cap (+/-5 points)
Prevents overreaction to single events. Even genuinely significant developments (a major acquisition, a breakthrough paper) need time for second-order effects to become clear. Big shifts should accumulate over weeks, not happen overnight.

Median aggregation
Using the median (not mean) of 10 likelihood ratios is robust to outlier personas. If the Techno-Optimist gives LR=5.0 and everyone else gives ~1.0, the median ignores the outlier. This prevents any single worldview from dominating.

Hard confidence bounds
Nothing is ever truly 0% or 100%: there's always a chance we're wrong about the question itself, or the world changes in ways nobody predicted. Hard bounds acknowledge irreducible uncertainty.
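Composed together, these decisions can be sketched as one post-processing step on a raw Bayesian move. This is an illustration under stated assumptions: applying the 0.3 damping multiplicatively to the day's delta, and the 1%/99% floor and ceiling, are our choices for the sketch; the exact bound values are not stated here.

```python
def apply_design_constraints(prior_pct: float, raw_posterior_pct: float,
                             damping: float = 0.3, max_move_pct: float = 5.0,
                             floor_pct: float = 1.0, ceiling_pct: float = 99.0) -> float:
    """Damp, cap, and bound one day's raw Bayesian move (sketch).

    floor_pct/ceiling_pct are illustrative values; the point is only that
    confidence never reaches a hard 0% or 100%.
    """
    delta = (raw_posterior_pct - prior_pct) * damping           # damping factor 0.3
    delta = max(-max_move_pct, min(max_move_pct, delta))        # +/-5 pp daily cap
    return max(floor_pct, min(ceiling_pct, prior_pct + delta))  # hard bounds
```

For example, a raw jump from 50% to 90% is first damped to a 12-point move, then capped at 5 points, landing at 55%.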
Try the Formula Yourself
This calculator runs the exact same Bayesian update formula our pipeline uses every day. Use the presets to see how source credibility and evidence strength interact.
Bayesian Update Calculator
Mirrors the exact formula used by our daily pipeline. Adjust sliders to see how evidence moves a prediction.
Reading Confidence Scores
High confidence
Strong evidence base, clear trend direction. We'd be surprised if this doesn't happen.

Moderate confidence
Good evidence but significant uncertainty remains. Could go either way with new developments.

Low confidence
Genuinely uncertain. Included because the question matters, not because we're confident.
Confidence scores are clamped to prevent overreaction to single-day signals. Big shifts accumulate over weeks.
Tech Stack
Open Source
The website frontend is open source. The forecasting pipeline methodology, predictions, and accuracy record are all published publicly so you can evaluate our work without needing the code.