Quick Start: your first 10 minutes with mcprojsim
This guide is for end users who want to get from installation to a first simulation as quickly as possible.
It intentionally follows one tactical happy path: one install method, one tiny example, one simulation command, and one report export. For broader install options, deeper explanations, and extended workflows, continue with the User Guide chapter at docs/user_guide/getting_started.md.
In the next few minutes you will:
- install mcprojsim
- create a tiny project file
- validate it
- run a simulation
- open the generated report
Before you start
You need:
- Python 3.13 or newer
- a terminal
- pipx for the easiest CLI install
If you do not have pipx yet, install it first:
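On most systems you can install pipx with pip itself (see the pipx documentation for platform-specific instructions):

```shell
python3 -m pip install --user pipx
python3 -m pipx ensurepath
```

Restart your terminal afterwards so the updated PATH takes effect.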
1. Install mcprojsim
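With pipx available, installation should be a single command (assuming the package is published under the name mcprojsim):

```shell
pipx install mcprojsim
```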
Verify that it works:
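If the CLI follows common conventions, a version flag is the quickest smoke test (the exact version string depends on your installed release):

```shell
mcprojsim --version
```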
2. Create your first project file
The quickest way to create a project file is to describe your project in plain text and let the mcprojsim generate command produce the YAML for you.
Create a file named description.txt:
```text
Project name: Website Refresh
Description: Small example project
Start date: 2026-04-01
Task 1:
- Design updates
- Size: S
Task 2:
- Frontend changes
- Depends on Task 1
- Size: M
```
Generate the project file:
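The exact flags may differ in your release (check mcprojsim generate --help); a plausible invocation looks like:

```shell
mcprojsim generate description.txt -o quickstart_project.yaml
```

Here the -o option and the output path are assumptions for illustration.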
That is it — the generated quickstart_project.yaml is ready for validation and simulation. You can use T-shirt sizes (XS, S, M, L, XL, XXL), including qualified forms like epic.M, story points, or explicit low/expected/high estimates. Bare T-shirt values resolve via the configured default category. See the MCP Server & Natural Language Input guide for the full input format.
Alternative: write the YAML by hand
If you prefer full control, create quickstart_project.yaml manually where you can specify all available fields. The minimum required fields are project.name, project.start_date, and at least one task with an estimate. In the previous example we used T-shirt sizes for the estimates, but here is the same project with explicit low/expected/high estimates in days:
```yaml
project:
  name: "Website Refresh"
  description: "Small example project"
  start_date: "2026-04-01"
  confidence_levels: [50, 80, 90]

tasks:
  - id: "task_001"
    name: "Design updates"
    estimate:
      low: 2
      expected: 3
      high: 5
      unit: "days"
  - id: "task_002"
    name: "Frontend changes"
    estimate:
      low: 4
      expected: 6
      high: 10
      unit: "days"
    dependencies: ["task_001"]
```
See the project file reference for all available fields.
This example is intentionally small — two tasks and one dependency — but that is enough for a meaningful first simulation.
3. Validate the file
Before simulating, validate the input:
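Assuming the subcommand is named validate, the invocation is:

```shell
mcprojsim validate quickstart_project.yaml
```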
Expected result: validation succeeds with no errors reported.
If validation fails, read the reported field name and fix the YAML file before continuing.
4. Run your first simulation
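This is the command whose example output appears later in this section:

```shell
mcprojsim simulate quickstart_project.yaml --seed 42 --table
```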
What this does:
- runs the default number of simulation iterations
- uses --seed 42 so the result is reproducible
- prints a summary to the terminal
You should see a summary with values such as:
- mean duration (in hours and working days)
- median (P50)
- higher-confidence targets such as P80 and P90
- projected delivery dates (weekends excluded)
The output should look like the following:
(Example output only: exact version string, timing, and numeric values depend on your installed release and random inputs.)
% mcprojsim simulate quickstart_project.yaml --seed 42 --table
mcprojsim, version 0.11.2
Progress: 100.0% (10000/10000)
Simulation time: 0.62 seconds
Peak simulation memory: 852.00 KiB
=== Simulation Results ===
Project Overview:
┌────────────────────┬─────────────────┐
│ Field │ Value │
├────────────────────┼─────────────────┤
│ Project │ Website Refresh │
│ Hours per Day │ 8.0 │
│ Max Parallel Tasks │ 1 │
│ Schedule Mode │ dependency_only │
└────────────────────┴─────────────────┘
Calendar Time Statistical Summary:
┌──────────────────────────┬────────────────────────────────┐
│ Metric │ Value │
├──────────────────────────┼────────────────────────────────┤
│ Mean │ 126.93 hours (16 working days) │
│ Median (P50) │ 125.74 hours │
│ Std Dev │ 17.68 hours │
│ Minimum │ 78.43 hours │
│ Maximum │ 184.27 hours │
│ Coefficient of Variation │ 0.1393 │
│ Skewness │ 0.2267 │
│ Excess Kurtosis │ -0.4206 │
└──────────────────────────┴────────────────────────────────┘
Project Effort Statistical Summary:
┌──────────────────────────┬──────────────────────────────────────┐
│ Metric │ Value │
├──────────────────────────┼──────────────────────────────────────┤
│ Mean │ 126.93 person-hours (16 person-days) │
│ Median (P50) │ 125.74 person-hours │
│ Std Dev │ 17.68 person-hours │
│ Minimum │ 78.43 person-hours │
│ Maximum │ 184.27 person-hours │
│ Coefficient of Variation │ 0.1393 │
│ Skewness │ 0.2267 │
│ Excess Kurtosis │ -0.4206 │
└──────────────────────────┴──────────────────────────────────────┘
Calendar Time Confidence Intervals:
┌──────────────┬─────────┬────────────────┬────────────┐
│ Percentile │ Hours │ Working Days │ Date │
├──────────────┼─────────┼────────────────┼────────────┤
│ P50 │ 125.74 │ 16 │ 2026-04-23 │
│ P80 │ 142.59 │ 18 │ 2026-04-27 │
│ P90 │ 151.18 │ 19 │ 2026-04-28 │
└──────────────┴─────────┴────────────────┴────────────┘
Effort Confidence Intervals:
┌──────────────┬────────────────┬───────────────┐
│ Percentile │ Person-Hours │ Person-Days │
├──────────────┼────────────────┼───────────────┤
│ P50 │ 125.74 │ 16 │
│ P80 │ 142.59 │ 18 │
│ P90 │ 151.18 │ 19 │
└──────────────┴────────────────┴───────────────┘
Sensitivity Analysis (top contributors):
┌──────────┬───────────────┐
│ Task │ Correlation │
├──────────┼───────────────┤
│ task_002 │ +0.8911 │
│ task_001 │ +0.4236 │
└──────────┴───────────────┘
Schedule Slack:
┌──────────┬─────────────────┬──────────┐
│ Task │ Slack (hours) │ Status │
├──────────┼─────────────────┼──────────┤
│ task_001 │ 0.00 │ Critical │
│ task_002 │ 0.00 │ Critical │
└──────────┴─────────────────┴──────────┘
Most Frequent Critical Paths:
1. task_001 -> task_002 (10000/10000, 100.0%)
Staffing (based on mean effort): 1 people recommended (mixed team), 19 working days
Total effort: 127 person-hours (16 person-days) | Parallelism ratio: 1.0
No export formats specified. Use -f to export results to files.
5. Generate HTML report and open it
By default, no export formats are specified, so the results are only printed to the terminal.
To get the full report with all details, use the -f flag.
The supported formats are json, csv, and html. The HTML report is the most user-friendly for a first look, as it includes all the details and visualizations.
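For example, to generate the HTML report from the same reproducible run:

```shell
mcprojsim simulate quickstart_project.yaml --seed 42 -f html
```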
Opening the generated HTML report (Website Refresh_results.html) will give you a detailed view of the results, including:
- project summary
- confidence intervals
- the full distribution of outcomes
- sensitivity analysis showing which tasks contribute most to uncertainty
- schedule slack information
- critical path analysis
- risk impact analysis
- statistical distribution metrics such as skewness and kurtosis
- and more!
On macOS you can open the HTML report using the following command:
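The report file is named after the project, so quote the space in the filename:

```shell
open "Website Refresh_results.html"
```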
The first part of the generated HTML report is shown below to give you a preview of what to expect:

(screenshot of the generated HTML report)
6. Useful next commands
Run again with more iterations:
Use a custom configuration file:
Suppress progress output:
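The exact option names below are assumptions (check mcprojsim simulate --help for your release); plausible invocations for the three commands above:

```shell
# run again with more iterations (flag name assumed)
mcprojsim simulate quickstart_project.yaml --seed 42 --iterations 50000

# use a custom configuration file (flag name assumed)
mcprojsim simulate quickstart_project.yaml --config my_config.yaml

# suppress progress output (flag name assumed)
mcprojsim simulate quickstart_project.yaml --quiet
```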
What the main results mean
- P50: about a 50% chance of finishing within this many elapsed hours
- P80: a more conservative planning target
- P90: a high-confidence planning target
The simulator reports all results in hours (the canonical internal unit). It also shows working days (hours ÷ hours_per_day, rounded up) and projected delivery dates (skipping weekends from the project's start_date).
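As a sanity check on the numbers above, the working-days figure is simply the hours divided by hours_per_day, rounded up:

```python
import math

p50_hours = 125.74   # P50 calendar time from the simulation output above
hours_per_day = 8.0  # "Hours per Day" from the project overview table

working_days = math.ceil(p50_hours / hours_per_day)
print(working_days)  # 16, matching the confidence-interval table
```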
A common practical pattern is:
- use P50 for internal discussion
- use P80 or P90 for commitments where lateness matters
If you want to go further
After this first run, move to the fuller User Guide path below. This Quick Start stays intentionally short and tactical; the chapters below are the long-form, maintained references:
- Getting Started — a fuller walkthrough
- Introduction — Monte Carlo concepts
- Your First Project — build richer project files step by step
- Project Files — project file reference
- Configuration — uncertainty factors and config
- Examples — example projects
Need a different installation path?
This guide intentionally focuses on the fastest end-user path for a first successful run.
If pipx is not the right fit, see:
- Getting Started for basic install and first-run material
If you are a developer, see Development for instructions on setting up a local development environment, running tests, and contributing to the project.