Read Results
Both `mf.forecast()` and `Experiment.run()` return a facade over saved artifacts. Use the facade for everyday analysis, and fall back to artifact paths when you need the exact files on disk.
Single run:

```python
import macroforecast as mf

result = mf.forecast(
    "fred_md",
    target="INDPRO",
    horizons=[1, 3],
    start="1980-01",
    end="2019-12",
)

forecasts = result.forecasts
metrics = result.metrics
comparison = result.comparison
manifest = result.manifest
```
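The idea behind these attributes is the facade pattern: the result object holds an artifact directory and reads saved files on demand. A minimal sketch of that pattern, not macroforecast's actual implementation (the `RunResult` class, artifact filenames, and manifest fields here are assumptions):

```python
import csv
import json
import tempfile
from pathlib import Path

class RunResult:
    """Minimal facade over a saved artifact directory (illustration only)."""

    def __init__(self, artifact_dir):
        self.artifact_dir = Path(artifact_dir)

    @property
    def forecasts(self):
        # Forecast rows come straight from the saved predictions.csv.
        with open(self.artifact_dir / "predictions.csv", newline="") as f:
            return list(csv.DictReader(f))

    @property
    def manifest(self):
        # Provenance dictionary written next to the predictions.
        return json.loads((self.artifact_dir / "manifest.json").read_text())

# Fake a saved run so the facade has artifacts to read.
tmp = Path(tempfile.mkdtemp())
(tmp / "predictions.csv").write_text("date,horizon,forecast\n2019-12,1,0.42\n")
(tmp / "manifest.json").write_text('{"recipe": "fred_md/INDPRO", "run_id": "r1"}')

result = RunResult(tmp)
print(result.forecasts[0]["forecast"])  # 0.42
print(result.manifest["run_id"])        # r1
```

Because the facade reads from disk rather than from in-memory state, any number it returns is automatically backed by a file you can inspect.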
Sweep:

```python
result = (
    mf.Experiment(
        dataset="fred_md",
        target="INDPRO",
        horizons=[1],
        start="1980-01",
        end="2019-12",
    )
    .compare_models(["ridge", "lasso"])
    .run()
)

ranking = result.ranking  # or result.mean(metric="mse")
forecasts = result.forecasts
variants = result.metrics
manifest = result.manifest
```
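Conceptually, `result.ranking` and `result.mean(metric="mse")` aggregate the per-variant metrics table: average the chosen metric per variant, then sort. A plain-Python sketch of that aggregation (the column names `variant_id`, `horizon`, and `mse` are assumptions, and the toy numbers are made up):

```python
# Toy metrics rows: one per variant and horizon, as in a sweep.
metrics = [
    {"variant_id": "ridge", "horizon": 1, "mse": 0.9},
    {"variant_id": "lasso", "horizon": 1, "mse": 1.1},
]

# Average the chosen metric per variant, then sort ascending (lower is better).
by_variant = {}
for row in metrics:
    by_variant.setdefault(row["variant_id"], []).append(row["mse"])

ranking = sorted(
    ((vid, sum(vals) / len(vals)) for vid, vals in by_variant.items()),
    key=lambda pair: pair[1],
)
print(ranking[0][0])  # ridge -- best variant under mean MSE
```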
Common attributes:
- `result.forecasts`: forecast rows from `predictions.csv`
- `result.predictions`: same table as `forecasts`
- `result.metrics`: one row per horizon, or per variant and horizon for sweeps
- `result.comparison`: compact comparison table
- `result.manifest`: provenance dictionary
- `result.artifact_dir`: comparison-cell artifact directory
- `result.output_root`: sweep artifact root
Single-run raw artifact access:

```python
result.metrics_json
result.comparison_json
result.file_path("predictions.csv")
result.read_json("manifest.json")
```
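`file_path` and `read_json` are best thought of as thin conveniences over the artifact directory. A plausible sketch of such helpers, assuming a Path-based layout (not the library's real code):

```python
import json
import tempfile
from pathlib import Path

class ArtifactAccess:
    """Illustrative raw-access helpers; not macroforecast's actual API surface."""

    def __init__(self, artifact_dir):
        self.artifact_dir = Path(artifact_dir)

    def file_path(self, name):
        # Resolve an artifact filename to its on-disk path.
        return self.artifact_dir / name

    def read_json(self, name):
        # Parse a JSON artifact such as manifest.json.
        return json.loads(self.file_path(name).read_text())

# Fake an artifact directory to exercise the helpers.
tmp = Path(tempfile.mkdtemp())
(tmp / "manifest.json").write_text('{"run_id": "r1"}')

acc = ArtifactAccess(tmp)
print(acc.file_path("predictions.csv").name)     # predictions.csv
print(acc.read_json("manifest.json")["run_id"])  # r1
```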
Sweep variant access:

```python
table = result.metrics
best = result.ranking  # or result.mean(metric="mse").iloc[0]
variant = result.variant(best["variant_id"])
variant.forecasts
variant.manifest
```
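`result.variant(...)` hands back a per-variant facade, which conceptually is a keyed lookup into the sweep's artifact root: each variant owns a subdirectory under `output_root`. A sketch of that layout, with all class names, directory structure, and manifest fields assumed for illustration:

```python
import json
import tempfile
from pathlib import Path

class VariantResult:
    """Per-variant facade over one cell of the sweep's artifact root."""

    def __init__(self, artifact_dir):
        self.artifact_dir = Path(artifact_dir)

    @property
    def manifest(self):
        return json.loads((self.artifact_dir / "manifest.json").read_text())

class SweepResult:
    """Sweep facade: variant() keys into per-variant subdirectories."""

    def __init__(self, output_root):
        self.output_root = Path(output_root)

    def variant(self, variant_id):
        return VariantResult(self.output_root / variant_id)

# Fake a sweep root containing one variant directory.
root = Path(tempfile.mkdtemp())
(root / "ridge").mkdir()
(root / "ridge" / "manifest.json").write_text('{"variant_id": "ridge"}')

sweep = SweepResult(root)
print(sweep.variant("ridge").manifest["variant_id"])  # ridge
```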
Design rule: every number shown to a user should be traceable to a recipe, run, and artifact.
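The design rule can be made concrete as a check: a number is traceable when the file it came from is a recorded artifact of a run that names both its run id and its recipe. A hedged sketch of such a check (every field name here is an assumption, not a documented manifest schema):

```python
# A manifest naming the recipe, the run, and the artifacts it produced.
manifest = {
    "recipe": "fred_md/INDPRO",
    "run_id": "r1",
    "artifacts": ["predictions.csv", "metrics.json"],
}

def traceable(source_file, manifest):
    # Traceable = source file is a recorded artifact of a run that
    # identifies both its run id and its recipe.
    return (
        source_file in manifest["artifacts"]
        and bool(manifest["run_id"])
        and bool(manifest["recipe"])
    )

print(traceable("predictions.csv", manifest))  # True
print(traceable("scratch.csv", manifest))      # False
```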