Reproducible artifact and evaluation package for the research paper on AI-assisted facilitation in high-stakes workshops.
Paper: [arXiv link - to be added after submission]
```
facilcopilot/         - Core Python modules
data/scenarios/       - Replay evaluation scenarios (D2 dataset)
data/examples/        - Example transcript schema
experiments/configs/  - Experiment configuration files
experiments/runs/     - Evaluation results (for verification)
scripts/              - Evaluation and analysis scripts
tests/                - Unit and integration tests
```
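The replay scenarios consume transcripts that follow the schema in `data/examples/`. As a rough illustration only, with field names that are our assumptions rather than the actual schema, a single transcript turn might look like:

```python
# Hypothetical transcript turn; the authoritative schema is in
# data/examples/ and may use different field names and structure.
example_turn = {
    "scenario_id": "d2-001",   # assumed key linking into data/scenarios/
    "speaker": "facilitator",  # assumed role label
    "t_start": 12.4,           # assumed offset in seconds from session start
    "text": "Let's hear one more perspective before we decide.",
}
```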
To install the package and reproduce the evaluation:

```bash
# Install dependencies
pip install -e .

# Run tests
make test

# Run evaluation pipelines
make eval_detection
make eval_replay

# Generate paper tables
make build_paper_tables
```

All experiments use fixed random seeds, and configs, hyperparameters, and scenario definitions are version-controlled. The paper documents all testable claims and their evidence requirements. The evaluation results in `experiments/runs/` can be reproduced by running the scripts in `scripts/` with the provided configuration files.
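As a minimal sketch of the kind of seed fixing the runs rely on, assuming the Python standard library and NumPy (the helper name below is ours, not from the codebase):

```python
import random

import numpy as np


def set_global_seed(seed: int) -> None:
    """Pin process-wide RNG state so a run is repeatable.

    Illustrative helper only; the actual seeding lives in the
    version-controlled configs and evaluation scripts.
    """
    random.seed(seed)
    np.random.seed(seed)


# Each experiment config would pin a concrete value like this.
set_global_seed(42)
```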
If you use this code or data, please cite our paper:
[Citation will be added after arXiv publication]