Upload your harness code. It must include a run.py entry point that implements:
def run(workspace: str) -> None:
"""
workspace contains:
task.yaml - Task config (metrics, baseline, evaluator command)
seed/ - Modifiable seed code (e.g., solve.py)
evaluator/ - Read-only evaluation scripts
data/ - Read-only input data
reference/ - Read-only reference algorithm source
Write results to:
results/metrics.json - Final best scores
results/best/ - Best evolved code snapshot
results/history.jsonl - Iteration log (optional)
"""