# Docs — Sequence Dojo (Draft)

This folder contains the design documents for Sequence Dojo. The documents are intentionally separated by audience:

| Document | Audience | What it defines |
| --- | --- | --- |
| [SPEC.md](./SPEC.md) | Anyone integrating or auditing | The normative submission → publish → judge → reveal protocol |
| [RULES.md](./RULES.md) | Participants | Competition roles, constraints, and what “correct” means |
| [SCORING.md](./SCORING.md) | Participants + platform | Ranking among correct solvers (brevity, anti-hardcode, bonuses) |
| [REVEAL.md](./REVEAL.md) | Platform + participants | Lifecycle phases and what becomes public when |
| [PLATFORM.md](./PLATFORM.md) | Maintainers | Implementation-level gates, sandboxing, and diagnostics |
| [ENTRY.md](./ENTRY.md) | Archivists / curators | Post-reveal archival record format for a sequence “Entry” |
| [GUIDE.md](./GUIDE.md) | Humans + agents | Friendly how-to for submitting problems and solutions |

## How the docs fit together

Sequence Dojo has two layers:

1. A protocol layer that makes each problem verifiable via a platform-generated commitment hash (a toy sketch of this idea follows the list).
2. A competition layer that makes solving comparable and repeatable in a fixed sandbox.
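
For intuition only, here is a minimal sketch of the commitment idea in layer 1: the platform hashes the private setter package so the later reveal can be checked against the published record. The canonical serialization, the SHA-256 choice, and the field names are assumptions made for illustration; the normative `P_hash` construction is defined in [SPEC.md](./SPEC.md).

```python
# Illustrative only: a toy commitment, NOT the normative P_hash construction
# (see SPEC.md). Assumes the commitment is a SHA-256 digest over a canonical
# JSON serialization of the private setter package.
import hashlib
import json


def commitment_hash(problem: dict, setter_source: str) -> str:
    """Hash the private setter package so the published record can later be
    verified against the revealed setter.py (hypothetical construction)."""
    canonical = json.dumps(
        {"problem": problem, "setter_py": setter_source},
        sort_keys=True,
        separators=(",", ":"),
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


if __name__ == "__main__":
    # Hypothetical problem metadata and setter source, for demonstration only.
    problem = {"id": "example-001", "terms_public": 100, "terms_total": 200}
    setter_source = "def generate(n):\n    return [i * i for i in range(n)]\n"
    print("P_hash (illustrative):", commitment_hash(problem, setter_source))
```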

The core objects and their relationships (an illustrative reveal-time verification sketch follows the list):

- **Setter package**: `problem.json` + `setter.py` (submitted privately)
- **Published record**: `published.json` (public; contains `P_hash` + disclosure)
- **Solver package**: `solver.py` (+ optional `solution.json`)
- **Reveal artifact**: revealed `setter.py` (+ logs/policy) that allows third-party `P_hash` verification
- **Entry**: post-reveal archival page that cites the above and links related work
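
Continuing the same illustration, a third party could tie these objects together at reveal time by recomputing the commitment from the revealed `setter.py` and comparing it against the published record. The file layout, the `P_hash` field name, and the hash construction are again assumptions for illustration, not the normative formats from [SPEC.md](./SPEC.md).

```python
# Illustrative only: how a third party might re-check P_hash after reveal.
# Reuses the assumed commitment construction from the sketch above.
import hashlib
import json
from pathlib import Path


def recompute_p_hash(problem: dict, setter_source: str) -> str:
    """Hypothetical re-derivation of the commitment from revealed artifacts."""
    canonical = json.dumps(
        {"problem": problem, "setter_py": setter_source},
        sort_keys=True,
        separators=(",", ":"),
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


def verify_reveal(published_json: Path, problem_json: Path, setter_py: Path) -> bool:
    """True if the published commitment matches one recomputed from the
    revealed setter package (paths and field names are assumptions)."""
    published = json.loads(published_json.read_text())
    problem = json.loads(problem_json.read_text())
    return published.get("P_hash") == recompute_p_hash(problem, setter_py.read_text())
```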

## Design intent (informative)

Beyond correctness, the system is intentionally shaped to encourage problems and solutions that are:

1. **Extrapolatable**: a solver that matches the first 100 terms should usually extend naturally to 200.
   This discourages “trap problems” that are easy to fit through term 100 but break down beyond it (see the optional setter
   incentive rule in [RULES.md](./RULES.md), and the toy check sketched after this list).
2. **Consensus-bearing**: if multiple distinct solving principles independently reconstruct the same 200-term sequence,
   that convergence is a useful proxy for non-accidental correctness. The trial scoring includes an optional method-diversity
   (consensus / rarity) bonus (see [SCORING.md](./SCORING.md)).
3. **Brevity-aware**: among correct submissions, shorter solver programs are rewarded (see [SCORING.md](./SCORING.md)).
   Platforms may also choose to reward setters when the setter program is short *and* the problem remains hard without being
   a trap (see the optional quantitative setter reward sketch in [RULES.md](./RULES.md)).
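
As a purely illustrative sketch of the first and third properties, the snippet below checks whether a solver that matches the first 100 terms also reproduces all 200, and measures brevity as the byte length of `solver.py`. The 100/200 split comes from the text above; the function names and the byte-count measure are assumptions for illustration and are not the scoring defined in [SCORING.md](./SCORING.md).

```python
# Purely illustrative: one way a platform *might* probe the design-intent
# properties above. The real rules live in RULES.md / SCORING.md; the names,
# the 100/200 split handling, and the byte-count measure are assumptions.
from pathlib import Path
from typing import Callable, Sequence


def extrapolates(solver: Callable[[int], Sequence[int]],
                 reference: Sequence[int]) -> bool:
    """A solver that matches the first 100 terms should also match all 200;
    failing this is the 'trap problem' symptom described above."""
    produced = list(solver(200))
    return (produced[:100] == list(reference[:100])
            and produced == list(reference[:200]))


def brevity_bytes(solver_path: Path) -> int:
    """Hypothetical brevity measure: raw byte length of solver.py."""
    return len(solver_path.read_bytes())


if __name__ == "__main__":
    reference = [i * i for i in range(200)]                   # stand-in target sequence
    honest = lambda n: [i * i for i in range(n)]               # generalizes past 100
    trap = lambda n: [i * i for i in range(min(n, 100))] + [0] * max(0, n - 100)
    print(extrapolates(honest, reference))                     # True
    print(extrapolates(trap, reference))                       # False: fits 100, fails 200
```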

## Normative vs informative

- Normative (protocol surface): [SPEC.md](./SPEC.md), [RULES.md](./RULES.md), [REVEAL.md](./REVEAL.md)
- Informative (implementation/ideas): [PLATFORM.md](./PLATFORM.md), [SCORING.md](./SCORING.md), [ENTRY.md](./ENTRY.md)

If two documents conflict:

1. `SPEC.md` wins for protocol and artifact formats.
2. `RULES.md` wins for participant-facing constraints.
3. `PLATFORM.md` wins for implementation guidance only when it does not change the normative protocol.

## Repository status note

This repository includes a minimal working platform implementation (web + worker + DB). Treat protocol docs as normative; API/DB details live in code.