first working version

docs/angular-frontend-spec.md

# Angular Frontend Specification (Minimal)

## Purpose

Render exams from JSON, preserve progress with autosave, support resume, and submit to produce the bundled output.

## Routes

- `/` → list of available exams (published and not yet finished by the user)
- `/exam/:examId` → exam player (start or resume)
- `/done/:attemptId` → submission confirmation with a reference to the output file

## Data Loading

- On `/exam/:examId`, call `GET /api/exams/:examId` to load the exam
- Start/resume via `POST /api/exams/:examId/attempt` (idempotent: returns the existing in-progress attempt if any)

## State & Autosave

- Local reactive form/state per question
- Autosave every N seconds and on blur/change via `PUT /api/attempts/:attemptId`
- Show a debounced "Saved just now" status

## Timer & Submit

- Countdown from `durationMinutes`
- Auto-submit on expiry
- Manual submit → POST submit → redirect to `/done/:attemptId`

## Resume Behavior

- On load, hydrate answers from the attempt JSON
- Scroll to the last answered question; restore the timer from `startedAt` and `durationMinutes` (see the sketch below)
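
A minimal sketch of the timer restore, assuming UTC ISO-8601 timestamps with a trailing `Z` as in the attempt JSON; Python is used here as neutral pseudocode for the client-side computation:

```python
from datetime import datetime, timezone

def remaining_seconds(started_at: str, duration_minutes: int) -> int:
    """Seconds left on the countdown, clamped to zero once expired."""
    started = datetime.fromisoformat(started_at.replace("Z", "+00:00"))
    elapsed = (datetime.now(timezone.utc) - started).total_seconds()
    return max(0, duration_minutes * 60 - int(elapsed))

# e.g. remaining_seconds("2025-10-20T10:00:00Z", 60) shortly after start → ~3600
```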

## Edge Cases

- Version drift: if the exam JSON's `metadata.version` differs from the attempt's `examVersion`, show a non-blocking warning
- Connectivity loss: queue autosaves and replay them when back online
- Duplicate tabs: the server enforces a single active attempt; the UI warns

## Accessibility & UX (minimal)

- Keyboard-first navigation, ARIA roles
- Clear focus states and error messages
- Save status and timer changes announced politely

docs/deploy-minimal.md

# Minimal Deployment (Nginx + Django + Angular)

## Overview

Serve Angular as static assets via Nginx and proxy `/api/` to Django. Use file-system storage on a single host.

## Build Outputs

- Angular build → `/var/www/app` (or similar)
- Django app → `/srv/api` (Gunicorn/Uvicorn on 127.0.0.1:8000)
- Data folders: `/srv/data/input`, `/srv/data/attempts`, `/srv/data/output`, `/srv/data/progress`, `/srv/data/manifest.json`

## Nginx (concept)

- `/` → Angular `index.html` + assets
- `/api/` → proxy to Django (127.0.0.1:8000)
- Cache static assets; no caching for `/api/`

## Django Configuration

- Env vars for folder paths (`INPUT_DIR`, `ATTEMPTS_DIR`, `OUTPUT_DIR`, `PROGRESS_DIR`, `MANIFEST_FILE`); see the sketch below
- CORS/CSRF: allow the Angular origin
- Logging to files under `/var/log/app`
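
A sketch of how `settings.py` might resolve these paths; the `DATA_ROOT` fallback and the `/srv/data` defaults are illustrative assumptions, not fixed names:

```python
# settings.py (excerpt): resolve data folders from the environment
import os
from pathlib import Path

DATA_ROOT = Path(os.environ.get("DATA_ROOT", "/srv/data"))  # hypothetical fallback

INPUT_DIR = Path(os.environ.get("INPUT_DIR", DATA_ROOT / "input"))
ATTEMPTS_DIR = Path(os.environ.get("ATTEMPTS_DIR", DATA_ROOT / "attempts"))
OUTPUT_DIR = Path(os.environ.get("OUTPUT_DIR", DATA_ROOT / "output"))
PROGRESS_DIR = Path(os.environ.get("PROGRESS_DIR", DATA_ROOT / "progress"))
MANIFEST_FILE = Path(os.environ.get("MANIFEST_FILE", DATA_ROOT / "manifest.json"))

# Create the writable folders at startup; input/ is provisioned read-only by deploy
for d in (ATTEMPTS_DIR, OUTPUT_DIR, PROGRESS_DIR):
    d.mkdir(parents=True, exist_ok=True)
```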

## Health

- `/api/health` endpoint returns `{ "ok": true }`
- Nginx upstream failover is not required (single host)

## Backups

- Periodic tar of `/srv/data` (retain N days)

## TLS

- Terminate HTTPS at Nginx (e.g., with Let's Encrypt)

## Rollout Steps (high level)

1. Build Angular → copy to the web root
2. Run the Django server behind Nginx
3. Create data folders and set permissions
4. Verify `/api/exams` and the basic start/submit flows

docs/django-backend-spec.md

# Django Backend Specification (Minimal)

## Purpose

Provide a tiny REST API that reads exams from files, manages attempts, autosaves progress, and produces an output JSON bundle.

## Endpoints

- GET `/api/exams` → list published exams (from `manifest.json` and `input/`)
- GET `/api/exams/{examId}` → return the exam JSON (from `input/`)
- POST `/api/exams/{examId}/attempt` → start or resume an attempt (see the sketch after this list); returns the attempt JSON
- PUT `/api/attempts/{attemptId}` → autosave answers and timestamps; returns the updated attempt
- POST `/api/attempts/{attemptId}/submit` → finalize, write the output bundle, mark finished; returns the output path
- GET `/api/progress/me` → return the progress snapshot for the current user
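
A sketch of the idempotent start-or-resume behavior; `load_attempts`, `new_attempt_id`, and `save_attempt` are hypothetical helpers, not part of the API:

```python
# views.py (sketch): idempotent start-or-resume for POST /api/exams/{examId}/attempt
from datetime import datetime, timezone
from django.http import JsonResponse

def start_or_resume_attempt(request, exam_id):
    user_id = request.session["userId"]
    attempts = load_attempts(user_id, exam_id)  # hypothetical: reads attempts/{userId}/{examId}/*.json
    in_progress = [a for a in attempts if a["status"] == "in_progress"]
    if in_progress:
        # Idempotent: hand back the most recently updated active attempt
        return JsonResponse(max(in_progress, key=lambda a: a["updatedAt"]))
    now = datetime.now(timezone.utc).isoformat()
    attempt = {
        "attemptId": new_attempt_id(user_id, exam_id),  # hypothetical id helper
        "userId": user_id, "examId": exam_id,
        "status": "in_progress", "startedAt": now, "updatedAt": now,
        "answers": [],
    }
    save_attempt(attempt)  # hypothetical: temp file + atomic rename, per Rules below
    return JsonResponse(attempt)
```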

## Storage Conventions

- `input/{examId}.json` — canonical exam file
- `attempts/{userId}/{examId}/{attemptId}.json` — active/resumed attempt
- `output/{examId}_{attemptId}.json` — bundled `{ exam, attempt }`
- `progress/{userId}.json` — progress summary
- `manifest.json` — published flags and per-user finished/active sets

## Attempt JSON shape

```
{
  attemptId, userId, examId, status, startedAt, updatedAt, submittedAt?,
  answers: [ { questionId, response, timeSec? } ]
}
```

## Rules

- One active attempt per exam per user (unless configured otherwise)
- Use a temp file + atomic rename for all writes (see the sketch below)
- Validate that the exam exists and is published before starting
- Resume uses the most recent attempt by `updatedAt`
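
A minimal sketch of the temp-file-plus-atomic-rename rule; `os.replace` is atomic when the temp file and destination live on the same filesystem, which is why the temp file is created in the target directory:

```python
import json, os, tempfile

def atomic_write_json(path: str, data: dict) -> None:
    """Write JSON to a temp file in the target directory, then rename into place."""
    directory = os.path.dirname(path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(data, f, indent=2)
            f.flush()
            os.fsync(f.fileno())  # ensure bytes hit disk before the rename
        os.replace(tmp_path, path)  # atomic on the same filesystem
    except BaseException:
        os.unlink(tmp_path)  # never leave a half-written temp file behind
        raise
```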

## Autosave

- Accept partial answers; update progress percent = answered / total (see the sketch below)
- Return the server `updatedAt` for client reconciliation
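
A sketch of the autosave merge; `load_attempt`, `attempt_path`, and `update_progress` are hypothetical helpers, and `atomic_write_json` is the sketch from the Rules section:

```python
# views.py (sketch): PUT /api/attempts/{attemptId}
import json
from datetime import datetime, timezone
from django.http import JsonResponse

def autosave_attempt(request, attempt_id):
    attempt = load_attempt(attempt_id)              # hypothetical loader
    incoming = json.loads(request.body)["answers"]  # partial answer lists are accepted
    merged = {a["questionId"]: a for a in attempt["answers"]}
    for answer in incoming:
        merged[answer["questionId"]] = answer       # last write wins per question
    attempt["answers"] = list(merged.values())
    attempt["updatedAt"] = datetime.now(timezone.utc).isoformat()
    atomic_write_json(attempt_path(attempt), attempt)  # hypothetical path helper
    update_progress(attempt)  # progress percent = answered / total questions
    return JsonResponse(attempt)  # client reconciles on the returned updatedAt
```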

## Submit

- Change status to `submitted`, then `finished`
- Write the bundle `{ exam, attempt }` to `output/` (see the sketch below)
- Update the manifest and progress
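
A sketch of the finalize step; `OUTPUT_DIR`, the loaders, and `update_manifest_and_progress` are illustrative names, and `atomic_write_json` is the sketch above:

```python
import os
from datetime import datetime, timezone

def submit_attempt(attempt_id: str) -> str:
    attempt = load_attempt(attempt_id)   # hypothetical loader
    exam = load_exam(attempt["examId"])  # hypothetical: reads input/{examId}.json
    attempt["status"] = "submitted"
    attempt["submittedAt"] = datetime.now(timezone.utc).isoformat()
    out_path = os.path.join(OUTPUT_DIR, f"{attempt['examId']}_{attempt_id}.json")
    atomic_write_json(out_path, {"exam": exam, "attempt": attempt})
    attempt["status"] = "finished"       # finished once the output bundle exists
    atomic_write_json(attempt_path(attempt), attempt)
    update_manifest_and_progress(attempt)  # hypothetical: mark finished for the user
    return out_path
```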

## Security (minimal)

- Cookie-based session with `userId`
- CSRF for state-changing requests
- CORS allows the Angular origin

## Errors (examples)

- 404 `EXAM_NOT_FOUND`
- 409 `ATTEMPT_EXISTS`
- 400 `INVALID_PAYLOAD`
- 423 `EXAM_NOT_PUBLISHED`

docs/exam-format.md

# Exam Format Specification (Minimal)

## Purpose

A simple, portable exam format. The system reads an exam JSON, renders it online, collects answers, and returns a JSON that bundles the original exam plus the answers (and optionally a basic result summary).

## Top-level Exam Structure

- **examId**: string (unique)
- **subject**: string
- **title**: string
- **difficulty**: one of `beginner | intermediate | advanced`
- **durationMinutes**: integer ≥ 1
- **sections**: array of sections, each containing questions
- **metadata**: optional object (version, createdAt, etc.)

## Question Types (required support)

- Single Choice (one correct answer)
- Multiple Choices (multiple correct answers)
- True/False
- Essay
- Simple Coding
- Coding Exercise

## "I Don't Know" Option

For `single_choice`, `multiple_choices`, and `true_false` questions, an "I don't know" option is automatically available. This allows honest assessment without guessing.

## Section Structure

- **id**: string
- **title**: string
- **questions**: array of questions (of any supported type)

## Common Question Fields

- **id**: string
- **type**: `single_choice | multiple_choices | true_false | essay | code_simple | code_exercise`
- **prompt**: string (supports simple Markdown)
- **points**: integer ≥ 0
- **allowIDK**: boolean (optional; defaults to `true` for `single_choice`, `multiple_choices`, and `true_false`)

## Type-specific Fields

- **single_choice** (one correct answer)
  - `choices`: array of `{ key: string, text: string }`
  - `answer`: string (the correct `key`)
  - `allowIDK`: boolean (default `true`); adds an "I don't know" option

- **multiple_choices** (multiple correct answers)
  - `choices`: array of `{ key: string, text: string }`
  - `answer`: array of strings (all correct `key`s, e.g., `["A", "C"]`)
  - `allowIDK`: boolean (default `true`); adds an "I don't know" option
  - `partialCredit`: boolean (default `true`); awards partial points for a partially correct selection

- **true_false**
  - `answer`: boolean
  - `allowIDK`: boolean (default `true`); adds an "I don't know" option (scored as wrong)

- **essay**
  - `rubric`: `{ criteria: [{ name: string, weight: number }], maxPoints: integer }`
  - Notes: `answer` is omitted; scored manually or later

- **code_simple**
  - `language`: `python | typescript | javascript`
  - `tests`: array of `{ input: string, expected: string, visibility?: public | hidden }`

- **code_exercise**
  - `language`: `python | typescript | javascript`
  - `tests`: array of `{ input: string, expected: string, visibility?: public | hidden, weight?: number }`
  - `rubric`: `{ criteria: [{ name: string, weight: number }], maxPoints: integer }`
  - `constraints?`: string (optional)
  - `starterCode?`: string (optional)
## Minimal Valid Exam JSON (example)

```json
{
  "examId": "sample-exam-v1",
  "subject": "python",
  "title": "Sample Exam",
  "difficulty": "beginner",
  "durationMinutes": 60,
  "sections": [
    {
      "id": "sec-1",
      "title": "Single Choice",
      "questions": [
        {
          "id": "q1",
          "type": "single_choice",
          "prompt": "Which is a valid list literal?",
          "choices": [
            { "key": "A", "text": "(1, 2, 3)" },
            { "key": "B", "text": "{1, 2, 3}" },
            { "key": "C", "text": "[1, 2, 3]" }
          ],
          "answer": "C",
          "points": 2
        }
      ]
    },
    {
      "id": "sec-2",
      "title": "True / False",
      "questions": [
        { "id": "q2", "type": "true_false", "prompt": "Tuples are immutable.", "answer": true, "points": 2 }
      ]
    },
    {
      "id": "sec-3",
      "title": "Essay",
      "questions": [
        {
          "id": "q3",
          "type": "essay",
          "prompt": "Explain decorators and a common use case.",
          "rubric": { "criteria": [{ "name": "Correctness", "weight": 0.6 }, { "name": "Clarity", "weight": 0.4 }], "maxPoints": 8 },
          "points": 8
        }
      ]
    },
    {
      "id": "sec-4",
      "title": "Simple Coding",
      "questions": [
        {
          "id": "q4",
          "type": "code_simple",
          "language": "python",
          "prompt": "Implement squares(n) returning list of squares 0..n.",
          "tests": [ { "input": "squares(3)", "expected": "[0, 1, 4, 9]", "visibility": "hidden" } ],
          "points": 10
        }
      ]
    },
    {
      "id": "sec-5",
      "title": "Coding Exercise",
      "questions": [
        {
          "id": "q5",
          "type": "code_exercise",
          "language": "python",
          "prompt": "Implement paginate(items, page, per_page). Return items, page, per_page, total, total_pages.",
          "constraints": "O(n) acceptable; validate inputs (page>=1, per_page>=1).",
          "tests": [
            { "input": "paginate([1,2,3,4,5], 2, 2)", "expected": "{items:[3,4],page:2,per_page:2,total:5,total_pages:3}", "visibility": "hidden", "weight": 2 },
            { "input": "paginate([], 1, 10)", "expected": "{items:[],page:1,per_page:10,total:0,total_pages:0}", "visibility": "hidden" }
          ],
          "rubric": { "criteria": [{ "name": "Correctness", "weight": 0.6 }, { "name": "Structure", "weight": 0.2 }, { "name": "EdgeCases", "weight": 0.2 }], "maxPoints": 20 },
          "points": 20
        }
      ]
    }
  ]
}
```

## Output Shape (what the system returns)

- Echoes the input exam JSON as `exam`
- Captured answers as `attempt`

```json
{
  "exam": { "...": "(same as input)" },
  "attempt": {
    "attemptId": "attempt-001",
    "startedAt": "2025-10-20T10:00:00Z",
    "submittedAt": "2025-10-20T10:45:00Z",
    "answers": [
      { "questionId": "q1", "response": "C", "timeSec": 25 },
      { "questionId": "q2", "response": true, "timeSec": 10 },
      { "questionId": "q3", "response": "Decorators wrap functions to add behavior...", "timeSec": 180 },
      { "questionId": "q4", "response": { "code": "def squares(n): ..." }, "timeSec": 420 },
      { "questionId": "q5", "response": { "code": "def paginate(items, page, per_page): ..." }, "timeSec": 900 }
    ]
  }
}
```

## Validation Checklist

- Required top-level fields present
- Each section and question has a unique `id`
- `points` ≥ 0
- Type-specific fields present for the question's type (see the validator sketch below)
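
A minimal validator sketch for this checklist; the per-type required-field map is an assumption read off the type-specific fields section, not a normative schema:

```python
# Hypothetical checklist validator (not a full JSON-Schema check)
REQUIRED_TOP = {"examId", "subject", "title", "difficulty", "durationMinutes", "sections"}
TYPE_FIELDS = {
    "single_choice": {"choices", "answer"},
    "multiple_choices": {"choices", "answer"},
    "true_false": {"answer"},
    "essay": {"rubric"},
    "code_simple": {"language", "tests"},
    "code_exercise": {"language", "tests", "rubric"},
}

def validate_exam(exam: dict) -> list[str]:
    errors = [f"missing top-level field: {f}" for f in REQUIRED_TOP - exam.keys()]
    seen_ids: set[str] = set()
    for section in exam.get("sections", []):
        for item in [section, *section.get("questions", [])]:
            if item["id"] in seen_ids:
                errors.append(f"duplicate id: {item['id']}")
            seen_ids.add(item["id"])
        for q in section.get("questions", []):
            if q.get("points", 0) < 0:
                errors.append(f"{q['id']}: points must be >= 0")
            missing = TYPE_FIELDS.get(q["type"], set()) - q.keys()
            if missing:
                errors.append(f"{q['id']}: missing {sorted(missing)}")
    return errors  # an empty list means the exam passes the checklist
```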

## Versioning

- Use semantic versioning in `metadata.version` for exam files
- New optional fields are allowed without breaking existing behavior

docs/json-io-and-state.md

# JSON I/O, State Machine, and Folders (Minimal)

## Folders

- `input/` — source exam JSON files
- `attempts/{userId}/{examId}/{attemptId}.json` — current attempt files
- `output/{examId}_{attemptId}.json` — final bundle `{ exam, attempt }`
- `progress/{userId}.json` — per-user snapshot
- `manifest.json` — registry of published exams and user completion

## Input Exam JSON

- Must conform to `docs/exam-format.md`

## Attempt JSON

```
{
  "attemptId": "<id>",
  "userId": "<user>",
  "examId": "<exam>",
  "status": "in_progress|submitted|finished",
  "startedAt": "ISO-8601",
  "updatedAt": "ISO-8601",
  "submittedAt": "ISO-8601?",
  "answers": [ { "questionId": "q1", "response": <any>, "timeSec": 25 } ]
}
```

## Output JSON

```
{
  "exam": { /* original exam JSON */ },
  "attempt": { /* final attempt JSON */ }
}
```

## State Machine

- `draft` → `published` → `in_progress` → `submitted` → `finished`

## Publish & Finish

- Publish: `manifest.json` marks `{ examId, published: true }`
- Finish for a user: `attempt.status` is `finished` AND the output bundle exists AND `manifest.users[userId].finished` includes the `examId` (see the sketch below)
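
A sketch of that three-part check; the manifest shape (`users[userId].finished`) follows the bullet above, the `output/` naming follows the Folders section, and the relative paths are illustrative:

```python
import os

def is_finished(user_id: str, exam_id: str, attempt: dict, manifest: dict) -> bool:
    """All three conditions from the bullet above must hold."""
    bundle = os.path.join("output", f"{exam_id}_{attempt['attemptId']}.json")
    return (
        attempt["status"] == "finished"
        and os.path.exists(bundle)
        and exam_id in manifest.get("users", {}).get(user_id, {}).get("finished", [])
    )
```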

## Autosave & Integrity

- Write to a temp file, then atomic rename, for attempts/progress/output/manifest
- The server returns `updatedAt` for reconciliation
- One active attempt per exam per user (simple lock)

## Naming

- `attemptId = <userId>-<examId>-<timestamp>`
- Output file name: `<examId>_<attemptId>.json`

## Versioning

- Store `examVersion` in the attempt; warn on resume if it drifts

## Permissions (minimal)

- Local file permissions: read-only for `input/`; writable for the other folders

docs/multiple-choices-and-idk.md

# Multiple Choices and "I Don't Know" Option

## New Question Type: multiple_choices

### Purpose

Allows questions with multiple correct answers (e.g., "Select all that apply").

### Format

```json
{
  "id": "q1",
  "type": "multiple_choices",
  "prompt": "Which of the following are mutable data types in Python? (Select all that apply)",
  "choices": [
    { "key": "A", "text": "list" },
    { "key": "B", "text": "tuple" },
    { "key": "C", "text": "dict" },
    { "key": "D", "text": "str" },
    { "key": "E", "text": "set" }
  ],
  "answer": ["A", "C", "E"],
  "partialCredit": true,
  "allowIDK": true,
  "points": 10
}
```

### Scoring

**Partial Credit (default):**
```
Score = (Correct Selections / Total Correct Answers) × Points
```

Example:
- Correct answers: A, C, E (3 total)
- Student selects: A, C (2 correct)
- Score = (2/3) × 10 = 6.67 points

**All or Nothing:**
```json
"partialCredit": false
```
- Full points only if exactly the correct set is selected
- Any mistake = 0 points

**With Wrong Selections:**
- Penalty for selecting incorrect options
- Score = max(0, (Correct - Wrong) / Total Correct × Points)

Example:
- Student selects: A, C, D (2 correct, 1 wrong)
- Score = (2 - 1) / 3 × 10 = 3.33 points
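
A combined scoring sketch covering partial credit, all-or-nothing, the wrong-selection penalty, and the "I don't know" rule described later in this document; a reading of the rules above, not a normative implementation:

```python
def score_multiple_choices(selected: list[str], correct: list[str],
                           points: float, partial_credit: bool = True) -> float:
    """Score a multiple_choices response per the rules above."""
    if selected == ["IDK"]:
        return 0.0                      # honest "I don't know": zero points, no penalty
    right = len(set(selected) & set(correct))
    wrong = len(set(selected) - set(correct))
    if not partial_credit:
        return points if (right == len(correct) and wrong == 0) else 0.0
    # Partial credit with a penalty for wrong selections, floored at zero
    return max(0.0, (right - wrong) / len(correct) * points)

# Examples from above: score_multiple_choices(["A", "C"], ["A", "C", "E"], 10) → 6.67
#                      score_multiple_choices(["A", "C", "D"], ["A", "C", "E"], 10) → 3.33
```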

## "I Don't Know" Option

### Purpose

- Encourages honest assessment
- Prevents random guessing
- Better measures actual knowledge
- Can be scored differently (0 points vs. a penalty)

### Availability

**Automatically added to:**
- `single_choice` questions
- `multiple_choices` questions
- `true_false` questions

**Not added to:**
- `essay` questions (can be left blank)
- `code_simple` / `code_exercise` (can be left blank)

### Format

**Enable (default):**
```json
{
  "type": "single_choice",
  "allowIDK": true,
  "prompt": "What is...?",
  ...
}
```

**Disable:**
```json
{
  "type": "single_choice",
  "allowIDK": false,
  "prompt": "What is...?",
  ...
}
```

### UI Behavior

**Single Choice:**
```
○ A. Option A
○ B. Option B
○ C. Option C
○ ? I don't know
```

**Multiple Choices:**
```
☐ A. Option A
☐ B. Option B
☐ C. Option C
☐ ? I don't know (if checked, clears other selections)
```

**True/False:**
```
○ True
○ False
○ I don't know
```

### Scoring Rules

**"I don't know" selected:**
- Treated as incorrect (0 points)
- NOT penalized (better than guessing wrong)
- An honest indicator of knowledge gaps

**Comparison:**
- Wrong guess: 0 points + false confidence
- "I don't know": 0 points + honest gap identification
## Complete Examples

### Example 1: Single Choice with IDK

```json
{
  "id": "q1",
  "type": "single_choice",
  "prompt": "Which decorator is used for static methods?",
  "choices": [
    { "key": "A", "text": "@staticmethod" },
    { "key": "B", "text": "@classmethod" },
    { "key": "C", "text": "@property" }
  ],
  "answer": "A",
  "allowIDK": true,
  "points": 5
}
```

**Possible responses:**
- "A" → 5 points (correct)
- "B" or "C" → 0 points (incorrect)
- "IDK" → 0 points (honest)

### Example 2: Multiple Choices with Partial Credit

```json
{
  "id": "q2",
  "type": "multiple_choices",
  "prompt": "Which are valid Python keywords? (Select all that apply)",
  "choices": [
    { "key": "A", "text": "class" },
    { "key": "B", "text": "function" },
    { "key": "C", "text": "import" },
    { "key": "D", "text": "include" },
    { "key": "E", "text": "def" }
  ],
  "answer": ["A", "C", "E"],
  "partialCredit": true,
  "allowIDK": true,
  "points": 9
}
```

**Possible responses:**
- ["A", "C", "E"] → 9 points (all correct)
- ["A", "C"] → 6 points (2/3 correct with partial credit)
- ["A", "C", "D"] → 3 points (2 correct, 1 wrong)
- ["IDK"] → 0 points (honest)

### Example 3: True/False with IDK

```json
{
  "id": "q3",
  "type": "true_false",
  "prompt": "Python supports tail call optimization.",
  "answer": false,
  "allowIDK": true,
  "points": 3
}
```

**Possible responses:**
- false → 3 points (correct)
- true → 0 points (incorrect)
- "IDK" → 0 points (honest)

### Example 4: Disable IDK (Force an Answer)

```json
{
  "id": "q4",
  "type": "single_choice",
  "prompt": "What is 2 + 2?",
  "choices": [
    { "key": "A", "text": "3" },
    { "key": "B", "text": "4" },
    { "key": "C", "text": "5" }
  ],
  "answer": "B",
  "allowIDK": false,
  "points": 2
}
```

**The UI shows only A, B, C (no IDK option).**

## Answer Format

### Response for single_choice
```json
{
  "questionId": "q1",
  "response": "A"
}
// or
{
  "questionId": "q1",
  "response": "IDK"
}
```

### Response for multiple_choices
```json
{
  "questionId": "q2",
  "response": ["A", "C", "E"]
}
// or
{
  "questionId": "q2",
  "response": ["IDK"]
}
```

### Response for true_false
```json
{
  "questionId": "q3",
  "response": true
}
// or
{
  "questionId": "q3",
  "response": "IDK"
}
```

## Benefits

### For Learners:
- Honest self-assessment
- Identify knowledge gaps
- Avoid false confidence from lucky guesses
- Better learning-outcome tracking

### For Instructors:
- Clearer picture of student knowledge
- Identify commonly unknown topics
- Better curriculum adjustment
- More accurate assessment

## Implementation Notes

- The IDK option is rendered automatically in the UI when `allowIDK` is true
- Selecting IDK clears any other selections (for `multiple_choices`)
- Scoring treats IDK as incorrect (0 points)
- Analytics can track the IDK rate per question
- A high IDK rate means the topic needs more coverage

docs/stack-architecture.md

# Minimal Architecture (Django + Angular)

## Goals

- Read exam JSON from `input/`, render it online, autosave progress, submit, and write the bundled JSON to `output/`.
- Keep logic simple, deterministic, and file-backed.

## Components

- Frontend: Angular SPA (exam UI, autosave, timer, resume)
- Backend: Django REST API (file I/O, attempt state, publish/finish checks)
- Storage: file-system folders (`input/`, `attempts/`, `output/`, `progress/`, `manifest.json`)
- Web: Nginx (serves Angular, proxies `/api/` to Django)

## Data Flow

1. Angular requests `/api/exams` → Django lists published exams by reading `manifest.json` + `input/`.
2. Start/resume attempt: POST `/api/exams/{examId}/attempt` → Django reads/writes `attempts/` and returns the current attempt JSON.
3. Autosave: PUT `/api/attempts/{attemptId}` → the backend persists answers and updates `progress/`.
4. Submit: POST `/api/attempts/{attemptId}/submit` → the backend writes `output/{examId}_{attemptId}.json` with `{ exam, attempt }` and marks the attempt finished.

## Minimal State Machine

- `draft` → `published` → `in_progress` → `submitted` → `finished`

## Files & Folders

- `input/` — source exam JSON files
- `attempts/{userId}/{examId}/{attemptId}.json` — current attempt
- `output/{examId}_{attemptId}.json` — final bundle `{ exam, attempt }`
- `progress/{userId}.json` — per-user progress snapshot
- `manifest.json` — registry of exams (published flags) and per-user finished set

## Security (minimal)

- Auth: simple token cookie, `userId` in session
- CORS/CSRF configured for the Angular origin
- No external services required

## Observability

- Request logs, error logs
- Autosave frequency metric (client-side)