From a49bce4a5a03253e4bd7fac2fb890cdf01eafcb8 Mon Sep 17 00:00:00 2001
From: lila
Date: Wed, 1 Apr 2026 01:22:21 +0200
Subject: [PATCH] adding tasks

---
 documentation/notes.md   | 3 ++-
 documentation/roadmap.md | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/documentation/notes.md b/documentation/notes.md
index 470b974..364d75c 100644
--- a/documentation/notes.md
+++ b/documentation/notes.md
@@ -3,7 +3,8 @@
 ## tasks
 
 - pinning dependencies in package.json files
-- add this to drizzle migrartions file:
+- rethink organisation of datafiles and wordlists
+- add this to drizzle migrations file:
   ✅ ALTER TABLE terms ADD CHECK (pos IN ('noun', 'verb', 'adjective', etc));
 
 ## openwordnet
diff --git a/documentation/roadmap.md b/documentation/roadmap.md
index 24c71ae..5fba9ee 100644
--- a/documentation/roadmap.md
+++ b/documentation/roadmap.md
@@ -25,12 +25,12 @@
 Goal: Word data lives in the DB and can be queried via the API.
 Done when: `GET /api/decks/1/terms?limit=10` returns 10 terms from a specific deck.
 [x] Run `extract-en-it-nouns.py` locally → generates `datafiles/en-it-nouns.json`
--- Import ALL available OMW noun synsets (no frequency filtering)
 [x] Write Drizzle schema: `terms`, `translations`, `language_pairs`, `term_glosses`, `decks`, `deck_terms`
 [x] Write and run migration (includes CHECK constraints for `pos`, `gloss_type`)
 [x] Write `packages/db/src/seed.ts` (imports ALL terms + translations, NO decks)
 [x] Download CEFR A1/A2 noun lists (from GitHub repos)
 [ ] Write `scripts/build_decks.ts` (reads external CEFR lists, matches to DB, creates decks)
+[ ] check notes.md
 [ ] Run `pnpm db:seed` → populates terms
 [ ] Run `pnpm db:build-decks` → creates curated decks
 [ ] Define Zod response schemas in `packages/shared`
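
Note on the migration task above: the `ALTER TABLE terms ADD CHECK` line in notes.md leaves the value list open ("etc"). One way to keep that list in a single place is to generate the statement from an explicit array. A minimal TypeScript sketch, with the caveat that the `posCheckSql` helper and the `adverb` entry are assumptions for illustration, not part of this patch:

```typescript
// Assumed part-of-speech values; the note's "etc" leaves the real list open.
const POS_VALUES = ["noun", "verb", "adjective", "adverb"] as const;

// Build the ALTER TABLE ... ADD CHECK statement from one explicit value list,
// so the migration and any validation code can share it.
function posCheckSql(
  table: string,
  column: string,
  values: readonly string[],
): string {
  const quoted = values.map((v) => `'${v}'`).join(", ");
  return `ALTER TABLE ${table} ADD CHECK (${column} IN (${quoted}));`;
}

console.log(posCheckSql("terms", "pos", POS_VALUES));
// ALTER TABLE terms ADD CHECK (pos IN ('noun', 'verb', 'adjective', 'adverb'));
```

The generated SQL string would then be pasted into (or emitted by) the Drizzle migration file the task refers to.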