SI-Core for Individualized Learning and Developmental Support - From Raw Logs to Goal-Aware Support Plans
Draft v0.1 — Non-normative supplement to SI-Core / PLB / Metrics docs
This document is non-normative. It describes how to use SI-Core concepts (GCS, Effect Ledger, PLB, [ETH], [OBS], [MEM]) in individualized learning and developmental support settings.
Normative contracts live in SI-Core / SI-NOS specs, the metrics pack, and the GDPR / ethics supplements. This text is an implementation guide for people building real systems around learners and support staff.
1. Why bring SI-Core into learning and developmental support?
Most current “AI in education / support” systems look like this:
App → LLM / recommender → Logs (maybe) → Human (sometimes)
They can be useful, but they often lack:
- Explicit goals — “Improve test scores” is not the same as “Increase reading fluency without burning the learner out.”
- Traceability — Why did the system choose this exercise or this feedback, and did it actually help?
- Ethics runtime — How do we prevent subtle harm to vulnerable learners (e.g., overloading, shaming, biased expectations)?
- Structured learning loops — How do we learn from what works and what does not, across many learners, without erasing their individuality?
SI-Core gives us:
- Goal Contribution Scores (GCS) — a structured way to encode what we are trying to improve and how we measure it.
- An Effect Ledger — an append-only record of what support actions we took, for whom, in what context, and with what result.
- [ETH] ethics overlay — a runtime gate for what is allowed, not just a policy PDF.
- Pattern-Learning Bridge (PLB) — a disciplined way to let the system propose changes to its own behaviour, under governance.
This document shows how to map those pieces into the world of individualized learning and developmental support, with a bias toward practical PoC architectures.
2. Goals, GCS, and “what success looks like”
Before touching algorithms, you need a goal model.
In an SI-Core view, a learner or supported person is not a “test score generator.” They are a whole agent with multiple goals attached.
2.1 Example goal surface for a learner
A non-exhaustive set of goals:
- learner.skill.reading_fluency
- learner.skill.reading_accuracy
- learner.skill.math_fluency
- learner.wellbeing.stress_minimization
- learner.wellbeing.self_efficacy
- learner.relationship.trust_with_adults
- learner.autonomy.support_for_self_direction
Rule: avoid shorthand IDs (e.g., “stress_minimization”); always reference the fully-qualified goal ID used in the declared goal surface.
For each goal (g), a GCS estimator attempts to answer:
“Did this sequence of support actions move goal (g) in the right direction, given the learner’s context?”
Examples:
- After 4 weeks of reading support, reading fluency improved, but stress markers and avoidance behaviour went up. GCS should reflect this trade-off, not just the fluency gain.
- A social skills intervention improved peer interaction but temporarily increased short-term discomfort; for some goals, this might be acceptable and expected.
The important properties:
- Multi-goal: we never optimize a single scalar; we track a vector of contributions.
- Context-aware: what counts as “good” depends on learner profile, environment, and support plan.
- Ethics-compatible: some goals (e.g., stress minimization, respect for autonomy) are hard constraints, not “nice to have.”
You do not need a perfect GCS model on day one. You do need:
- names and rough definitions for the goals you care about;
- a baseline way to estimate progress (tests, observations, self-report);
- a plan for gradually improving those estimators over time.
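As a concrete starting point, the declared goal surface can be as simple as the following non-normative sketch. Field names such as estimator, direction, and hard_constraint are illustrative assumptions, not the normative schema:
goal_surface:
  learner: student_042                    # pseudonymized learner ID
  goals:
    - id: learner.skill.reading_fluency
      definition: "Words read correctly per minute on level-appropriate passages"
      estimator: weekly_oral_reading_probe # baseline: tests, observations, self-report
      direction: increase
    - id: learner.wellbeing.stress_minimization
      definition: "Self-reported stress plus avoidance-behaviour proxies"
      estimator: session_self_report
      direction: decrease
      hard_constraint: true               # treated as a constraint, not a trade-off
  review_cycle: quarterly                 # revisit and improve estimators over time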
3. Effects in learning: what goes into the Effect Ledger?
In SI-Core, an effect is a structured description of something the system did that changed the world.
In a learning / developmental support context, effects might include:
Content-level actions
- “Assigned reading passage X at level Y.”
- “Recommended math exercise set Z.”
- “Adjusted difficulty from B1 to A2 for the next session.”
Interaction-level actions (LLM / agent behaviour)
- “Provided scaffolded hint instead of direct answer.”
- “Switched to multiple-choice instead of open-ended writing.”
- “Used encouragement pattern E when learner hesitated.”
Schedule / plan actions
- “Moved session time from evening to afternoon.”
- “Increased / decreased session duration.”
Escalation / human handoff actions
- “Flagged repeated distress; notified teacher.”
- “Requested parent / guardian check-in.”
Each effect entry should minimally contain:
- Who — learner ID (pseudonymized), supporting agent, human staff.
- When & where — timestamps, session context.
- What — type of effect (content choice, feedback style, schedule).
- Why — which goal(s) and internal state led to this choice (policy ID, model version, GCS estimates, constraints checked).
- Outcome hooks — pointers to future outcome measurements (e.g., “check reading_fluency in 2 weeks”).
The Effect Ledger is not a surveillance tool; it is a learning and accountability tool:
- For the learner and family: “What was tried? What seemed to help?”
- For educators / therapists: “Which intervention patterns work for which profiles?”
- For PLB: “What structural changes to policies / content selection seem worth proposing?”
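As a non-normative sketch, a single Effect Ledger entry covering the fields above might look like this (field names and IDs are illustrative; the normative record format lives in the SI-Core specs):
effect_entry:
  effect_id: eff-2028-04-18-00374
  learner: student_042                    # pseudonymized
  actor: tutor_agent_v3                   # supporting agent or human staff
  timestamp: 2028-04-18T14:05:00Z
  context: { session: 58, location: school, device: tablet }
  effect_type: content_choice
  detail: "Assigned reading passage 'Volcanoes' at level 2, 120 words"
  rationale:
    policy_id: reading_selection_policy_v7
    model_version: tutor-2028.03
    goals: [learner.skill.reading_fluency, learner.wellbeing.stress_minimization]
    constraints_checked: [max_difficulty_jump, rest_break]
  outcome_hooks:
    - measure: learner.skill.reading_fluency
      due: 2028-05-02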
4. Ethics overlay [ETH] for vulnerable learners
Working with children or people with developmental differences requires stronger-than-usual runtime ethics.
Static policies are not enough; we need an Ethics Overlay that can gate effectful actions in real time.
4.1 Typical ethical constraints
Non-normative examples of [ETH] rules:
Respect load and fatigue
- Do not increase task difficulty when recent stress markers are high.
- Enforce rest breaks after N minutes of sustained effort.
Respect dignity
- Do not use shaming or negative labels, ever.
- Avoid “comparison to peers” unless explicitly configured.
Bias and fairness
- Avoid systematically offering fewer rich opportunities to certain learner profiles.
- Monitor recommendation patterns for demographic / diagnostic bias.
Consent and transparency
- Some data (e.g., sensitive diagnostic info) must never flow into external models.
- Learners and guardians must understand the kind of decisions the system makes and how to question them.
In runtime terms, [ETH] should be able to:
- Reject a proposed effect (e.g. “increase difficulty by 3 levels”) if it violates constraints.
- Suggest safer alternatives (e.g. “increase by 1 level, not 3”).
- Log an EthicsTrace entry, so we can later audit:
  - which policy version was used,
  - which constraints fired,
  - what alternatives were considered.
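A hedged sketch of what such an EthicsTrace entry could record (field names are illustrative assumptions, not a normative schema):
ethics_trace:
  trace_id: eth-2028-04-18-0021
  proposed_effect: "increase difficulty by 3 levels"
  decision: rejected
  policy_version: eth_config_v12
  constraints_fired:
    - max_difficulty_jump_per_session     # limit is 1 level in this config
    - recent_stress_markers_high
  alternatives_considered:
    - "increase difficulty by 1 level"
    - "keep current level, shorten passage"
  suggested_alternative: "increase difficulty by 1 level"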
4.2 Teacher / guardian as part of [ETH]
In many cases, the highest authority is not the system but the human ecosystem around the learner.
A reasonable pattern:
- For low-risk micro-decisions (order of exercises), [ETH] can decide autonomously, using configs approved by staff.
- For higher-risk decisions (significant plan changes, sensitive topics), [ETH] routes decisions into a human-in-loop queue:
  - “Proposed: switch to new reading program X; expected benefits, trade-offs, GCS estimate here. Approve / modify / reject.”
All such decisions are recorded with [ID] = human approver and [ETH] context.
This makes ethics a runtime collaboration between:
- encoded policies (ethics configs, constraints), and
- human educators / clinicians.
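For example, such a human-in-loop queue item and its resolution might be recorded roughly like this (an illustrative shape; the approver ID and field names are assumptions):
review_item:
  item_id: hil-2028-0113
  risk_level: high
  proposal: "Switch learner student_042 to reading program X"
  expected_benefits: "Projected GCS gain on reading_fluency over 8 weeks"
  trade_offs: "Possible short-term dip in self_efficacy during transition"
  gcs_estimate: { reading_fluency: +0.15, stress_minimization: -0.02 }
  decision: modified                      # approve / modify / reject
  approver: "[ID] teacher_ms_lee"         # human authority, recorded with [ETH] context
  notes: "Start with 2 sessions/week instead of 4; review after 3 weeks"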
4.3 Integration with existing assessments
Schools and clinics already have rich assessment ecosystems. SI-Core should integrate, not replace.
Pattern 1 — Assessments as [OBS] sources
Standardized test results arrive as observation units:
obs_unit:
  type: standardized_assessment
  source: state_reading_test_2028
  payload:
    score: 582
    percentile: 67
    subscales:
      fluency: 72
      comprehension: 61
  timestamp: 2028-03-15
  confidence: high
These feed into GCS estimation and PLB patterns, but do not override ongoing day-to-day observations.
Pattern 2 — GCS → report cards and narratives
Report cards remain teacher-authored:
- GCS trajectories and Effect Ledger summaries inform, but do not auto-generate, grades.
- Teacher narratives can be enriched with concrete pattern insights.
Example narrative snippet:
“Sarah has shown strong growth in reading fluency (GCS +0.23 over the semester). System data suggests she responds well to personally relevant topics and shorter passages. We recommend continuing this pattern in the next term.”
Pattern 3 — Aligning with IEP goals
IEP goal:
“Student will read 120 wpm with 95% accuracy by May.”
Mapping into SI-Core:
iep_goal:
  id: IEP-2028-sarah-reading-01
  si_core_goals:
    - learner.skill.reading_fluency
    - learner.skill.reading_accuracy
  target_values:
    reading_wpm: 120
    accuracy_pct: 95
  deadline: 2028-05-31
  progress_tracking: weekly
Dashboard shows:
- Trajectory toward the IEP goal.
- Interventions tried, with measured effects.
- Recommended adjustments for the next period.
Pattern 4 — MTSS / RTI integration
Tier 1 (universal):
- All learners have SI-Core support.
- GCS trajectories monitored against broad thresholds.
Tier 2 (targeted):
- Learners below certain thresholds flagged.
- PLB proposes intensified, evidence-based interventions.
- Teacher check-ins more frequent.
Tier 3 (intensive):
- Highly individualized plans.
- Stronger [ETH] constraints and more human oversight.
- Daily or session-level monitoring and specialist involvement.
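A minimal, assumed configuration for this kind of tiering might look as follows (threshold values and field names are purely illustrative):
mtss_config:
  tier_1:
    monitoring: weekly_gcs_trend
    flag_if: "gcs_trend below -0.05 over 4 weeks on any core goal"
  tier_2:
    interventions: plb_proposed_evidence_based
    teacher_checkin: biweekly
  tier_3:
    plan: highly_individualized
    eth_profile: strict                   # stronger [ETH] constraints
    monitoring: per_session
    specialist_required: true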
Key principle:
SI-Core augments existing assessment systems and professional judgment; it does not replace them with automated scoring.
5. Observation [OBS]: what do we observe, and how?
To do anything useful, SI-Core needs structured observations.
In a learning / support context, observations might include:
Performance signals
- correctness on tasks,
- time-to-complete,
- types of mistakes.
Engagement signals
- time-on-task,
- idle time,
- drop-offs,
- voluntary extra practice.
Affective / stress proxies (carefully handled)
- self-report (“this was easy / hard / stressful”),
- simple interaction features (e.g. rapid repeated errors),
- only physiological signals if explicitly consented and governed.
Contextual signals
- time of day,
- environment (school / home / clinic),
- device type / accessibility settings.
[OBS] should:
- turn raw interaction logs into semantic units such as session_performance_snapshot, engagement_state, and stress_risk_state;
- attach confidence and coverage metadata;
- record parsing / coverage status.
If Observation-Status != PARSED, the Jump enters UNDER_OBSERVED mode:
- it may produce alerts, requests for more observation, or conservative recommendations,
- but it MUST NOT execute any effectful ops (no RML-1/2/3; no automated plan changes),
- and high-risk decisions should additionally route into human review even for “pure” recommendations.
This is where semantic compression (art-60-007) comes in: we do not ship every keypress, but we do preserve the goal-relevant structure.
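A sketch of a semantically compressed observation unit together with its parsing status (field names are illustrative; the UNDER_OBSERVED behaviour above is the normative part):
obs_unit:
  type: session_performance_snapshot
  learner: student_042
  session: 58
  payload:
    correctness_rate: 0.72
    median_time_per_task_sec: 41
    error_pattern: repeated_decoding_errors
  engagement_state: declining
  stress_risk_state: elevated
  confidence: medium
  coverage: partial                       # some interaction logs missing
  observation_status: PARTIAL             # != PARSED, so the Jump stays in UNDER_OBSERVED mode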
6. Human-in-the-loop patterns for support systems
A realistic learning / support system is never fully autonomous.
We want clear, repeatable patterns where humans and SI-Core cooperate.
6.1 Teacher / therapist dashboards
A non-normative example of what a staff dashboard might show:
Per-learner view
- Current goals and GCS trajectory.
- Recent effects (interventions) and outcomes.
- Alerts (stress, non-response, unusual patterns).
Group / classroom view
- Aggregate GCS trends.
- Fairness / bias indicators (are certain learners consistently getting less challenging or less supportive interventions?).
What-if tools
- Simulated effect bundles and projected GCS changes.
- Suggestions from PLB that await approval.
Staff should be able to:
- approve / reject proposed plan changes;
- override specific effects (“do not use this pattern with this learner”);
- adjust goal weights (within constitutional bounds);
- mark manual interventions so they also enter the Effect Ledger.
6.2 Answering “Why did the system do X?”
When a teacher or guardian asks, “Why did the system give this exercise?” the system should be able to answer from:
- the Effect Ledger entry for that decision;
- the relevant [OBS] snapshot (performance, engagement);
- [ETH] checks (constraints that were applied);
- GCS estimates and trade-offs considered.
The answer does not need to be mathematically dense; it should be:
- structural: which goals, which constraints, which policy;
- transparent: when we are unsure, say so;
- actionable: what humans can change, and how.
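As a sketch, the structured material behind such an answer could be assembled like this (illustrative shape only; IDs and field names are assumptions):
why_explanation:
  question: "Why did the system give this exercise?"
  effect_ref: eff-2028-04-18-00374
  goals_served:
    - learner.skill.reading_fluency
  observations_used:
    - "session_performance_snapshot (correctness 72%, slowing pace)"
  constraints_applied:
    - max_difficulty_jump_per_session
    - rest_break_after_15_min
  trade_offs: "Chose a shorter passage to protect stress_minimization"
  uncertainty: "Engagement estimate had only partial coverage this week"
  human_overrides_available:
    - "Teacher can ban this passage type for this learner"
    - "Guardian can request a plan review"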
6.3 Training educators and therapists to work with SI-Core
SI-Core is not a “plug in and forget” system. It assumes trained humans who understand what they are looking at and when to override it.
Why training matters
- Dashboards require interpretation, not just reading numbers.
- “Why did the system do X?” explanations must be contextualized.
- Ethics overrides require informed judgment.
- PLB proposals need domain-expert scrutiny before adoption.
Suggested training modules
Module 1 — Mental model of SI-Core (≈2 hours)
- GCS as multi-goal trajectories, not grades.
- Effect Ledger as a structured history of supports.
- [ETH] as a runtime partner, not a PDF.
- PLB as a collaborator that proposes changes.
Module 2 — Dashboard literacy (≈3 hours)
- Reading GCS trajectories over weeks and months.
- Interpreting alerts (stress spikes, non-response, suspected bias).
- Using the “Why X?” feature with learners and parents.
- Approving or rejecting plan changes proposed by PLB.
Module 3 — Ethics collaboration (≈2 hours)
- When to override system suggestions.
- How to document manual interventions for later review.
- Recognizing potential bias in recommendations.
- When and how to escalate to governance / ethics teams.
Module 4 — Working with PLB (≈2 hours)
- Understanding pattern claims (“for learners like X, Y tends to help”).
- Reading sandbox results for proposed changes.
- Approving safe vs risky changes under budget constraints.
- Providing structured feedback back into PLB.
Ongoing support
- Monthly “office hours” with SI-Core experts.
- Peer learning circles for teachers / therapists.
- Regular case-study reviews (“what worked, what did not”).
- UI/UX improvements driven by educator feedback.
For non-technical professionals
- No programming required for day-to-day use.
- Visual interfaces for all core tasks.
- Plain-language explanations (avoid jargon in the UI).
- Inline help, guided workflows, and “safe defaults” everywhere.
6.4 Learner agency and voice
Core principle:
Learners are not objects to optimize; they are agents with preferences, rights, and histories.
Learner-facing interfaces
Age 7–12
Simple feedback:
- “I liked this / I didn’t like this.”
- “Too hard / too easy / just right.”
Visual progress views that are encouraging, not manipulative.
Very clear “stop / take a break” buttons.
Age 13–17
- Simplified “Why did I get this activity?” explanations.
- Goal preference sliders (“focus more on reading / on math / on social skills”).
- Challenge level controls (“today I want easier / normal / harder”).
- Opt-out from specific content types where appropriate.
18+
- Full transparency into their own data and history.
- Ability to adjust goal weights within policy bounds.
- Optional access to PLB proposals that affect them.
- Easy export of personal data and logs.
Contestation mechanisms
Learners can flag:
- “This exercise doesn’t make sense for me.”
- “I don’t understand why I got this.”
- “This feels unfair or biased.”
When flagged:
- The system presents a “Why X?” explanation in learner-friendly language.
- If the learner still contests, the issue is escalated to a teacher / therapist.
- Human and learner review the case together, with Effect Ledger context.
- They decide to override, adjust, or keep the plan, and log the reasoning.
- All contestations feed into PLB as higher-priority patterns.
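One way to record such a contestation end to end (an assumed, non-normative shape):
contestation:
  learner: student_042
  flag: "This exercise doesn't make sense for me"
  effect_ref: eff-2028-04-18-00374
  step_1: why_explanation_shown           # learner-friendly language first
  step_2:
    escalated_to: teacher_ms_lee          # learner still contested
  step_3_review:
    participants: [learner, teacher]
    ledger_context: last_4_weeks
    decision: adjust                      # override / adjust / keep
    reasoning: "Topic mismatch; switch to learner-chosen topics"
  plb_feedback: high_priority             # contestations feed PLB with priority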
Supporting self-advocacy
The system should explicitly teach learners how to:
- Read and interpret their own progress views.
- Request particular types of support.
- Question recommendations safely.
- Report discomfort, stress, or perceived unfairness.
With scaffolds such as:
- “How to talk to your teacher about this dashboard.”
- “Understanding what these graphs mean.”
- “Your rights when using this system.”
Cultural configuration:
- Some cultures emphasize deference to authority; some emphasize autonomy.
- SI-Core should allow configuration of “how strong” learner controls appear.
- But core dignity and safety rights are non-negotiable in all configurations.
7. Pattern-Learning Bridge (PLB) in education / support
PLB is SI-Core’s way of letting systems learn how to change themselves, without giving them unrestricted self-edit rights.
In this context, PLB might:
- analyze the Effect Ledger and outcomes,
- discover patterns like:
  - “For learners with profile P, pattern of interventions A tends to work better than B on goal learner.skill.reading_fluency, with no harm to learner.wellbeing.stress_minimization,”
- propose controlled changes to:
  - content selection policies,
  - sequencing rules,
  - hint strategies,
  - schedule heuristics.
7.1 PLB budgets for learning systems
We do not want an unconstrained system rewriting its own support logic around vulnerable learners.
A reasonable self-modification budget for education / support:
Scope budget
- PLB can propose changes in recommendation / sequencing logic.
- PLB cannot touch constitutional / ethics core, or override hard safety constraints.
Magnitude budget
- e.g., “In one proposal, do not change more than 10% of a policy’s weights,” or “do not increase pressure metrics (time, difficulty) by more than X% for any learner subgroup.”
Rate budget
- at most N accepted proposals per month;
- each proposal must run through sandbox + human review.
This keeps PLB in a coach / assistant role, not an uncontrollable self-rewriter.
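A minimal sketch of such a budget configuration (names and numbers are illustrative, not normative):
plb_budget:
  scope:
    allowed: [recommendation_policies, sequencing_rules, hint_strategies]
    forbidden: [constitutional_core, ethics_constraints, safety_limits]
  magnitude:
    max_policy_weight_change_pct: 10      # per proposal
    max_pressure_increase_pct: 5          # time / difficulty, per learner subgroup
  rate:
    max_accepted_proposals_per_month: 2
    required_steps: [sandbox_evaluation, human_review]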
8. Privacy, dignity, and data minimization
Educational and developmental data is deeply personal. Applying SI-Core here demands strong privacy and dignity protections.
Non-normative patterns:
Data minimization
- Only log what is needed for goals and safety.
- Avoid collecting sensitive signals (e.g., fine-grained facial analysis) unless absolutely necessary and consented.
Pseudonymization and compartmentalization
- Separate identifiers used for learning analytics from those used operationally.
- Restrict cross-context linking (school vs home vs clinic).
Right to explanation and contestation
- For significant plan changes, learners / guardians should be able to see the rationale and contest it.
Right to erasure
- Align Effect Ledger and model training pipelines with erasure / redaction mechanisms (see GDPR-related docs).
SI-Core’s [MEM] and redaction mechanisms should be configured so that:
- removal of a learner’s data can be propagated to models and analytics;
- audit trails still retain enough structure to demonstrate that the removal happened (without re-identifying the learner).
8.1 Consent mechanisms for learners and families
In education and developmental support, consent is not a one-time checkbox. It is an ongoing, age-appropriate relationship between the learner, their guardians, and the system.
Age-appropriate consent
Under 13
- Parent/guardian consent is required.
- Child assent is requested with age-appropriate explanations.
- Learner UI surfaces simplified “Why?” views and very limited controls.
Age 13–17
- Teen consent plus parent/guardian notification.
- Pathways to contest parent decisions where local law requires it.
- Gradual autonomy increase (e.g., teen can adjust goals within safe bounds).
18+
- Full autonomy for the learner.
- Optional family involvement if the learner explicitly opts in.
Consent granularity
Core consent (required to use the system):
- Use of the system for learning / developmental support.
- Collection of performance and interaction data.
- Creation of Effect Ledger entries tied to the learner.
Optional consents (configurable):
- Sharing anonymized patterns for research or model improvement.
- Use of affective signals (stress proxies, e.g. typing speed, hesitation).
- Cross-context linking (e.g. school + home interventions).
Example initial consent record:
consent_request:
  learner_id: student_042
  timestamp: 2028-04-15
  consents:
    - type: core_system_use
      status: granted
      grantor: parent
    - type: stress_monitoring
      status: granted
      grantor: parent
      conditions: ["teacher_oversight", "monthly_review"]
    - type: research_sharing
      status: declined
      grantor: parent
Ongoing consent management
- Parent/learner dashboards to view and change consents at any time.
- Periodic consent review reminders (e.g. every 3–6 months).
- Instant revocation capability for each optional consent.
- Clear impact explanations, e.g. “If you revoke X, Y will stop working.”
Withdrawal flow:
- Learner/parent requests withdrawal.
- System shows exactly what will stop and what will be deleted.
- Optional grace period (e.g. 7 days) for reconsideration.
- Apply erasure via redaction/crypto-shred (tombstone + proofs) across:
- Effect Ledger partitions,
- derived analytics stores,
- training / fine-tuning datasets, subject to legal constraints and retention obligations.
- Confirmation + audit trail recorded in [MEM] (without re-identifying the learner).
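A sketch of the audit artifact such a withdrawal might leave behind (illustrative fields; the normative mechanism lives in the GDPR-related supplements):
erasure_record:
  request_id: erase-2028-0192
  subject_ref: tombstone_7f3a             # no re-identifiable learner ID retained
  requested_by: guardian
  grace_period_ends: 2028-06-07
  actions:
    - store: effect_ledger_partitions
      method: crypto_shred
      proof: merkle_tombstone_7f3a
    - store: derived_analytics
      method: delete_and_recompute
    - store: training_datasets
      method: redact_next_cycle           # subject to legal retention constraints
  confirmed_in_mem: true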
Special considerations:
- Neurodivergent learners may need AAC-friendly consent UX.
- Multilingual, culturally sensitive consent materials.
- Explicit checks that consent mechanisms themselves do not pressure or manipulate.
9. Migration path: from today’s systems to SI-wrapped support
You do not need a full SI-Core stack to start.
A pragmatic path for a PoC:
9.1 Phase 0 — Existing tutor / support agent
- App + LLM-based tutor / support agent.
- Ad-hoc logs, limited structure.
9.2 Phase 1 — Structured logging and goals
Define a minimal goal surface (the GCS dimensions you care about):
- e.g., reading fluency, engagement, stress.
Introduce structured session logs with:
- effects (what content, what feedback),
- simple outcomes (scores, self-reports),
- context (time, environment).
9.3 Phase 2 — Effect Ledger + basic [ETH]
Turn session logs into a proper Effect Ledger.
Implement a basic Ethics Overlay:
- maximum difficulty jump per session,
- rest-break enforcement,
- pattern bans for certain profiles.
Expose a teacher dashboard for oversight.
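A minimal Phase 2 ethics configuration might be as small as this sketch (rule names and values are assumptions for illustration):
eth_config_v1:
  max_difficulty_jump_per_session: 1
  rest_break:
    after_minutes: 15
    duration_minutes: 3
  banned_patterns:
    - pattern: peer_comparison_feedback
      applies_to: all_learners
    - pattern: timed_pressure_prompts
      applies_to: profile_anxiety_flagged
  escalation:
    repeated_distress_sessions: 2         # notify teacher after 2 flagged sessions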
9.4 Phase 3 — PLB suggestions
Run PLB offline over the ledger:
- detect patterns where simple changes (e.g. different content ordering) helped.
Propose small, scoped policy patches:
- run them in sandbox;
- show projected impact to staff;
- apply only with human approval.
9.5 Phase 4 — SI-Core L2 integration
Wrap the tutor / support agent in an SI-Core L2 runtime:
- [OBS] receives semantic learner state;
- [ETH] and [EVAL] gate effectful actions;
- [MEM] records jumps and effects;
- PLB operates within strict budgets.
At each step, the core guarantees improve:
- more traceability,
- clearer ethics boundaries,
- safer and more interpretable adaptation.
10. Concrete use cases
To keep this from staying abstract, here are three concrete, illustrative patterns for SI-Core in learning and developmental support (examples, not universal claims).
Use case 1: Reading fluency support (ages 7–9)
Goals:
- learner.skill.reading_fluency (primary)
- learner.skill.reading_comprehension
- learner.skill.reading_accuracy
- learner.wellbeing.stress_minimization
- learner.wellbeing.self_efficacy
Typical effects:
- Passage selection (level, topic, length).
- Hint provision (phonics hints vs context hints).
- Pacing (short sessions, scheduled breaks).
- Gentle progress feedback (non-comparative).
[ETH] constraints:
- Maximum sustained reading time (e.g. ≤ 15 minutes without break).
- No difficulty increase while error rate > 30%.
- Topic selection must respect learner interests and cultural constraints.
- No shaming language (“easy”, “everyone else can do this”).
Patterns PLB tends to discover:
- Short passages (100–150 words) outperform long ones for many learners.
- Personalized topics often increase engagement (exact lift varies by cohort, implementation, and measurement window).
- Phonics-oriented hints often help decoding more than generic “try again” prompts.
Teacher dashboard typically shows:
- Weekly trajectory of reading fluency and comprehension.
- Stress-related incidents (e.g. abrupt disengagement, negative feedback).
- Suggested next passages, with Why X? views referencing GCS and Effect Ledger snippets.
Use case 2: Math problem solving (middle school)
Goals:
- learner.skill.math_fluency.algebra
- learner.skill.problem_solving
- learner.wellbeing.challenge_tolerance
- learner.autonomy.self_direction
Typical effects:
- Problem set selection (topic, difficulty, representation).
- Scaffolding level (worked examples → hints → independent).
- Collaboration suggestions (pair work, small groups).
- Time-boxing and pacing controls.
[ETH] constraints:
- Honor “challenge preference” settings (e.g. “no surprise tests”).
- Respect accommodations (e.g. “no timed tests”, “no mental arithmetic”).
- Avoid comparative feedback that induces shame (“others solved this faster”).
Patterns PLB tends to discover:
- Gradual difficulty ramps outperform sudden jumps.
- Early use of worked examples, then hints, then independent practice yields better retention.
- Peer collaboration helps ~60% of learners, but can hurt some anxious learners → must be goal-conditioned.
Use case 3: Social skills support (autism spectrum)
Goals:
- learner.skill.social.conversation_turn_taking
- learner.skill.social.emotion_recognition
- learner.wellbeing.anxiety_minimization
- learner.autonomy.safe_practice_space
Typical effects:
- Scenario presentation (video / cartoon / text / interactive role-play).
- Feedback timing (immediate micro-feedback vs delayed session summary).
- Repetition schedules for consolidation.
- Notification hooks to parents / therapists.
[ETH] constraints:
- Respect sensory sensitivities (no loud or flashing content).
- Never surface “practice failures” in public or group contexts.
- Anxiety exposure must be gradual and reversible (no forced flooding).
- Explicit “pause / stop now” control for the learner.
Patterns PLB tends to discover:
- Video-based scenarios often outperform pure text for some learners; for others, low-stimulus text is safer.
- Delayed, session-level feedback can reduce anxiety compared to real-time correction.
- 3–5 repetitions of the same scenario under slightly varied conditions are often needed for stable skill transfer.
Teacher / therapist dashboard:
- Scenario completion patterns and time-on-task.
- Anxiety markers by scenario type (from explicit feedback and indirect signals).
- Suggestions for in-person follow-ups with parents / therapists.
Each use case follows the same template:
- Goal surface (which GCS dimensions matter).
- Effect types (what the system is allowed to do).
- [ETH] rules (what it is not allowed to do).
- PLB insights (what patterns emerged).
- Human oversight views (what professionals see and can override).
11. Summary
Structured Intelligence Core is not just for cities and power grids. It can — and arguably should — be used wherever systems interact with vulnerable humans over long periods.
For individualized learning and developmental support, SI-Core helps answer:
- What are we really trying to optimize? (goal surface / GCS vector)
- What exactly did we do, and what seemed to help? (Effect Ledger)
- Are we staying inside ethical and safety boundaries? ([ETH])
- How can the system learn better support patterns without going off the rails? (PLB with budgets)
The goal is not to replace teachers, therapists, or families. It is to build systems that:
- make their decisions explicit and auditable;
- respect the dignity and autonomy of each learner;
- can be tuned and questioned by the humans who care about them;
- get better over time, under governance, rather than “mysteriously drifting.”
That is what SI-Core for individualized learning and developmental support aims to provide.