Introducing Cereby Tutor: Your AI Study Partner Inside Every Quiz
Formative tutoring inside the quiz modal: context-aware, integrity-preserving, and tied to the material you actually studied.
The problem with generic chat in a quiz
Picture the usual workaround. You're stuck on question 7. You open a separate chat window, type out the question from memory, and hope the AI reconstructs the same stem and options you're looking at. Sometimes it works. Often it does not, because the model never saw your specific distractors, has no idea whether you've already submitted, and has no reason not to just tell you the answer.
That third part is the real problem. Assessment is useful only if you've genuinely tried. A general-purpose assistant has no policy about when to hold back.
Cereby had another issue on top of that. Quizzes are generated from your own notes and readings. A generic assistant cannot anchor its explanations to the same document the quiz came from, so it guesses at citations instead of pointing to the paragraph you uploaded.
What we built
Cereby Tutor lives directly in the quiz modal. You open it with the "Open quiz Cereby" button on any quiz item. From that point, it knows the full structured context of the active question: the stem, the answer options, the official explanation, and which question index you're on. It also knows your submission state, so it can apply different rules before and after you commit your answer.
Sessions are per-question. Clear the thread and you start fresh on that item. Move to question 8 and the history from question 7 does not follow you.
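The shapes below are a hypothetical sketch of that setup, not Cereby's actual payload: a context object carrying the fields the post describes (stem, options, official explanation, question index, submission state), and a session store keyed by question index so clearing one thread never touches another.

```typescript
// Hypothetical field names -- illustrative only, not Cereby's real schema.
interface QuizQuestionContext {
  questionIndex: number;        // which item the learner is on
  stem: string;                 // the exact question text shown in the UI
  options: string[];            // the real distractors, not a paraphrase
  officialExplanation: string;  // the keyed explanation
  submitted: boolean;           // drives the pre/post-reveal policy
}

// Per-question threads: history is keyed by question index, so moving
// from question 7 to question 8 starts with an empty thread.
class TutorSessions {
  private threads = new Map<number, string[]>();

  history(questionIndex: number): string[] {
    return this.threads.get(questionIndex) ?? [];
  }

  append(questionIndex: number, message: string): void {
    const thread = this.history(questionIndex);
    thread.push(message);
    this.threads.set(questionIndex, thread);
  }

  clear(questionIndex: number): void {
    this.threads.delete(questionIndex);
  }
}
```

The important design point is the map key: because history is indexed per question, "clear the thread" is a single-key delete rather than a filter over one shared log.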
Two modes, one policy decision
The design pivots on a single split: what can the tutor say before you submit, and what can it say after.
Before you submit, the tutor follows a "Quiz help" policy. It will clarify terms in the stem, offer general strategy (spotting negation, ruling out options that contradict each other), and, when the quiz is linked to a resource, draw on that reading to give context. What it will not do is tell you which option is correct. You still have to make that call.
After the UI reveals the official answer, the constraints relax. The tutor explains why the keyed answer is right. If you chose something else, it contrasts your pick against the correct one using the question metadata. It can expand or shorten explanations, include math or markdown, and, when source material is attached, point you to the passage that supports the answer. Replies are prefixed with "Question N:" so the thread stays anchored to the item you're on.
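That single policy decision can be sketched as a function of submission state. This is an illustration of the split described above, with made-up names; the real prompt wiring is not shown in the post.

```typescript
type TutorMode = "quiz-help" | "post-reveal";

interface TutorPolicy {
  mode: TutorMode;
  mayDiscloseAnswer: boolean;   // never true before the learner commits
  mayContrastChoices: boolean;  // wrong-option contrast is post-reveal only
}

// Hypothetical policy selector: everything branches on one boolean.
function tutorPolicy(submitted: boolean): TutorPolicy {
  return submitted
    ? { mode: "post-reveal", mayDiscloseAnswer: true, mayContrastChoices: true }
    : { mode: "quiz-help", mayDiscloseAnswer: false, mayContrastChoices: false };
}

// Replies stay anchored to the active item with a "Question N:" prefix.
function prefixReply(questionIndex: number, reply: string): string {
  return `Question ${questionIndex}: ${reply}`;
}
```

Keeping the split in one place, rather than scattering "be careful" instructions through the prompt, is what makes the integrity rule testable: you can assert that `mayDiscloseAnswer` is false whenever `submitted` is false.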
Source grounding, when it applies
When a quiz is created from or linked to a Cereby resource, the tutor can receive excerpts from that material. Learners can ask where the text supports the answer, why their choice diverged from what the source says, or for quotes tied to specific passages. Quizzes without an attached resource still work; the tutor falls back to the stem and official explanation fields.
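A minimal sketch of that fallback, under the assumption (the names here are invented) that linked resources arrive as a list of passage excerpts: ground on the excerpts when they exist, otherwise fall back to the stem and official explanation fields.

```typescript
interface ResourceExcerpt {
  passageId: string;  // lets the tutor quote "where the text supports the answer"
  text: string;
}

// Hypothetical grounding step: prefer linked-resource excerpts,
// fall back to the question's own fields when no resource is attached.
function groundingContext(
  excerpts: ResourceExcerpt[] | null,
  stem: string,
  officialExplanation: string
): string {
  if (excerpts && excerpts.length > 0) {
    return excerpts.map(e => `[${e.passageId}] ${e.text}`).join("\n");
  }
  return `${stem}\n${officialExplanation}`;
}
```

Tagging each excerpt with a passage identifier is what lets the tutor point at a specific paragraph instead of guessing a citation.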
Before and after
| Area | Generic chat | Cereby Tutor |
|---|---|---|
| Quiz structure | Paraphrased from memory | Full structured stem, options, and explanation |
| Integrity before submit | No policy | Quiz help mode: hints only, no answer disclosure |
| After submit | Ad hoc | Keyed explanation plus wrong-option contrast |
| Source alignment | Guessed citations | Grounded in the linked document when one exists |
What we learned building it
Binding context to the attempt is what makes any of this work. A tutor that does not have the same payload the UI does recreates exactly the ambiguity we were trying to eliminate.
Integrity rules need to be explicit. A vague instruction to "be careful not to give away answers" is harder to test and easier to circumvent than a clear pre/post-reveal policy baked into the prompt structure.
Per-question threads matter more than they might sound. One long conversation mixing questions 3, 5, and 7 erodes trust fast. Separate history per index keeps the thread legible and the feedback relevant.
