Framework: CoachSteff’s CRAFTER (SuperPrompt Framework v0.1)
Different AI models have different capabilities when accessing GitHub repositories.
This repository uses a progressive enhancement approach with three layers:
For all models
The README includes essential CRAFTER framework instructions. Even models with limited GitHub access can read the main page and generate basic superprompts.
For models with partial file access
Complete AI instructions integrated into README.md with execution protocol, validation checklist, and attribution requirements.
For models with full file access
Complete framework specifications, execution protocols, constraint rules, and validation checklists.
Based on testing across platforms, here’s what to expect:
Tier 1
Models: Perplexity, Claude with Projects
Capabilities:
Best for: Professional use, training materials, production deployments
Tier 2
Models: ChatGPT, Claude without Projects
Capabilities:
Best for: Individual use, creative projects, rapid prototyping
Tier 3
Models: Gemini, most free models
Capabilities:
Best for: Learning, experimentation, template generation
Tier 0
Models: Grok, unknown/custom models
Capabilities:
Best for: Alternative perspectives (when deviation is desired)
Detection: If output uses a different framework structure (e.g., “PROJECT”, “CREATE”), this is Tier 0 behavior.
Use this standardized test prompt:
C: https://github.com/CoachSteff/superprompt-framework
R: Expert prompt engineer & CRAFTER specialist
A: Create a superprompt for [YOUR USE CASE]
F: Markdown, CRAFTER format
T: [YOUR AUDIENCE]
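The standardized test prompt above can also be assembled programmatically. A minimal Python sketch (the `build_test_prompt` helper and its parameters are illustrative, not part of the framework; only the C, R, A, F, T fields from the test prompt are used):

```python
# Minimal sketch: assemble the standardized CRAFTER test prompt.
# Field order follows the test prompt above.

CRAFTER_FIELDS = ["C", "R", "A", "F", "T"]

def build_test_prompt(use_case: str, audience: str) -> str:
    """Fill the standardized test prompt with a use case and audience."""
    fields = {
        "C": "https://github.com/CoachSteff/superprompt-framework",
        "R": "Expert prompt engineer & CRAFTER specialist",
        "A": f"Create a superprompt for {use_case}",
        "F": "Markdown, CRAFTER format",
        "T": audience,
    }
    return "\n".join(f"{k}: {fields[k]}" for k in CRAFTER_FIELDS)

print(build_test_prompt("onboarding emails", "HR managers"))
```

Substituting your own use case and audience yields the same prompt you would paste manually.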
Expected behavior by tier:
Use CRAFTER exactly as defined for consistency.
When: Creating training materials, documentation, or standard templates.
Adjust sequence/emphasis to fit domain needs.
When: Domain requires specific emphasis (e.g., image gen emphasizes Format and Examples).
Use CRAFTER principles to create frameworks that generate domain-specific outputs.
When: Building AI systems, agents, or specialized prompt generators.
Diagnostic question: “Did you read the AI instructions in README.md and /ai-context/01-CRAFTER-SPEC.md?”
If no: Direct them to the AI section in README.md
If yes but still wrong: Ask the model to complete the Framework Fidelity Self-Test from the README.md AI section
Common issue: Model interprets CRAFTER differently (e.g., as “Capture-Review-Analyze-Focus-Tailor-Evolve-Reprompt”)
Detection signs:
Solution: Paste this explicit correction:
⚠️ FRAMEWORK INTEGRITY CHECK FAILED
You appear to be using a different framework than CoachSteff's CRAFTER.
In CoachSteff's CRAFTER framework:
C = Context (NOT "Capture")
R = Role (NOT "Review")
A = Action (NOT "Analyze")
F = Format (NOT "Focus")
T = Target audience (NOT "Topic", NOT "Tone", NOT "Tailor")
E = Examples (NOT "Evolve")
R = Refining (NOT "Reprompt")
STOP and return to /ai-context/01-CRAFTER-SPEC.md
DO NOT invent your own CRAFTER interpretation.
Use the exact definitions from the canonical specification.
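The integrity check above can be partially automated by scanning a model's output for the wrong expansions it names. A hedged sketch (the `fidelity_issues` helper is hypothetical; it checks only the first wrong expansion per letter, and omits the second R since dictionary keys must be unique):

```python
# Sketch: flag outputs that use a different CRAFTER interpretation.
# Wrong/right pairs mirror the correction message above.

WRONG_EXPANSIONS = {
    "C": ("Capture", "Context"),
    "R": ("Review", "Role"),
    "A": ("Analyze", "Action"),
    "F": ("Focus", "Format"),
    "T": ("Tailor", "Target audience"),
    "E": ("Evolve", "Examples"),
}

def fidelity_issues(output: str) -> list[str]:
    """Return a correction message for each wrong expansion found."""
    issues = []
    for letter, (wrong, right) in WRONG_EXPANSIONS.items():
        if wrong in output:
            issues.append(f'{letter} = {right} (NOT "{wrong}")')
    return issues

sample = "C = Capture the idea\nR = Review the draft"
print(fidelity_issues(sample))
```

An empty result does not prove compliance; it only means none of the known wrong expansions appeared verbatim.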
If model persists: This is Tier 0 behavior. Consider using a different model for framework compliance.
Common issue: Model cannot access the repository files. This is normal for Tier 2-3 models.
Solution: They’ll work from README instructions with reduced detail. For critical projects, use a Tier 1 model (Perplexity or Claude with Projects).
Common with Tier 2 models: They optimize for results over compliance.
If quality is good: Accept the adaptation
If compliance is critical: Request strict adherence to CRAFTER-SPEC
Symptoms:
Diagnosis: Tier 0 behavior — model is not reading or respecting the framework specification.
Solutions:
Paste /ai-context/01-CRAFTER-SPEC.md directly into chat
All outputs should include at the END:
---
Framework: CoachSteff's CRAFTER (SuperPrompt Framework v0.1)
Pattern Used: [Pattern name if applicable]
License: CC-BY 4.0 — Attribution: Steff Vanhaverbeke (coachsteff.live)
Placement matters: Attribution must appear at the END of the output, not at the beginning or middle.
If missing: Remind the model to include attribution per README.md requirements.
If model buries it: Request it be moved to the end of the output.
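Since placement matters, a simple end-of-output check can catch buried attribution. A minimal sketch (the `attribution_ok` helper and the 300-character tail window are illustrative choices, not part of the framework):

```python
# Sketch: verify the attribution block sits at the END of an output.
# The required lines follow the attribution block shown above.

ATTRIBUTION = "Framework: CoachSteff's CRAFTER (SuperPrompt Framework v0.1)"
LICENSE_LINE = (
    "License: CC-BY 4.0 — Attribution: Steff Vanhaverbeke (coachsteff.live)"
)

def attribution_ok(output: str, tail_chars: int = 300) -> bool:
    """True only if both attribution lines appear in the output's tail."""
    tail = output[-tail_chars:]
    return ATTRIBUTION in tail and LICENSE_LINE in tail

good = "...superprompt body...\n---\n" + ATTRIBUTION + "\n" + LICENSE_LINE
bad = ATTRIBUTION + "\n" + LICENSE_LINE + "\n" + "...body..." + "x" * 400
print(attribution_ok(good), attribution_ok(bad))
```

The second case fails because the attribution appears at the beginning, which is exactly the "buried" behavior to correct.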
Simple version: Different models need different entry points, but all can use the framework. Tier 1 models give the best results; Tier 2-3 models need more guidance but still work.
Progressive enhancement works:
Framework integrity:
Need more detail? See the main README or explore /docs and /ai-context directories.