MixtapeTools

Tools for coding, teaching, and presentations with AI assistance.


About This Repo

This is a collection of tools, templates, and philosophies I've developed while using Claude Code for:

  • Coding (data analysis scripts, replication code, automation)
  • Teaching (course materials, lecture decks, pedagogical tools)
  • Presentations (Beamer decks, slides for talks and seminars)

As I develop new approaches, I'll add them here. Anyone is free to use them.

Take everything with a grain of salt. These are workflows that work for me. Your mileage may vary.


Who I Am

Scott Cunningham — Professor of Economics at Baylor University


Start Here: My Workflow

Location: workflow.md | Deck: presentations/examples/workflow_deck/

Before diving into specific tools, read my workflow document. It explains how I think about using Claude Code for empirical research—not just the tools, but the philosophy behind them.

Key concepts:

| Concept | What It Means |
| --- | --- |
| Thinking partner, not code monkey | Claude is a collaborator who reasons about problems, not just a code generator |
| External memory via markdown | Claude has amnesia between sessions; markdown files provide institutional memory |
| Cross-software replication | R = Stata = Python to 6 decimal places, or something is wrong |
| Adversarial review (Referee 2) | Fresh Claude instance audits your work; you can't grade your own homework |
| Verification through visualization | Trust pictures over numbers; errors become visible |
| Documentation as first-class output | If it's not documented, it didn't happen |

Everything else in this repo implements these principles.


The Tools

1. Referee 2 (Systematic Audit & Replication Protocol)

Location: personas/referee2.md

The single most valuable practice I've developed. Referee 2 is a health inspector for empirical research—not a vague "be critical" persona, but a systematic audit protocol with five specific audits, cross-language replication, formal referee reports, and a revise & resubmit process.

The Five Audits:

| Audit | What It Does |
| --- | --- |
| Code Audit | Scrutinizes for coding errors, missing-value handling, merge diagnostics, variable construction |
| Cross-Language Replication | Creates replication scripts in two other languages (R/Stata/Python), compares results to 6 decimal places |
| Directory Audit | Checks folder structure, relative paths, naming conventions: is this replication-package ready? |
| Output Automation Audit | Are tables and figures programmatically generated or manually created? |
| Econometrics Audit | Are specifications coherent? Standard errors correct? Identification plausible? |
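The Output Automation Audit asks whether any number ever passes through human hands on its way into a table. As an illustration (not code from this repo), here is one way to write a LaTeX table programmatically in Python; the estimates are made-up placeholder numbers:

```python
# Illustrative sketch: generate a LaTeX table from stored estimates so
# no number is ever retyped by hand. The estimates dict is fake data.
estimates = {"Treatment": (0.1234567, 0.0456789),
             "Control mean": (2.3456789, 0.1234567)}

rows = []
for name, (coef, se) in estimates.items():
    # Fixed precision applied in code, not by manual rounding.
    rows.append(f"{name} & {coef:.4f} & ({se:.4f}) \\\\")

table = "\n".join([
    r"\begin{tabular}{lcc}",
    r"Variable & Estimate & (SE) \\ \hline",
    *rows,
    r"\end{tabular}",
])
print(table)
```

Re-running the analysis regenerates the table; a manually edited table would fail this audit.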

Critical Rule: Referee 2 NEVER modifies author code. It only creates its own replication scripts. The author is the only one who modifies the author's code. This separation ensures the audit is truly independent.

2. The Rhetoric of Decks

Location: presentations/

My philosophy of slide design, plus a tested prompt for generating Beamer presentations. The key insight: aim for equal marginal benefit per marginal cost (MB/MC equivalence) across slides, i.e. smoothness, not maximum density.

Core principles:

  • Beauty earns attention; attention enables communication
  • Titles are assertions, not labels
  • One idea per slide
  • Bullets are defeat—find the structure hiding in your list

3. CLAUDE.md Template

Location: claude/CLAUDE.md

A template for giving Claude persistent memory within a project. Copy it to your project root and fill in the specifics. Claude Code will automatically read it every session.
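The real template lives in claude/CLAUDE.md; as a rough, hypothetical illustration only (section names invented for this example), a filled-in file might look something like:

```markdown
# Project: (your project name)

## Goal
One-paragraph statement of the research question.

## Data
- data/raw/: source files (never modified)
- data/clean/: built by the cleaning scripts in code/

## Conventions
- Primary language: R; replications in Stata and Python
- All tables and figures are generated programmatically into output/
```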


Repository Structure

MixtapeTools/
├── README.md                 # You are here
├── workflow.md               # How I use Claude Code for research (START HERE)
├── claude/                   # Templates for working with Claude
│   ├── CLAUDE.md            # Project context template (copy to your projects)
│   └── README.md
├── personas/                 # Systematic audit & replication protocols
│   ├── referee2.md          # The 5-audit protocol for empirical research
│   └── README.md
└── presentations/            # Everything about slide decks
    ├── rhetoric_of_decks.md           # Practical principles (condensed)
    ├── rhetoric_of_decks_full_essay.md # Full intellectual framework (600+ lines)
    ├── deck_generation_prompt.md      # The prompt + iterative workflow
    ├── README.md
    └── examples/
        ├── workflow_deck/             # Visual presentation of the workflow
        ├── rhetoric_of_decks/         # The philosophy deck (45 slides)
        └── gov2001_probability/       # A lecture deck

The Philosophy

Design Before Results

During estimation and analysis, focus entirely on whether the specification is correct. Results are meaningless until the "experiment" is designed on purpose. Don't get excited or worried about point estimates until the design is intentional.

Trust But Verify (Heavily on Verify)

AI makes confident mistakes. Cross-software replication (R = Stata = Python) catches bugs that single-language analysis misses. If results aren't identical to 6+ decimal places across implementations, something is wrong.
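The cross-software check can be sketched mechanically. Assuming each implementation exports its estimates to a CSV with `term` and `estimate` columns (a hypothetical layout, not the repo's actual convention), a comparison to 6 decimal places might look like:

```python
import csv

# Illustrative sketch: compare estimates exported by two implementations
# (e.g. R and Python) to 6 decimal places. File names and the
# "term,estimate" column layout are assumptions for the example.
def estimates_match(file_a, file_b, tol=1e-6):
    def load(path):
        with open(path, newline="") as f:
            return {row["term"]: float(row["estimate"])
                    for row in csv.DictReader(f)}
    a, b = load(file_a), load(file_b)
    if a.keys() != b.keys():
        return False  # implementations disagree on the model itself
    return all(abs(a[k] - b[k]) <= tol for k in a)
```

Any mismatch beyond the tolerance is treated as a bug until explained, not rounded away.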

Adversarial Review Requires Separation

If you ask the same Claude that wrote code to review it, you're asking a student to grade their own exam. True adversarial review requires a new terminal with fresh context and no prior commitments.

Referee 2 Never Modifies Author Code

The audit must be independent. Referee 2 creates its own replication scripts but never touches the author's code. Only the author modifies the author's code. This separation ensures the audit is truly external.

Formal Process > Informal Vibes

Checklists beat intuition. The Referee 2 protocol works because it specifies exactly what to check, requires concrete deliverables (replication scripts, comparison tables, referee reports), and creates a paper trail.

Documentation Is First-Class Output

If it's not documented, it didn't happen. Every audit produces a dated referee report filed in correspondence/. Every response is documented. Replication scripts are permanent artifacts. Future you (or your collaborators) can reconstruct exactly what happened.


Quick Start

1. Read the Workflow

Start with workflow.md to understand the philosophy.

2. Set Up a Project

Copy claude/CLAUDE.md to your project root. Fill in your project specifics.

3. Do Your Analysis

Work with Claude as a thinking partner, not a code generator. Ask it to explain its understanding. Verify outputs visually. Document as you go.

4. Invoke Referee 2

When you have results worth checking:

  1. Open a new terminal (fresh context is essential)
  2. Paste the contents of personas/referee2.md
  3. Say: "Please audit and replicate the project at [path]. Primary language is [R/Stata/Python]."
  4. Respond to the referee report (fix or justify each concern)
  5. Iterate until verdict is Accept
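Steps 2 and 3 can be assembled with a small helper. This is a hypothetical Python sketch, not part of the repo; the file path and request wording are illustrative:

```python
from pathlib import Path

# Hypothetical helper: build the Referee 2 kickoff message to paste
# into a fresh Claude Code session.
def build_referee2_prompt(protocol_path, project_path, language):
    protocol = Path(protocol_path).read_text()
    request = (f"Please audit and replicate the project at {project_path}. "
               f"Primary language is {language}.")
    return f"{protocol}\n\n{request}"
```

The point of pasting the full protocol rather than referring to it is that the fresh session has no memory of the project or of any prior audit rounds.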

Project Directory Structure

For the Referee 2 workflow to function properly, your research projects should include:

your_project/
├── CLAUDE.md                 # Project context for Claude
├── correspondence/
│   └── referee2/
│       ├── 2026-02-01_round1_report.md      # Detailed written report
│       ├── 2026-02-01_round1_deck.pdf       # Visual presentation of findings
│       ├── 2026-02-02_round1_response.md    # Author response
│       └── ...
├── code/
│   ├── R/                    # Author's code (ONLY author modifies)
│   ├── stata/
│   ├── python/
│   └── replication/          # Referee 2's replication scripts
├── data/
│   ├── raw/
│   └── clean/
└── output/
    ├── tables/
    └── figures/
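A minimal Python sketch that creates this skeleton (folder names are taken from the tree above; the helper itself is not part of the repo):

```python
from pathlib import Path

# Illustrative: scaffold the project layout shown in the README.
def scaffold(root):
    root = Path(root)
    for d in ["correspondence/referee2",
              "code/R", "code/stata", "code/python", "code/replication",
              "data/raw", "data/clean",
              "output/tables", "output/figures"]:
        (root / d).mkdir(parents=True, exist_ok=True)
    (root / "CLAUDE.md").touch()  # fill in from claude/CLAUDE.md
```

Keeping code/replication/ separate from the author's language folders is what lets Referee 2 work without ever touching author code.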

Contributing

Have improvements or additions? PRs welcome. I'm particularly interested in:

  • Additional audit protocols (security reviewer, pedagogy reviewer, etc.)
  • Examples showing the Referee 2 workflow catching real bugs
  • Tools for other aspects of coding and teaching

Acknowledgments

Inspired by Boris Cherny's ChernyCode template for AI coding best practices.


License

Use freely. Attribution appreciated but not required.


Last updated: February 2026
