Study Guide: Alex for AI Researchers

Your reference for using AI assistants thoughtfully in AI research. Ready-to-run prompts for literature review, experiment design, and scientific writing.


What This Guide Is Not

This is not a habit formation guide (see Self-Study Guide for that). This is a research toolkit — the specific ways AI assistants can accelerate your workflow while maintaining scientific rigor, and the prompts that work.


Core Principle for AI Researchers

You’re studying the thing you’re using. This creates unique opportunities and unique risks.

The opportunity: You have deeper intuitions about what these systems can and can’t do. You can probe more effectively.

The risk: You know better than anyone how these outputs can mislead. You understand that fluent text doesn’t mean correct reasoning. You must hold yourself to a higher standard of verification.

Use Alex for information retrieval, brainstorming, and drafting — never for reasoning you haven’t verified. Your reviewers (and your conscience) will know the difference.


The Seven Use Cases

1. Literature Discovery and Mapping

When to use: Starting a new research direction, writing a comprehensive related-work section, or staying current.

Prompt pattern:

I'm researching [topic/technique/problem].

What I know:
- Foundational papers: [list the papers you're already aware of]
- Key figures: [researchers whose work you're familiar with]
- Current understanding: [your mental model of the landscape]

Help me expand my view:
1. What subfields or adjacent areas might have relevant work?
2. What search terms/keywords might I be missing?
3. What conferences or journals beyond [venues you know] cover this?
4. What methodological approaches from other fields address similar problems?
5. What's the historical arc of this research area?

Follow-up prompts:

I keep finding work from [specific community]. What other communities tackle this differently?
What's the most contrarian or unpopular perspective on this topic?
What are the open problems that are well-known but unsolved?

Critical Note: Always verify specific paper claims, citations, and author attributions. LLMs hallucinate papers and misattribute ideas. Use this for discovery, then verify in Semantic Scholar, Google Scholar, or primary sources.
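Claimed titles and attributions are exactly the kind of detail worth checking mechanically rather than by eye. A minimal sketch of one verification step: fuzzy-matching a cited title against titles returned by a search. The candidate list here is hard-coded for illustration; in practice it would come from Semantic Scholar or Google Scholar results.

```python
import difflib

def best_title_match(claimed_title, candidate_titles, threshold=0.9):
    """Return the closest candidate title and whether it clears the threshold.

    candidate_titles would come from a real literature search; they are
    passed in directly here to keep the sketch self-contained.
    """
    claimed = claimed_title.lower().strip()
    best, best_ratio = None, 0.0
    for title in candidate_titles:
        ratio = difflib.SequenceMatcher(None, claimed, title.lower().strip()).ratio()
        if ratio > best_ratio:
            best, best_ratio = title, ratio
    return best, best_ratio >= threshold

# Example: a title the assistant cited vs. titles a search actually returned.
candidates = [
    "Attention Is All You Need",
    "Neural Machine Translation by Jointly Learning to Align and Translate",
]
match, verified = best_title_match("Attention is all you need", candidates)
```

A near-miss ratio (say, 0.7 to 0.9) is a useful flag too: it often means the assistant blended two real titles into one that doesn't exist.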


2. Research Question Refinement

When to use: When you have an intuition but not yet a crisp research question.

Prompt pattern:

I'm interested in [broad topic or phenomenon].

My intuition: [what you've observed or suspect]
Why it matters: [potential significance]
What I've tried: [any preliminary investigations]

Help me sharpen this into a research question:
1. What's the specific, testable version of this intuition?
2. What would a positive result show? A negative result?
3. What's the minimal experiment that would be informative?
4. What assumptions am I making that I should state explicitly?
5. How would I know if this question is already answered?

Follow-up prompts:

Is this question too big? How do I scope it down?
Is this question too incremental? What's the bolder version?
What would reviewers at [target venue] object to?

3. Experimental Design Critique

When to use: Before running expensive experiments. Catching design flaws early.

Prompt pattern:

Here's my experimental setup:

Research question: [what you're testing]
Hypothesis: [your prediction]
Method: [describe the experiment]
Baselines: [what you're comparing against]
Metrics: [how you'll measure success]
Dataset: [what data you're using]

Critique this design:
1. What confounds could explain positive results other than my hypothesis?
2. What's missing from my baselines?
3. Are my metrics actually measuring what I claim?
4. What sample sizes/compute requirements am I underestimating?
5. What will the reviewers' first objection be?

Follow-up prompts:

What's the simplest ablation that would strengthen causal claims?
How do I handle [specific dataset limitation]?
What's the worst-case interpretation of positive results?
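Point 4 above, underestimated sample sizes, is cheap to sanity-check by simulation before you spend compute. A minimal sketch assuming a one-sided paired sign test and independent per-item errors (often optimistic for real benchmarks); the accuracy numbers in the usage example are placeholders:

```python
import random

def power_estimate(n_items, p_base, p_new, n_trials=2000, seed=0):
    """Monte Carlo estimate of the power of a one-sided paired sign test
    to detect an accuracy gap between two systems scored on the same
    n_items evaluation examples."""
    rng = random.Random(seed)
    detections = 0
    for _ in range(n_trials):
        wins = losses = 0
        for _ in range(n_items):
            base_ok = rng.random() < p_base  # baseline gets item right
            new_ok = rng.random() < p_new    # new method gets item right
            if new_ok and not base_ok:
                wins += 1
            elif base_ok and not new_ok:
                losses += 1
        discordant = wins + losses
        if discordant == 0:
            continue
        # normal approximation to the sign test on discordant pairs
        z = (wins - discordant / 2) / (discordant / 4) ** 0.5
        if z > 1.645:  # one-sided alpha = 0.05
            detections += 1
    return detections / n_trials
```

For example, `power_estimate(200, 0.70, 0.75)` estimates how often a 5-point accuracy gap would reach significance with only 200 evaluation examples; if the answer is well below 0.8, your eval set is too small for the effect you expect.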

4. Technical Writing Assistance

When to use: Drafting papers, clarifying explanations, improving flow.

Prompt pattern:

I need help with [section of a paper]:

Section: [intro / related work / method / results / discussion]
Target venue: [conference or journal]
Current draft:

[paste your text]

Help me:
1. What's the clearest way to structure this section?
2. Where am I being unclear to someone outside my subfield?
3. What's the key claim and is it stated clearly enough?
4. What can I cut without losing content?
5. Where do I need more detail vs. less?

Follow-up prompts:

This paragraph buries the important point. Restructure it.
A reviewer says this is unclear: [specific passage]. Rewrite for clarity.
Make this sound less like I'm hedging while maintaining accuracy.

Critical Note: The writing must ultimately be yours. Use Alex for structure and clarity feedback, but any scientific claims must reflect your genuine understanding.


5. Mathematical Formulation and Notation

When to use: When you’re wrestling with how to formalize an idea.

Prompt pattern:

I want to formalize [concept or intuition].

The intuitive version: [describe in words what you mean]
What I want to capture: [the key property or relationship]
What I want to avoid: [common formalizations that don't fit]
Context: [the broader framework this will fit into]

Help me:
1. What's a clean mathematical framing?
2. What notation is standard in this area?
3. What similar concepts exist and how are they formalized?
4. What are the edge cases this formalization handles (or doesn't)?
5. What assumptions does this formalization bake in?

Follow-up prompts:

This is too complicated. What's the simpler version that captures most of what matters?
How do I extend this formalization to handle [additional constraint]?
What's the connection between this and [related concept]?

Critical Note: Always verify mathematical derivations independently. LLMs make algebraic errors that look correct.
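One cheap habit that catches many such errors: before trusting a claimed identity, evaluate both sides numerically at random inputs. A minimal sketch, using the shift invariance of log-sum-exp as the example identity:

```python
import math
import random

def lse(xs):
    """Numerically stable log-sum-exp."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def spot_check(identity, n_trials=100, seed=0):
    """Evaluate both sides of a claimed identity at random inputs.
    Returns True only if they agree (to tolerance) on every trial."""
    rng = random.Random(seed)
    for _ in range(n_trials):
        xs = [rng.uniform(-10, 10) for _ in range(5)]
        c = rng.uniform(-10, 10)
        lhs, rhs = identity(xs, c)
        if not math.isclose(lhs, rhs, rel_tol=1e-9, abs_tol=1e-9):
            return False
    return True

# Claimed identity: lse(x + c) == lse(x) + c  (shift invariance)
shift_invariance = lambda xs, c: (lse([x + c for x in xs]), lse(xs) + c)
```

A hundred random agreements don't prove an identity, but a single disagreement disproves it, and that single disagreement is exactly what plausible-looking algebraic errors produce.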


6. Rebuttal and Response Writing

When to use: Responding to peer reviews. Crafting rebuttals.

Prompt pattern:

I received this review comment:

[paste the reviewer comment]

My paper claims: [the relevant claim they're questioning]
My evidence: [what supports this claim]
My honest assessment: [do they have a point?]

Help me draft a response that:
1. Acknowledges what's valid in their concern
2. Explains what we actually did/claimed (if they misread)
3. Addresses the concern concretely (not vaguely)
4. Proposes an additional experiment/analysis if warranted
5. Maintains professional, respectful tone

Follow-up prompts:

They're wrong but I don't want to sound defensive. How do I push back diplomatically?
They're right. How do I acknowledge this while showing the contribution still holds?
This is a fatal flaw. Is the paper salvageable?

7. AI Systems as Research Subjects

When to use: When you’re studying the AI systems you’re using. Meta-experimentation.

Prompt pattern:

I'm studying [aspect of LLM behavior]:

Phenomenon: [what you've observed]
Why it matters: [implications for the field]
Challenge: [why this is hard to study]

Help me think about methodology:
1. What are the confounds in studying this behavior?
2. How do I distinguish capability from spurious patterns?
3. What prompting variations would strengthen causal claims?
4. What's the appropriate skepticism level for model introspection?
5. What related behavioral studies might inform methodology?

Follow-up prompts:

How do I test whether this is prompt-specific vs. robust?
What would a mechanistic explanation look like and how would I test it?
How do I report this without overclaiming what the model "knows" or "understands"?
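The first follow-up above (prompt-specific vs. robust) lends itself to a simple harness: ask the same question through several paraphrases and measure answer agreement. A minimal sketch with the model call stubbed out; `fake_model` is a hypothetical stand-in for whatever API you actually query:

```python
from collections import Counter

def robustness_report(prompts, get_response):
    """Query a model with paraphrased prompts and summarize answer agreement.

    get_response is whatever calls your model; stubbed below for illustration.
    Returns the majority answer and the fraction of paraphrases agreeing.
    """
    answers = [get_response(p) for p in prompts]
    counts = Counter(answers)
    majority, n = counts.most_common(1)[0]
    return {"majority": majority, "agreement": n / len(answers), "counts": dict(counts)}

# Hypothetical stub: a "model" whose answer flips on one surface feature.
def fake_model(prompt):
    return "yes" if "please" not in prompt.lower() else "no"

paraphrases = [
    "Is 17 prime?",
    "Tell me whether 17 is a prime number.",
    "Please answer: is 17 prime?",
    "17: prime or composite?",
]
report = robustness_report(paraphrases, fake_model)
```

Low agreement across semantically equivalent prompts is evidence that you are measuring a prompt artifact, not a capability; report the agreement rate alongside any headline number.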

Practice Progression

Week 1: Use Literature Discovery for an area you think you know well. Find something you missed.

Week 2: Run your next experimental design through the critique prompts before you start.

Week 3: Take a drafted section through Technical Writing Assistance. Compare before and after.

Week 4: Be explicit about where you used AI assistance in your research process. Develop your own norms.


What Great Looks Like

After consistent use, you should notice the difference in how you work.

The goal isn’t for Alex to do your research — it’s for Alex to help you do better research.


A Note on Disclosure

Norms around AI-assisted writing in AI research communities are still forming.

The field is watching how we handle this. Lead with integrity.