Lecture 13

Your Game: Driving the Build Yourself

Week 3 has no seed prompts. The design is now your problem. This lecture covers the skills the seed prompts were hiding — writing a spec, scoping a project you can finish, prompting across many files, and recovering when the AI goes off the rails.

Topics: spec writing · AI interviews · scope · vertical slices · debugging · final project
The Central Idea

In Weeks 1 and 2 you followed prompts someone else wrote. The game design was a given. Your job was to read what the AI produced, fix what was wrong, and extend it.

Week 3 is different. Nobody hands you a design. You write the spec, you choose the scope, you decide what the Model holds and what the Controller does. The AI is still your collaborator — but only as good as the direction you give it.

The skill this week is directing the AI, not just using it.

Two Kinds of AI Interface

Most of what follows works in either kind of tool, but the mechanics differ. Know which one you're in.

Chat AI

claude.ai, ChatGPT, Gemini in a browser. You paste code in, it gives code back, you paste it into your editor.

  • Best for: spec interviews, design discussions, "explain this error"
  • You manually move code in and out
  • The AI never touches your files
  • Drift is common — the AI's idea of your code lags reality
Agentic AI

GitHub Copilot agent mode, Claude Code, Cursor, Google Antigravity. The AI reads and edits files directly, runs commands, sees compile errors itself.

  • Best for: scoped edits, refactors, "make this test pass"
  • The AI sees your real files — no pasting needed
  • It can run the build and react to errors
  • It can also delete or rewrite more than you intended — review every diff
Most students will use both: chat for thinking, agentic for editing. The interview pattern in Part 3 is pure chat. The scoped-edit and refactor patterns later use either, but they look different in each.
Part 1

What's Actually Different About Week 3

The seed prompts are gone. Understanding what they were doing for you is the first step in replacing them.

Where Week 3 Projects Actually Fail

Common failure modes
  • The game was too ambitious — student spent three days on a menu system and never got to gameplay
  • The student described the game in vibes ("a fun puzzle game") and the AI built a different game each time it was asked
  • Files grew to 600 lines and the AI started forgetting what was in them
  • An early bug was patched over instead of fixed; later features built on the broken patch
  • No `ModelTester` was written, so bugs only surfaced through gameplay
What works instead
  • A small, well-defined game finished early, then polished
  • A written spec the AI can be re-grounded on
  • A playable vertical slice within the first hour
  • Scoped prompts that name the file and method to change
  • A `ModelTester` extended with each new behavior
Finished and small beats ambitious and broken. Every time.
Part 2

Scope — The Failure Mode No One Warns You About

Most failed Week 3 projects are not failures of code. They are failures of scope.

The Scope Thermometer

Safe — pick this
  • Pong
  • Breakout
  • Asteroids (simple)
  • Snake variants

One core mechanic. Few entity types. Finishable in a week with time left for polish.

Real but doable
  • Frogger
  • Tetris
  • Missile Command
  • Centipede

Multiple entity types or a non-trivial rule (line-clearing, lane logic). Possible if you start early.

Danger zone
  • Pac-Man with 4 ghost AIs
  • Platformer with levels
  • RPG with inventory
  • Multiplayer anything

Possible — but most students who start here finish nothing playable. Cut the scope, then cut again.

Rule of thumb: if you cannot describe the core gameplay loop in two sentences, your idea is too big. Cut until you can.

How to Cut Scope: A Worked Example

Original idea

"Pac-Man with four ghosts that each have different personalities, fruit bonus items, multiple levels, a high score table, and power pellets that let Pac-Man eat ghosts."

Five major systems. Each one is a multi-day project. None of them are the core game.

Cut down

"Pac-Man eats pellets in a fixed maze. One ghost chases him by moving toward his current cell. Game ends when all pellets are gone (win) or the ghost touches Pac-Man (lose)."

One entity type beyond the player. One rule per system. Win and lose conditions both stated. This is the MVP.

The original idea is not gone — it is the stretch goal. Build the cut-down version first. Add features only after it works end-to-end.
Part 3

Let the AI Interview You

Before asking the AI to write anything, ask it to ask you questions. The AI is more useful as an interviewer than as an oracle.

The Interview Prompt

I want to build a [game name] in Java with Swing using MVC. Before we write any code, interview me. Ask me one question at a time about gameplay, controls, win and loss conditions, and what should be on the screen. After about 8 questions, summarize what I told you as a one-page spec organized by Model, View, and Controller.

Why "one at a time"

A wall of ten questions gets a wall of ten half-thought answers. One question at a time forces a real decision before the next one lands, the same way a good interviewer never reads out every question up front.

Why "summarize as a spec"

The summary is structured around MVC because the rest of your project is structured around MVC. The interview ends with an artifact you can immediately use — not just a chat transcript.

A Real Interview — Excerpt

AI

In your Frogger, what happens when the frog is hit by a car — does it lose a life and respawn at the start, or is the game over immediately?

You

Loses a life and respawns. Three lives total.

AI

Got it. When the frog is on a log floating across water, does it move with the log automatically, or does the player still have to press keys to keep up?

You

It moves with the log. If the log carries the frog off-screen, that's a death.

AI

Good — that means the Model needs to know which lane the frog is in and whether that lane has a "carrier" object under it. Next question: how many lanes total, and in what order?

Notice: by question 3, the AI is already starting to think in terms of the Model. That is the whole point — the interview shapes the design before any code exists to constrain it.

Variations Worth Knowing

The skeptic

"Now play devil's advocate. What is still underspecified in what I told you? What will be ambiguous when I start coding?"

Use after the spec summary. Catches the gaps you didn't notice.

The scope reviewer

"Given a one-week deadline and that I'm a beginner, which features in this spec should I cut for an MVP? Which are stretch goals?"

Trades on the AI's pattern-matching: it has seen many project plans and knows which features to cut first.

The user simulator

"Pretend you are a player who has just opened my game. Walk through your first 60 seconds. Where do you get confused? What do you want to do that the spec doesn't allow?"

Reveals missing onboarding, missing pause keys, missing restart logic.

The MVC auditor

"Look at my spec and tell me, for each rule, which MVC layer it belongs in. Flag any rule that seems to live in two layers."

Surfaces Model/View confusion before it ends up in code.

Part 4

From Interview to Written Spec

The spec lives in your README.md. It is a contract with yourself — and a context document you will paste to the AI repeatedly.

What a Good Spec Looks Like


# Frogger — Spec

## Gameplay
A frog crosses a road and a river. The road has
cars; getting hit costs a life. The river has logs;
the frog must ride them to cross. Three lives.
Reach the top to win the level.

## Model — FrogModel.java
- frog x, y (grid coords)
- lives (int, starts at 3)
- score (int)
- 5 lanes, each a List<LaneObject>
- gameState: PLAYING / WON / LOST

## View — FrogView.java
- draws frog as green square
- draws each lane (cars red, logs brown)
- draws lives count and score top-left
- draws "GAME OVER" / "YOU WIN" centered

## Controller — FrogController.java
- arrow keys move frog one cell
- Swing Timer ticks every 100ms
- on tick: advance lanes, check collisions
- R key restarts after game over

## "Done" for this week
Frog can cross both road and river. Lives work.
Win and lose screens appear. R restarts.
      
Why this spec works
  • Each layer's data is enumerated
  • "Done" is concrete and testable
  • Stretch features are not here — they go in a separate section if at all
  • Short enough to paste into a prompt as context
  • Specific enough that the AI cannot drift into building a different game
Your spec is your most-pasted document this week. Treat it as code.

Bad Spec vs Good Spec

Bad

"A fun puzzle game with cool levels and good controls. The player solves puzzles to advance. There will be a score and probably some kind of timer. I want it to feel polished."

Every sentence is a vibe, not a fact. The AI will invent five different games across five prompts.

Good

"Sokoban on a 10x10 grid. Player pushes boxes onto target tiles. Player cannot pull. Boxes cannot be pushed into walls or other boxes. Level is won when every target tile has a box on it. One hard-coded level. Arrow keys to move. R to restart."

Every sentence is a rule the AI can implement and you can test.
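A spec this concrete maps almost line-for-line onto Model code. As an illustration, here is a minimal sketch of the push rule; the class name, the char-grid encoding ('#' wall, '$' box), and the coordinates are assumptions for this example, not part of the spec:

```java
// SokobanRules.java -- hypothetical sketch of the "push, never pull" rule.
// Assumes a char grid: '#' is a wall, '$' is a box, anything else is walkable.
public class SokobanRules {

    // Can the player, standing at (px, py), move by (dx, dy)?
    // True if the target cell is free, or holds a box that can itself
    // be pushed one cell further into a free cell.
    public static boolean canMove(char[][] grid, int px, int py, int dx, int dy) {
        int tx = px + dx, ty = py + dy;            // cell the player steps into
        if (grid[ty][tx] == '#') return false;     // wall blocks the player
        if (grid[ty][tx] == '$') {                 // a box: check the cell beyond it
            int bx = tx + dx, by = ty + dy;
            return grid[by][bx] != '#' && grid[by][bx] != '$';
        }
        return true;                               // empty floor
    }

    public static void main(String[] args) {
        char[][] level = {
            "#####".toCharArray(),
            "#.$.#".toCharArray(),
            "#.$$#".toCharArray(),
            "#####".toCharArray(),
        };
        // player at (1,1) pushes right: box at (2,1), floor at (3,1) -> allowed
        System.out.println(canMove(level, 1, 1, 1, 0));  // true
        // player at (1,2) pushes right: box at (2,2), box at (3,2) -> blocked
        System.out.println(canMove(level, 1, 2, 1, 0));  // false
    }
}
```

Every rule in the good spec above ("cannot pull", "cannot push into walls or other boxes") is a testable branch here. The vibes spec gives the AI nothing this checkable.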

Part 5

Build a Vertical Slice First

A playable bad game beats an unplayable good one. Get something on screen that responds to input within the first hour.

Suggested Build Order

  1. Spec the game — one page, in README.md
  2. Generate three class shells — Model, View, Controller, compiles, opens window
  3. Get one entity drawn — the player, in the right place
  4. Get one input working — arrow keys move the player
  5. Add the timer — one Model thing updates each tick
  6. Add the core mechanic — collision, scoring, whatever the game is about
  7. Add win and lose conditions — game cannot run forever
  8. Add restart — R key resets state
  9. Now add features and polish
If you stop at step 8

You have a complete, defensible Week 3 submission. It is small but it is whole. That is a B+ project.

Steps 1–8 in one hour

Aggressive, but possible if your spec is tight and your prompts are scoped. Most students who hit step 8 quickly end up with the strongest projects — they have time left for the things that make a game feel finished.

If you are on day three and still on step 4, your scope is too big. Cut, do not push through.
Part 6

Prompt Strategies When There Are No Seed Prompts

Five prompt patterns that replace what the seed prompts gave you in Weeks 1 and 2.

Pattern 1 — The Spec Dump

I am building [game] in Java with Swing using MVC. Here is my spec:

[paste your full spec from README.md]

Generate three class shells — GameModel.java, GameView.java, GameController.java — with method stubs based on this design. GameModel must not import any Swing classes. The program should compile and open a blank window.

Use this once at the very start. The spec dump is what gives the AI a shared mental model with you. Don't paste the spec every time — only when you are starting fresh or correcting drift.

Pattern 2 — The Scoped Edit

In FrogModel.java, add a method advanceLanes() that moves every object in every lane by its speed field. Wrap objects around the screen edges. Do not modify FrogView or FrogController. Show me only the new method — I will paste it in.

  • Names one file
  • Names one method
  • Forbids changes elsewhere
  • Returns a snippet, not a rewrite
Most of your prompts in Week 3 should look like this. Scoped edits prevent the AI from "helpfully" rewriting code you didn't ask it to touch. In agentic tools (Copilot agent, Claude Code, Antigravity), the AI will edit the file directly — read the diff before accepting, and undo immediately if it touched anything outside the named scope.
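For reference, the method that scoped edit asks for might come back looking roughly like this. This is a sketch: the `LaneObject` fields and the 800-pixel screen width are assumptions, not course-given code.

```java
import java.util.List;

// Sketch of the kind of method the scoped-edit prompt above asks for.
public class FrogModelSketch {
    public static final int SCREEN_WIDTH = 800;

    public static class LaneObject {
        public int x, width, speed;   // speed may be negative (leftward lane)
        public LaneObject(int x, int width, int speed) {
            this.x = x; this.width = width; this.speed = speed;
        }
    }

    // Move every object in every lane by its speed field,
    // wrapping around the screen edges.
    public static void advanceLanes(List<List<LaneObject>> lanes) {
        for (List<LaneObject> lane : lanes) {
            for (LaneObject obj : lane) {
                obj.x += obj.speed;
                if (obj.x > SCREEN_WIDTH) obj.x = -obj.width;     // fell off the right
                if (obj.x + obj.width < 0) obj.x = SCREEN_WIDTH;  // fell off the left
            }
        }
    }

    public static void main(String[] args) {
        LaneObject car = new LaneObject(795, 40, 10);
        advanceLanes(List.of(List.of(car)));
        System.out.println(car.x);   // -40: wrapped to just off the left edge
    }
}
```

Note what is not here: no Swing imports, no drawing, no key handling. A well-scoped prompt tends to produce well-scoped code.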

Pattern 3 — The Paste-the-File Prompt

Here is my current FrogModel.java in full:

[paste the entire file]

I want to add a power-up that gives the frog two extra lives. Where in this file should the new state live, and what method should add it? Show me the diff, not a rewrite of the whole file.

Use this when the AI's idea of your file has drifted (you can tell — it references methods that don't exist, or forgets fields you added). Pasting the file resets its mental state. In agentic tools you usually don't need to paste — the AI reads the file itself. But you may still need to name it explicitly ("look at FrogModel.java first") if the tool doesn't pick it up automatically.

Pattern 4 — The Error-First Prompt

I am getting this exception when I press the spacebar:

Exception in thread "AWT-EventQueue-0" java.lang.NullPointerException
        at FrogController.keyPressed(FrogController.java:47)
        at java.desktop/java.awt.Component.processKeyEvent(...)

Here is the relevant method from FrogController.java:

[paste the keyPressed method]

What is null on line 47, and why?

Bad: "it's broken, fix it." Good: stack trace, the relevant code, a hypothesis or specific question. The AI is far better at debugging when given the same context a human debugger would want. In agentic tools you can often just paste the trace and say "find and fix it" — but you'll still get better results if you add a hypothesis. The agent reads code; it does not read your mind.

Pattern 5 — The Test Extension

Here is my ModelTester.java:

[paste it]

Add three new test methods that verify: (1) the frog cannot move off the bottom edge, (2) crossing into the river without a log decrements lives, (3) reaching the top row sets gameState to WON. Use the same check(name, condition) helper. Do not change the existing tests.

From Lecture 12. Every new Model behavior should ship with a new test. The AI is good at writing these once you've shown it the pattern.
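If your ModelTester does not have the helper yet, one minimal version of `check(name, condition)` looks like this. A sketch in the spirit of Lecture 12, not the course's exact code:

```java
// Minimal sketch of the check(name, condition) helper the prompt refers to.
// Counters let main() print a pass/fail summary at the end of a run.
public class ModelTesterSketch {
    private static int passed = 0, failed = 0;

    public static void check(String name, boolean condition) {
        if (condition) { passed++; System.out.println("PASS  " + name); }
        else           { failed++; System.out.println("FAIL  " + name); }
    }

    public static int getFailed() { return failed; }

    public static void main(String[] args) {
        check("example: 2 + 2 == 4", 2 + 2 == 4);
        System.out.println(passed + " passed, " + failed + " failed");
    }
}
```

Once the AI has seen this pattern, the three tests in the prompt above come back in the same shape, and the whole suite stays runnable with one `main`.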

Part 7

Managing the AI's Context Drift

After 30+ messages the AI forgets your design. Recognizing this — and recovering from it — is a Week 3 skill.

Symptoms of Context Drift

  • The AI adds Swing imports back to your Model
  • It renames a method you've been using for the last hour
  • It "helpfully" rewrites a method you didn't ask it to touch
  • It references a method that doesn't exist in your file
  • It forgets a rule from your spec ("the frog moves with the log")
  • It produces code that compiles but does the wrong thing
Defenses
  • Re-paste the spec (or, in agentic tools, point at README.md) at the top of important prompts
  • Paste the actual file when the AI's mental model is off — or in agentic tools, tell it to re-read the file
  • Ask for diffs, not whole files — easier to review, less drift
  • Read every change before accepting it — pasting in chat, or approving a diff in agentic tools
  • Start a fresh chat / new agent session when the AI is arguing with its own earlier wrong answer
If a chat has gone in circles for 5+ turns, the chat itself is the problem. Open a new one.

The Fresh-Chat Restart Recipe

I am partway through building [game] in Java with Swing using MVC.

Here is my spec:

[paste spec]

Here are my current files:

[paste FrogModel.java]
[paste FrogView.java]
[paste FrogController.java]

Right now I am stuck on [specific problem]. Help me with that and only that.

A fresh chat with full context will out-perform a long chat with stale context almost every time. Don't be afraid to restart. Save the previous chat's key prompts to PROMPTS.md before you do.

Part 8

Debugging Your Own Game

When the bug is yours and there's no seed prompt to fall back on, you need a debugging method.

Bad Debug vs Good Debug

Bad

"my game is broken can u fix it"

No file, no error, no hypothesis. The AI guesses, you paste the guess, the new bug is worse than the old one.

Good

"The ball passes through the paddle when moving fast. I think this is a tunneling bug — the ball moves more than its own width per tick, so the collision check at the new position misses. Here is my checkPaddleCollision method. Is my hypothesis right, and how would I fix it?"

Names the symptom, names the suspected cause, asks a specific question. The AI now has somewhere useful to start.

When the Bug Is in the Model — Use the Tester


// add this BEFORE asking the AI to fix anything
public static void testBallPaddleCollision() {
    GameModel m = new GameModel();
    m.setBall(100, 200, 0, 30);   // ball moving down 30 px/tick
    m.setPaddle(80, 210, 60, 10); // paddle just below it
    m.tick();
    check("ball bounces off paddle when moving fast",
          m.getBallVY() < 0);
}
  
A failing test is a more precise bug report than any sentence you can write. If you can reproduce the bug in ModelTester, the AI can fix it without ever seeing the View or Controller.
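For the tunneling bug specifically, one common fix is a swept check: test whether the paddle's surface lies anywhere between the ball's old and new position, instead of only testing the new position. A minimal sketch (the names are assumptions, and the x-overlap check is omitted for brevity):

```java
// Sketch of a swept collision check for the tunneling bug described above.
// Instead of asking "is the ball touching the paddle now?", ask "did the
// ball's bottom edge cross the paddle's top edge during this tick?"
public class TunnelFix {

    public static boolean crossedPaddle(int oldY, int newY,
                                        int ballSize, int paddleY) {
        int oldBottom = oldY + ballSize;   // ball bottom before the move
        int newBottom = newY + ballSize;   // ball bottom after the move
        // moving down, and the paddle surface lies inside the swept span
        return oldBottom <= paddleY && newBottom >= paddleY;
    }

    public static void main(String[] args) {
        // ball jumps 30 px in one tick (170 -> 200), paddle top at 210:
        // bottom sweeps 190 -> 220, crossing 210, so the hit is caught
        // even though no single position overlapped the paddle.
        System.out.println(crossedPaddle(170, 200, 20, 210));  // true
        System.out.println(crossedPaddle(170, 175, 20, 210));  // false
    }
}
```

This is exactly the kind of fix the "good debug" prompt above tends to get back, because the hypothesis ("moves more than its own width per tick") points the AI at the swept-check family of solutions.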
Part 9

Refactoring as the File Grows

When GameModel.java hits 300 lines, the AI starts forgetting things. Time to extract.

When and What to Extract

Signs you should refactor
  • One file is over 300 lines
  • The AI forgets fields you added an hour ago
  • You have to scroll to find anything
  • One concept (Ball, Brick, Ghost) appears in dozens of places
Common extractions
  • An Entity class for things with x, y, width, height
  • A Sprite class for image-loading logic
  • A Level class for level layout data
  • A Sound helper for audio
  • An InputState class held by the Controller
Refactor only after the game works. Never refactor while debugging — you cannot tell whether your bug fix worked or whether the refactor introduced new bugs.

The Refactor Prompt

Here is my BreakoutModel.java:

[paste full file]

The Brick concept appears in many places — its position, its color, whether it's destroyed. Extract a Brick class in a new file Brick.java with fields and a constructor. Update BreakoutModel to use a List<Brick> instead of parallel arrays. Show me both files in full. Do not change game behavior — this is a pure refactor.

"Pure refactor — do not change behavior" is critical. Without it the AI may "improve" your logic and break things. After the refactor, run your ModelTester — if all tests still pass, the refactor was clean. Agentic tools shine here: a single prompt can edit multiple files, run the tester, and report back. Commit before you start so you can revert in one command if it goes sideways.

Part 10

Polish on a Budget

Two well-chosen polish features make a game feel finished. Five half-finished ones make it feel broken. Pick two.

High-Leverage Polish

Disproportionate impact
  • Title screen with the game name and "Press SPACE to start"
  • Pause key (P) that freezes the timer
  • One sound effect on the most important event (hit, score, death)
  • Persistent high score — read and write a single int from a text file
  • A clean death and restart cycle — game over, R restarts, no leftover state
Low leverage — skip
  • Custom fonts
  • Animated sprites with multiple frames
  • Particle effects
  • Settings menus
  • Difficulty levels (unless this is the core mechanic)

Low-leverage polish takes hours and rarely shows up in a 5-minute demo.
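Of the high-leverage items, the persistent high score is the one students know least. It is about ten lines: one int, one text file. A sketch, with `highscore.txt` as an assumed filename; a missing or corrupt file just means the high score starts at 0.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of the "persistent high score" polish item: one int, one file.
public class HighScore {
    private static final Path FILE = Path.of("highscore.txt");

    public static int load() {
        try {
            return Integer.parseInt(Files.readString(FILE).trim());
        } catch (IOException | NumberFormatException e) {
            return 0;   // first run, or an unreadable file: start fresh
        }
    }

    public static void save(int score) {
        try {
            Files.writeString(FILE, Integer.toString(score));
        } catch (IOException e) {
            e.printStackTrace();   // non-fatal: the game keeps running
        }
    }

    public static void main(String[] args) {
        // typical use at game over: keep whichever score is higher
        save(Math.max(load(), 1200));
        System.out.println("high score: " + load());
    }
}
```

Call `load()` once at startup and `save(...)` at game over, both from the Controller. The Model just holds the int.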

The Title Screen Pattern


// in GameModel
public enum State { TITLE, PLAYING, GAME_OVER }
private State state = State.TITLE;

public void startGame() { state = State.PLAYING; }
public State getState() { return state; }
  

// in GameView.paintComponent
switch (model.getState()) {
    case TITLE     -> drawTitle(g);
    case PLAYING   -> drawGame(g);
    case GAME_OVER -> { drawGame(g); drawGameOver(g); }
}
  

A state enum in the Model lets the View pick what to draw without holding any logic of its own. Adding a pause state later is a one-line change.
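To make that concrete, here is a sketch of the same idea with a PAUSED state added (the names are illustrative): the Swing Timer keeps firing, but the Model simply ignores ticks while paused.

```java
// Sketch of extending the state enum with PAUSED. The Controller maps
// the P key to togglePause(); the tick guard is a single line.
public class PauseSketch {
    public enum State { TITLE, PLAYING, PAUSED, GAME_OVER }

    private State state = State.TITLE;
    private int ticks = 0;   // stand-in for "the game world advanced"

    public void startGame() { state = State.PLAYING; }

    // P key: toggle between PLAYING and PAUSED; ignored in other states.
    public void togglePause() {
        if (state == State.PLAYING)     state = State.PAUSED;
        else if (state == State.PAUSED) state = State.PLAYING;
    }

    // The timer keeps firing, but the Model only advances while PLAYING.
    public void tick() {
        if (state != State.PLAYING) return;
        ticks++;
    }

    public State getState() { return state; }
    public int getTicks()   { return ticks; }

    public static void main(String[] args) {
        PauseSketch game = new PauseSketch();
        game.startGame();
        game.tick();
        game.togglePause();
        game.tick();   // ignored while paused
        System.out.println(game.getState() + " after " + game.getTicks() + " tick(s)");
    }
}
```

The View needs one more switch case ("PAUSED" text over the frozen game), and nothing else changes.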

Part 10b

Adding Real Art (Optionally)

Free, legal sprites are one download away. Swapping rectangles for art is a small View change — the Model never knows.

Where to Get Art

OpenGameArt.org

Huge library of free, community-contributed game art. Filter by license — most assets are CC0 (no attribution required) or CC-BY (credit the artist).

  • Search "2D" + your game type
  • Prefer PNG with transparency
  • For CC-BY, credit the artist in your README.md
Kenney.nl

Curated asset packs aimed at beginners. Everything is CC0. Cleaner and more consistent than OpenGameArt, smaller selection.

  • Asset packs are zipped and themed
  • Great for "I need a paddle, ball, and bricks" needs
  • No attribution needed (but it's polite)
Browsing for the perfect sprite is a rabbit hole. Timebox it to 20 minutes. Pick something good enough and move on — you can always swap art later, and your grade does not depend on visual taste.

The Asset Workflow

  1. Make an assets/ folder in your repo, alongside your .java files
  2. Download a PNG (use PNG — JPG has no transparency and will draw ugly white squares)
  3. Drop it in assets/ and commit it to Git
  4. Load it once in the View's constructor — never inside paintComponent, which runs every frame
  5. Draw it with g.drawImage(...) using the Model's coordinates
  6. If you used a CC-BY asset, add an Art credits section to your README.md
The Model never changes. alien.x and alien.y are still the same integers — the View just decides whether to draw a green rectangle or a sprite at that position. This is the MVC payoff.

The Code Change — That's It

Before

// in GameView
public void paintComponent(Graphics g) {
    super.paintComponent(g);
    for (Alien a : model.getAliens()) {
        g.setColor(Color.GREEN);
        g.fillRect(a.x, a.y, 30, 30);
    }
}
      
After

// in GameView
private Image alienSprite;

public GameView(GameModel model) {
    this.model = model;
    try {
        alienSprite = ImageIO.read(
            new File("assets/alien.png"));
    } catch (IOException e) {
        e.printStackTrace();
    }
}

public void paintComponent(Graphics g) {
    super.paintComponent(g);
    for (Alien a : model.getAliens()) {
        g.drawImage(alienSprite,
            a.x, a.y, 30, 30, null);
    }
}
      

No Model changes. No Controller changes. The four-argument drawImage would use the sprite's natural size — the six-argument version (with w, h) scales it to match your grid.

Things That Will Bite You

  • Loading every frame. If ImageIO.read is inside paintComponent, your game will run at 2 FPS. Load in the constructor.
  • JPG instead of PNG. JPG has no alpha channel — you'll get a white box around your sprite. Always use PNG.
  • Size mismatch. A 64×64 sprite drawn at 30×30 may look fuzzy. Either pick assets near your grid size, or scale explicitly with drawImage(img, x, y, w, h, null).
  • File not found. Run your game from the project root, not from src/. The path is relative to where Java was launched.
Sprite sheets — stretch only

Many OpenGameArt downloads are sprite sheets: one PNG with many frames in a grid. Using them requires img.getSubimage(...) and a frame counter for animation.

For a Week 3 MVP, use single-image-per-entity. Treat sprite-sheet animation as a stretch goal after the game is fully playable.

If your game does not yet have a working game-over and restart cycle, you should not be looking at sprite sheets.
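If you do reach that stretch goal, the core call is `getSubimage`. A sketch, assuming a grid-layout sheet with 32-pixel frames (real packs document their frame size); the programmatically created image here stands in for an `ImageIO.read` call so the example is self-contained.

```java
import java.awt.image.BufferedImage;

// Sketch of cutting one frame out of a sprite sheet with getSubimage.
// The 32-pixel frame size is an assumption for this example.
public class SpriteSheetSketch {
    public static final int FRAME = 32;

    // Frame at (col, row) of a grid-layout sheet.
    public static BufferedImage frame(BufferedImage sheet, int col, int row) {
        return sheet.getSubimage(col * FRAME, row * FRAME, FRAME, FRAME);
    }

    public static void main(String[] args) {
        // stand-in for: ImageIO.read(new File("assets/sheet.png"))
        BufferedImage sheet =
            new BufferedImage(4 * FRAME, 2 * FRAME, BufferedImage.TYPE_INT_ARGB);
        BufferedImage f = frame(sheet, 2, 1);
        System.out.println(f.getWidth() + "x" + f.getHeight());  // 32x32
    }
}
```

For animation you would also keep a frame counter in the View and advance it on repaint; that, not the cutting, is where the complexity lives.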
Part 11

PROMPTS.md as a Record of Judgment

This week needs at least 10 prompts. The graders care less about the count than about the one labeled "this is where the AI was wrong."

What Goes in PROMPTS.md This Week

  • The interview that produced your spec
  • The spec-dump prompt that started the build
  • A scoped-edit prompt for one of your features
  • A debugging prompt with the stack trace
  • One prompt where the AI was wrong, and what you did
  • A refactor prompt (if you did one)
  • A test-extension prompt for ModelTester
For each entry
  • Copy the prompt verbatim
  • One sentence on what the AI produced
  • One sentence on what you changed and why
  • If it was wrong: what was wrong, how you noticed, what you did instead
Honesty matters more than polish. "I asked it to add bricks and it deleted my paddle code, so I reverted and asked again with 'do not modify the paddle'" is a great entry.

One Good Entry — Example

Prompt 6 — Adding ghost AI (first attempt)

Prompt: "Add a ghost that chases Pac-Man."

What it did: Generated a Ghost class and a movement method that called Math.atan2 with floating-point coordinates. My Model uses integer grid cells.

Why it was wrong: The AI assumed continuous movement because I never told it the maze was a grid. The ghost ended up between cells, which broke my collision check.

What I did: Re-prompted with: "My maze is a 20x15 grid of integer cells. The ghost should also live on grid cells, moving one cell per tick toward Pac-Man's current cell. Use Integer.signum(dx) and Integer.signum(dy), and prefer the longer axis when both are nonzero."

What I learned: The AI defaults to physics-style movement unless told otherwise. My spec didn't say "grid-based" loud enough.

An entry like this is worth five "it worked great!" entries combined. It shows judgment.
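For comparison, the grid-based chase described in that re-prompt fits in a few lines. The class and method names here are illustrative, not the AI's actual output; note that Java's integer version of signum is `Integer.signum`.

```java
// Sketch of a grid-based chase: one cell per tick toward the target,
// preferring the longer axis when both differences are nonzero.
public class GridChase {

    // Returns {dx, dy}: a one-cell step from (gx, gy) toward (px, py).
    public static int[] step(int gx, int gy, int px, int py) {
        int dx = px - gx, dy = py - gy;
        if (Math.abs(dx) >= Math.abs(dy) && dx != 0) {
            return new int[] { Integer.signum(dx), 0 };   // longer axis: x
        } else if (dy != 0) {
            return new int[] { 0, Integer.signum(dy) };   // longer axis: y
        }
        return new int[] { 0, 0 };                        // already there
    }

    public static void main(String[] args) {
        int[] s = step(0, 0, 5, 2);   // x gap is larger, so step right
        System.out.println(s[0] + "," + s[1]);  // 1,0
    }
}
```

Because the ghost only ever lands on integer cells, the existing grid-based collision check works unchanged, which is exactly what the floating-point version broke.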

Part 12

The 5-Minute Video Demo

A demo is not a screen recording of you playing. It is a guided tour. Plan it like a presentation.

A 5-Minute Demo, Minute by Minute

  • 0:00 – 0:30 · Show: title screen, README spec on screen. Say: "This is [game]. The core mechanic is X. Here's the spec I wrote."
  • 0:30 – 2:30 · Show: the game being played, main mechanic working. Say: "Notice how the player can do X, the enemy responds with Y, and Z triggers when…"
  • 2:30 – 3:30 · Show: an edge case (die on purpose, win on purpose, hit a corner case). Say: "Here's what happens when you lose / win / hit the boundary."
  • 3:30 – 4:30 · Show: VS Code, pointing to one Model method and one View method. Say: "This rule lives in the Model because… This drawing code lives in the View because…"
  • 4:30 – 5:00 · Show: one slide or one sentence. Say: "The hardest part was X. The thing I would change with more time is Y."
Do not narrate every line of code. Pick one method per layer and explain why it lives there. The grader is checking that you understand the structure, not that you can read aloud.
Part 13

What You Actually Learned This Semester

The course was not really about Java or Swing. Step back and look at what you can do now.