Week 3 has no seed prompts. The design is now your problem. This lecture covers the skills the seed prompts were hiding — writing a spec, scoping a project you can finish, prompting across many files, and recovering when the AI goes off the rails.
In Weeks 1 and 2 you followed prompts someone else wrote. The game design was a given. Your job was to read what the AI produced, fix what was wrong, and extend it.
Week 3 is different. Nobody hands you a design. You write the spec, you choose the scope, you decide what the Model holds and what the Controller does. The AI is still your collaborator — but only as good as the direction you give it.
The skill this week is directing the AI, not just using it.
Most of what follows works in either kind of tool, but the mechanics differ. Know which one you're in.
Chat tools: claude.ai, ChatGPT, Gemini in a browser. You paste code in, it gives code back, you paste it into your editor.
Agentic tools: GitHub Copilot agent mode, Claude Code, Cursor, Google Antigravity. The AI reads and edits files directly, runs commands, and sees compile errors itself.
The seed prompts are gone. Understanding what they were doing for you is the first step in replacing them.
Most failed Week 3 projects are not failures of code. They are failures of scope.
Small scope: one core mechanic, a few entity types, finishable in a week with time left for polish.
Medium scope: multiple entity types or a non-trivial rule (line-clearing, lane logic). Possible if you start early.
Large scope: possible, but most students who start here finish nothing playable. Cut the scope, then cut again.
Too big: "Pac-Man with four ghosts that each have different personalities, fruit bonus items, multiple levels, a high score table, and power pellets that let Pac-Man eat ghosts."
Five major systems. Each one is a multi-day project. None of them are the core game.
Right-sized: "Pac-Man eats pellets in a fixed maze. One ghost chases him by moving toward his current cell. Game ends when all pellets are gone (win) or the ghost touches Pac-Man (lose)."
One entity type beyond the player. One rule per system. Win and lose conditions both stated. This is the MVP.
Before asking the AI to write anything, ask it to ask you questions. The AI is more useful as an interviewer than as an oracle.
I want to build a [game name] in Java with Swing using MVC. Before we write any code, interview me. Ask me one question at a time about gameplay, controls, win and loss conditions, and what should be on the screen. After about 8 questions, summarize what I told you as a one-page spec organized by Model, View, and Controller.
A wall of ten questions gets a wall of ten half-thought answers. One at a time forces a real decision before the next question lands. It is the same reason a good interviewer asks one question at a time instead of handing over the whole list.
The summary is structured around MVC because the rest of your project is structured around MVC. The interview ends with an artifact you can immediately use — not just a chat transcript.
In your Frogger, what happens when the frog is hit by a car — does it lose a life and respawn at the start, or is the game over immediately?
Loses a life and respawns. Three lives total.
Got it. When the frog is on a log floating across water, does it move with the log automatically, or does the player still have to press keys to keep up?
It moves with the log. If the log carries the frog off-screen, that's a death.
Good — that means the Model needs to know which lane the frog is in and whether that lane has a "carrier" object under it. Next question: how many lanes total, and in what order?
"Now play devil's advocate. What is still underspecified in what I told you? What will be ambiguous when I start coding?"
Use after the spec summary. Catches the gaps you didn't notice.
"Given a one-week deadline and that I'm a beginner, which features in this spec should I cut for an MVP? Which are stretch goals?"
Trades on the AI's pattern-matching: it has seen many failed projects and knows which features get cut first.
"Pretend you are a player who has just opened my game. Walk through your first 60 seconds. Where do you get confused? What do you want to do that the spec doesn't allow?"
Reveals missing onboarding, missing pause keys, missing restart logic.
"Look at my spec and tell me, for each rule, which MVC layer it belongs in. Flag any rule that seems to live in two layers."
Surfaces Model/View confusion before it ends up in code.
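To see what "a rule living in one layer" means in code, here is a hypothetical sketch. The class and method names (LifeRuleDemo, onHit) are mine, not the course's: the getting-hit rule sits entirely in the Model, and the View only reads the getters.

```java
// Hypothetical sketch: the "getting hit costs a life" rule lives in the
// Model, in exactly one place. The View only reads getLives()/isLost().
class LifeRuleDemo {
    private int lives = 3;
    private boolean lost = false;

    // The rule itself. If the View also decremented lives,
    // the rule would live in two layers at once.
    void onHit() {
        lives--;
        if (lives <= 0) lost = true;
    }

    int getLives() { return lives; }
    boolean isLost() { return lost; }
}
```

If the layer-check prompt flags a rule that needs both a field and a paint call, that is the smell it is looking for.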
The spec lives in your README.md. It is a contract with yourself — and a context document you will paste to the AI repeatedly.
# Frogger — Spec
## Gameplay
A frog crosses a road and a river. The road has
cars; getting hit costs a life. The river has logs;
the frog must ride them to cross. Three lives.
Reach the top to win the level.
## Model — FrogModel.java
- frog x, y (grid coords)
- lives (int, starts at 3)
- score (int)
- 5 lanes, each a List<LaneObject>
- gameState: PLAYING / WON / LOST
## View — FrogView.java
- draws frog as green square
- draws each lane (cars red, logs brown)
- draws lives count and score top-left
- draws "GAME OVER" / "YOU WIN" centered
## Controller — FrogController.java
- arrow keys move frog one cell
- Swing Timer ticks every 100ms
- on tick: advance lanes, check collisions
- R key restarts after game over
## "Done" for this week
Frog can cross both road and river. Lives work.
Win and lose screens appear. R restarts.
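A spec section like the Model list above translates almost line for line into fields. A sketch only; the LaneObject fields are assumptions that your interview answers would pin down.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a Model skeleton derived line-by-line from the spec above.
// LaneObject's fields are an assumption, not part of the spec.
class FrogModelSketch {
    enum GameState { PLAYING, WON, LOST }

    static class LaneObject {
        int x, width, speed;   // speed sign gives direction
    }

    int frogX, frogY;          // grid coords
    int lives = 3;             // starts at 3
    int score = 0;
    final List<List<LaneObject>> lanes = new ArrayList<>();
    GameState gameState = GameState.PLAYING;

    FrogModelSketch() {
        // 5 lanes, each a List<LaneObject>
        for (int i = 0; i < 5; i++) {
            lanes.add(new ArrayList<>());
        }
    }
}
```

Notice there is no Swing import anywhere; that is the contract the spec's Model section enforces.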
Vague: "A fun puzzle game with cool levels and good controls. The player solves puzzles to advance. There will be a score and probably some kind of timer. I want it to feel polished."
Every sentence is a vibe, not a fact. The AI will invent five different games across five prompts.
Precise: "Sokoban on a 10x10 grid. Player pushes boxes onto target tiles. Player cannot pull. Boxes cannot be pushed into walls or other boxes. Level is won when every target tile has a box on it. One hard-coded level. Arrow keys to move. R to restart."
Every sentence is a rule the AI can implement and you can test.
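Each sentence of the precise spec maps to a testable line of code. A sketch of the push rule under assumptions of mine: the char grid encoding is illustrative, and the target-tile bookkeeping for the win check is omitted for brevity.

```java
// Sketch of the Sokoban move rule stated in the spec above.
// The char encoding is an assumption; win-check bookkeeping is omitted.
class SokobanSketch {
    static final char WALL = '#', BOX = 'B', FLOOR = '.';
    final char[][] grid;   // grid[y][x]
    int px, py;            // player position

    SokobanSketch(char[][] grid, int px, int py) {
        this.grid = grid; this.px = px; this.py = py;
    }

    // "Player pushes boxes... cannot pull... boxes cannot be pushed
    // into walls or other boxes." Returns true if the player moved.
    boolean tryMove(int dx, int dy) {
        int nx = px + dx, ny = py + dy;
        if (grid[ny][nx] == WALL) return false;
        if (grid[ny][nx] == BOX) {
            int bx = nx + dx, by = ny + dy;
            if (grid[by][bx] == WALL || grid[by][bx] == BOX) return false;
            grid[by][bx] = BOX;    // push succeeds
            grid[ny][nx] = FLOOR;
        }
        px = nx; py = ny;
        return true;
    }
}
```

Every "cannot" in the spec became a guard clause you can unit-test.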
A playable bad game beats an unplayable good one. Get something on screen that responds to input within the first hour.
You have a complete, defensible Week 3 submission. It is small but it is whole. That is a B+ project.
Aggressive, but possible if your spec is tight and your prompts are scoped. Most students who hit step 8 quickly end up with the strongest projects — they have time left for the things that make a game feel finished.
Five prompt patterns that replace what the seed prompts gave you in Weeks 1 and 2.
I am building [game] in Java with Swing using MVC. Here is my spec:
[paste your full spec from README.md]
Generate three class shells — GameModel.java, GameView.java, GameController.java — with method stubs based on this design. GameModel must not import any Swing classes. The program should compile and open a blank window.
Use this once at the very start. The spec dump is what gives the AI a shared mental model with you. Don't paste the spec every time — only when you are starting fresh or correcting drift.
In FrogModel.java, add a method advanceLanes() that moves every object in every lane by its speed field. Wrap objects around the screen edges. Do not modify FrogView or FrogController. Show me only the new method — I will paste it in.
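For reference, a prompt like that might come back with something like the following sketch. SCREEN_WIDTH, the add helper, and the LaneObject fields are assumptions of mine; match them to your own Model before comparing.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of advanceLanes(). SCREEN_WIDTH, add(), and the LaneObject
// fields are assumptions, not the course's actual code.
class LaneDemo {
    static final int SCREEN_WIDTH = 400;

    static class LaneObject {
        int x, width, speed;
        LaneObject(int x, int width, int speed) {
            this.x = x; this.width = width; this.speed = speed;
        }
    }

    final List<List<LaneObject>> lanes = new ArrayList<>();

    void add(int laneIdx, LaneObject o) {
        while (lanes.size() <= laneIdx) lanes.add(new ArrayList<>());
        lanes.get(laneIdx).add(o);
    }

    void advanceLanes() {
        for (List<LaneObject> lane : lanes) {
            for (LaneObject obj : lane) {
                obj.x += obj.speed;
                if (obj.x > SCREEN_WIDTH) obj.x = -obj.width;     // wrap right to left
                if (obj.x + obj.width < 0) obj.x = SCREEN_WIDTH;  // wrap left to right
            }
        }
    }
}
```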
Here is my current FrogModel.java in full:
[paste the entire file]
I want to add a power-up that gives the frog two extra lives. Where in this file should the new state live, and what method should add it? Show me the diff, not a rewrite of the whole file.
Use this when the AI's idea of your file has drifted (you can tell — it references methods that don't exist, or forgets fields you added). Pasting the file resets its mental state. In agentic tools you usually don't need to paste — the AI reads the file itself. But you may still need to name it explicitly ("look at FrogModel.java first") if the tool doesn't pick it up automatically.
I am getting this exception when I press the spacebar:
Exception in thread "AWT-EventQueue-0" java.lang.NullPointerException
at FrogController.keyPressed(FrogController.java:47)
at java.desktop/java.awt.Component.processKeyEvent(...)
Here is the relevant method from FrogController.java:
[paste the keyPressed method]
What is null on line 47, and why?
Bad: "it's broken, fix it." Good: stack trace, the relevant code, a hypothesis or specific question. The AI is far better at debugging when given the same context a human debugger would want. In agentic tools you can often just paste the trace and say "find and fix it" — but you'll still get better results if you add a hypothesis. The agent reads code; it does not read your mind.
Here is my ModelTester.java:
[paste it]
Add three new test methods that verify: (1) the frog cannot move off the bottom edge, (2) crossing into the river without a log decrements lives, (3) reaching the top row sets gameState to WON. Use the same check(name, condition) helper. Do not change the existing tests.
From Lecture 12. Every new Model behavior should ship with a new test. The AI is good at writing these once you've shown it the pattern.
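What those three tests might look like, in the check(name, condition) style. The FrogModel below is a minimal stand-in of my own so the snippet runs by itself; in your project the tests would target your real Model.

```java
// Sketch of the three requested tests. The nested FrogModel is a
// stand-in with assumed fields (BOTTOM_ROW, nextRowIsRiver, logUnderFrog).
class ModelTesterSketch {
    static class FrogModel {
        enum GameState { PLAYING, WON, LOST }
        static final int BOTTOM_ROW = 9;    // assumption: 10-row grid
        int frogY = BOTTOM_ROW;
        int lives = 3;
        GameState gameState = GameState.PLAYING;
        boolean nextRowIsRiver = false;     // stand-in for real lane data
        boolean logUnderFrog = false;

        void moveDown() { if (frogY < BOTTOM_ROW) frogY++; }  // clamp at edge
        void moveUp() {
            frogY--;
            if (nextRowIsRiver && !logUnderFrog) lives--;     // no log: drown
            if (frogY == 0) gameState = GameState.WON;
        }
    }

    static int failures = 0;
    static void check(String name, boolean condition) {
        if (!condition) { failures++; System.out.println("FAIL: " + name); }
    }

    static void testBottomEdge() {
        FrogModel m = new FrogModel();
        m.moveDown();
        check("frog cannot move off the bottom edge",
              m.frogY == FrogModel.BOTTOM_ROW);
    }

    static void testRiverWithoutLog() {
        FrogModel m = new FrogModel();
        m.nextRowIsRiver = true;
        m.moveUp();
        check("river without a log decrements lives", m.lives == 2);
    }

    static void testWinAtTopRow() {
        FrogModel m = new FrogModel();
        m.frogY = 1;
        m.moveUp();
        check("top row sets gameState to WON",
              m.gameState == FrogModel.GameState.WON);
    }
}
```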
After 30+ messages the AI forgets your design. Recognizing this — and recovering from it — is a Week 3 skill.
Keep your spec (README.md) at the top of important prompts. A full context reset looks like this:
I am partway through building [game] in Java with Swing using MVC.
Here is my spec:
[paste spec]
Here are my current files:
[paste FrogModel.java]
[paste FrogView.java]
[paste FrogController.java]
Right now I am stuck on [specific problem]. Help me with that and only that.
A fresh chat with full context will outperform a long chat with stale context almost every time. Don't be afraid to restart. Save the previous chat's key prompts to PROMPTS.md before you do.
When the bug is yours and there's no seed prompt to fall back on, you need a debugging method.
"my game is broken can u fix it"
No file, no error, no hypothesis. The AI guesses, you paste the guess, the new bug is worse than the old one.
"The ball passes through the paddle when moving fast. I think this is a tunneling bug — the ball moves more than its own width per tick, so the collision check at the new position misses. Here is my checkPaddleCollision method. Is my hypothesis right, and how would I fix it?"
Names the symptom, names the suspected cause, asks a specific question. The AI now has somewhere useful to start.
```java
// add this BEFORE asking the AI to fix anything
public static void testBallPaddleCollision() {
    GameModel m = new GameModel();
    m.setBall(100, 200, 0, 30);    // ball moving down 30 px/tick
    m.setPaddle(80, 210, 60, 10);  // paddle just below it
    m.tick();
    check("ball bounces off paddle when moving fast",
          m.getBallVY() < 0);
}
```
Because the bug is reproduced inside ModelTester, the AI can fix it without ever seeing the View or Controller.
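If the hypothesis checks out, one common fix is sub-stepping: move the ball one pixel at a time so no single step can skip past the paddle. A sketch with the same numbers as the test above; all names here are illustrative, not your actual Model.

```java
// Sketch of a sub-stepping fix for tunneling. Field names and the
// hitsPaddle check are assumptions for illustration.
class TunnelFixDemo {
    int ballX = 100, ballY = 200, ballVX = 0, ballVY = 30;
    int paddleX = 80, paddleY = 210, paddleW = 60, paddleH = 10;

    boolean hitsPaddle(int x, int y) {
        return x >= paddleX && x <= paddleX + paddleW
            && y >= paddleY && y <= paddleY + paddleH;
    }

    void tick() {
        // one pixel per sub-step, so a 30 px/tick ball checks 30 positions
        int steps = Math.max(Math.abs(ballVX), Math.abs(ballVY));
        for (int i = 0; i < steps; i++) {
            ballX += Integer.signum(ballVX);
            ballY += Integer.signum(ballVY);
            if (hitsPaddle(ballX, ballY)) {
                ballVY = -Math.abs(ballVY);   // bounce upward
                break;
            }
        }
    }
}
```

Slower than one big move, but for a Swing game ticking every 100ms the cost is irrelevant and the bug is gone.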
When GameModel.java hits 300 lines, the AI starts forgetting things. Time to extract.
Good candidates to extract:
- an Entity class for things with x, y, width, height
- a Sprite class for image-loading logic
- a Level class for level layout data
- a Sound helper for audio
- an InputState class held by the Controller

Here is my BreakoutModel.java:
[paste full file]
The Brick concept appears in many places — its position, its color, whether it's destroyed. Extract a Brick class in a new file Brick.java with fields and a constructor. Update BreakoutModel to use a List<Brick> instead of parallel arrays. Show me both files in full. Do not change game behavior — this is a pure refactor.
"Pure refactor — do not change behavior" is critical. Without it the AI may "improve" your logic and break things. After the refactor, run your ModelTester — if all tests still pass, the refactor was clean. Agentic tools shine here: a single prompt can edit multiple files, run the tester, and report back. Commit before you start so you can revert in one command if it goes sideways.
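For orientation, the extracted class that prompt asks for might look like this sketch. The field set is a guess based on the prompt's wording (position, color, destroyed flag); your parallel arrays may hold different data.

```java
import java.awt.Color;

// Sketch of the extracted Brick class. The field set is an assumption
// taken from the prompt's wording, not from real course code.
class Brick {
    int x, y, width, height;
    Color color;
    boolean destroyed = false;

    Brick(int x, int y, int width, int height, Color color) {
        this.x = x; this.y = y;
        this.width = width; this.height = height;
        this.color = color;
    }
}
// In BreakoutModel, the parallel arrays then collapse into one field:
//     private final List<Brick> bricks = new ArrayList<>();
```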
Two well-chosen polish features make a game feel finished. Five half-finished ones make it feel broken. Pick two.
Low-leverage polish takes hours and rarely shows up in a 5-minute demo.
```java
// in GameModel
public enum State { TITLE, PLAYING, GAME_OVER }
private State state = State.TITLE;

public void startGame() { state = State.PLAYING; }
public State getState() { return state; }
```

```java
// in GameView.paintComponent
switch (model.getState()) {
    case TITLE     -> drawTitle(g);
    case PLAYING   -> drawGame(g);
    case GAME_OVER -> { drawGame(g); drawGameOver(g); }
}
```
A state enum in the Model lets the View pick what to draw without holding any logic of its own. Adding a pause state later is a one-line change.
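For instance, pausing could be added with one new enum constant plus a toggle the Controller calls on a key press. A sketch; togglePause and the key binding are assumptions, not part of the lecture code.

```java
// Sketch of adding a pause state to the enum pattern above.
// togglePause() and its name are assumptions for illustration.
class PauseDemo {
    enum State { TITLE, PLAYING, PAUSED, GAME_OVER }
    State state = State.PLAYING;

    // the Controller would call this on the P key
    void togglePause() {
        if (state == State.PLAYING) state = State.PAUSED;
        else if (state == State.PAUSED) state = State.PLAYING;
    }
}
```

The View needs no logic change: it already switches on the state, so PAUSED just gets one more case.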
Free, legal sprites are one download away. Swapping rectangles for art is a small View change — the Model never knows.
OpenGameArt.org: huge library of free, community-contributed game art. Filter by license — most assets are CC0 (no attribution required) or CC-BY (credit the artist in your README.md).
Kenney.nl: curated asset packs aimed at beginners. Everything is CC0. Cleaner and more consistent than OpenGameArt, smaller selection.
To get a sprite on screen:
- Make an assets/ folder in your repo, alongside your .java files.
- Download the image into assets/ and commit it to Git.
- Load the image once, in the View's constructor, not in paintComponent, which runs every frame.
- Draw it with g.drawImage(...) using the Model's coordinates.
- If the license is CC-BY, credit the artist in your README.md.

alien.x and alien.y are still the same integers — the View just decides whether to draw a green rectangle or a sprite at that position. This is the MVC payoff.
Before:

```java
// in GameView
public void paintComponent(Graphics g) {
    super.paintComponent(g);
    for (Alien a : model.getAliens()) {
        g.setColor(Color.GREEN);
        g.fillRect(a.x, a.y, 30, 30);
    }
}
```
After:

```java
// in GameView
private Image alienSprite;

public GameView(GameModel model) {
    this.model = model;
    try {
        alienSprite = ImageIO.read(new File("assets/alien.png"));
    } catch (IOException e) {
        e.printStackTrace();
    }
}

public void paintComponent(Graphics g) {
    super.paintComponent(g);
    for (Alien a : model.getAliens()) {
        g.drawImage(alienSprite, a.x, a.y, 30, 30, null);
    }
}
```
No Model changes. No Controller changes. The four-argument drawImage would use the sprite's natural size — the six-argument version (with w, h) scales it to match your grid.
Pitfalls:
- If ImageIO.read is inside paintComponent, your game will run at 2 FPS. Load in the constructor.
- Scale the sprite to your cell size with drawImage(img, x, y, w, h, null).
- A path like assets/alien.png is not resolved relative to src/. The path is relative to where Java was launched.
- Many OpenGameArt downloads are sprite sheets: one PNG with many frames in a grid. Using them requires img.getSubimage(...) and a frame counter for animation.
For a Week 3 MVP, use single-image-per-entity. Treat sprite-sheet animation as a stretch goal after the game is fully playable.
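If you do attempt the stretch goal, frame extraction might look like this sketch. The 32x32 frames in a single row are an assumption of mine; check your sheet's actual layout.

```java
import java.awt.image.BufferedImage;

// Sketch of sprite-sheet frame extraction. FRAME_W/FRAME_H and the
// single-row layout are assumptions; adjust to your sheet.
class SpriteSheetDemo {
    static final int FRAME_W = 32, FRAME_H = 32;
    final BufferedImage sheet;
    int frame = 0;   // advanced by the game tick

    SpriteSheetDemo(BufferedImage sheet) { this.sheet = sheet; }

    BufferedImage currentFrame() {
        int cols = sheet.getWidth() / FRAME_W;
        // wrap the frame counter so animation loops
        return sheet.getSubimage((frame % cols) * FRAME_W, 0, FRAME_W, FRAME_H);
    }

    void tick() { frame++; }   // call every few timer ticks to slow the animation
}
```

The View would call currentFrame() where it previously used the single loaded image; the Model still knows nothing about any of this.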
This week needs at least 10 prompts. The graders care less about the count than about the one labeled "this is where the AI was wrong."
Prompt: "Add a ghost that chases Pac-Man."
What it did: Generated a Ghost class and a movement method that called Math.atan2 with floating-point coordinates. My Model uses integer grid cells.
Why it was wrong: The AI assumed continuous movement because I never told it the maze was a grid. The ghost ended up between cells, which broke my collision check.
What I did: Re-prompted with: "My maze is a 20x15 grid of integer cells. The ghost should also live on grid cells, moving one cell per tick toward Pac-Man's current cell. Use Math.signum(dx) and Math.signum(dy), and prefer the longer axis when both are nonzero."
What I learned: The AI defaults to physics-style movement unless told otherwise. My spec didn't say "grid-based" loud enough.
An entry like this is worth five "it worked great!" entries combined. It shows judgment.
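The grid-based chase that re-prompt describes might come back as something like this sketch. Integer.signum is the int counterpart of the Math.signum mentioned in the log; the class and field names are illustrative.

```java
// Sketch of one-cell-per-tick grid chase: prefer the longer axis
// when both are nonzero. Names are illustrative, not course code.
class GhostChaseDemo {
    int ghostX, ghostY;

    GhostChaseDemo(int x, int y) { ghostX = x; ghostY = y; }

    void chase(int pacX, int pacY) {
        int dx = pacX - ghostX, dy = pacY - ghostY;
        if (Math.abs(dx) >= Math.abs(dy) && dx != 0) {
            ghostX += Integer.signum(dx);   // close the longer (horizontal) gap
        } else if (dy != 0) {
            ghostY += Integer.signum(dy);   // close the vertical gap
        }
    }
}
```

Because the ghost only ever moves in whole cells, the integer collision check from the Model keeps working unchanged.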
A demo is not a screen recording of you playing. It is a guided tour. Plan it like a presentation.
| Time | What to show | What to say |
|---|---|---|
| 0:00 – 0:30 | Title screen, README spec on screen | "This is [game]. The core mechanic is X. Here's the spec I wrote." |
| 0:30 – 2:30 | Play the game — main mechanic working | "Notice how the player can do X, the enemy responds with Y, and Z triggers when…" |
| 2:30 – 3:30 | Demonstrate an edge case — die on purpose, win on purpose, hit a corner case | "Here's what happens when you lose / win / hit the boundary." |
| 3:30 – 4:30 | Open VS Code — point to one Model method and one View method | "This rule lives in the Model because… This drawing code lives in the View because…" |
| 4:30 – 5:00 | One slide or one sentence | "The hardest part was X. The thing I would change with more time is Y." |
The course was not really about Java or Swing. Step back and look at what you can do now.