Our first AI hackathon taught engineers what the tools do. Our second one taught them how to build with them. The difference came down to three deliberate changes — and one project brief that gave everyone a reason to ship.
The Setup
In Part 1, I wrote about our first internal AI hackathon. Twenty engineers, one day, the full SDD stack. I was honest about the outcome: it wasn't really a success. Some engineers got momentum. Most didn't. Setup ate coding time. Presentations ran long. People understood the concepts but froze when it was time to apply them.
I also wrote that we'd run a second one — same methodology, same anonymous feedback form — with every lesson baked in. This is that post.
The second hackathon wasn't a revolution. It was an iteration. Three specific changes, each one a direct response to feedback from the first round. The result: higher scores across the board, and — for the first time — engineers who actually built something by the end of the day.
Change 1: Setup Before the Clock Starts
In the first hackathon, nearly half the participants spent their coding time wrestling with tool installation. That's not a training problem — that's a logistics failure. If your event starts at 9 AM and someone's first hour is spent configuring API keys, you've already lost them.
For the second hackathon, I sent setup instructions 24 hours in advance. Not a suggestion — a requirement. Every developer had to have their environment configured, tools installed, and accounts created before they walked in.
The effect was immediate. People arrived ready to listen. There was no background anxiety about "will my setup work when it's time to code." When the presentations ended, they opened their terminals and started building. No context switch. No scramble. Just work.
One participant captured it perfectly:
"None — it went smoothly."
That was the answer to "What challenges did you face?" — from multiple engineers. In the first hackathon, that same question produced paragraphs about setup friction.
Change 2: Less Theory, More Doing
The first hackathon covered 17 tools. Seventeen. That's not a presentation — that's an avalanche. Even for engineers who were genuinely interested, the sheer volume created a paradox of choice. Too many options, not enough muscle memory with any of them.
For round two, I cut the presentations down hard. Shorter. Sharper. Right to the point. The ratio shifted from roughly 50/50 to closer to 30% presentation, 70% hands-on. Every concept that stayed in the slides had to earn its place by being directly applicable to what they'd build that day.
The feedback confirmed it worked. Presentation clarity improved. But the more telling signal came from the qualitative responses — nobody complained about information overload. Instead, the most common request was for more exercises and demos. That's the right problem to have.
"Maybe some more exercises and demos :)"
When your audience is asking for more of what you're offering, you know you've hit the right ratio.
Change 3: One Project, One Brief, One Goal
This was the biggest change, and the one I'm most convinced matters for any engineering leader reading this.
In the first hackathon, there was no shared project. Engineers were told to pick a task from their backlog in Jira and work on it using the AI tools. In theory, that sounds practical — real work, real context. In practice, it fragmented the room. Everyone was solving a different problem, in a different codebase, with different constraints. There was no shared context, no way to help each other, and no common reference point for discussion.
For the second hackathon, I wrote a detailed project brief. Every engineer built the same thing: Handshake — a micro-contract app where two people negotiate terms in real-time and seal the deal by simultaneously holding a button for three seconds.
The brief wasn't just a description. It included:
• A core user flow — create, share, negotiate, seal, permanent URL
• The interesting engineering problems — real-time sync, simultaneous hold mechanics, true immutability, access control
• A suggested tech stack — React + Vite, Convex for real-time backend, Clerk for auth, Vercel for hosting
• A database schema and component structure — not to be followed blindly, but to eliminate the "where do I even start?" paralysis
• MVP scope and stretch goals — so everyone knew what "done" looked like
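To make the "simultaneous hold" problem concrete: the deal seals once both parties' hold intervals overlap for the full three seconds. Here's a minimal sketch of how that check might work, assuming each party's hold is reported as start/end timestamps in milliseconds. Every name here is an illustrative guess, not code from the actual brief.

```typescript
// Hypothetical: a party's button hold, as millisecond timestamps.
// end === null means the button is still being held right now.
type Hold = { start: number; end: number | null };

const SEAL_HOLD_MS = 3_000; // required simultaneous hold duration

function isSealed(a: Hold, b: Hold, now: number): boolean {
  // Treat an ongoing hold as extending to "now".
  const endA = a.end ?? now;
  const endB = b.end ?? now;
  // The two hold intervals must overlap for at least SEAL_HOLD_MS.
  const overlapStart = Math.max(a.start, b.start);
  const overlapEnd = Math.min(endA, endB);
  return overlapEnd - overlapStart >= SEAL_HOLD_MS;
}
```

If one party releases early, the overlap window shrinks and the seal simply never fires; in a real implementation this check would run server-side against synced state rather than trusting either client's clock.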
This changed everything. Engineers weren't making architectural decisions and learning AI tools at the same time. The architecture was given. The decisions were given. The only job was to build — and to use AI to build faster.
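As a taste of the "true immutability" constraint from the brief, a sealed contract might be guarded by a small state check before any write. This is a hypothetical sketch in the brief's suggested language; the record shape and function names are invented for illustration, not taken from the actual schema:

```typescript
// Hypothetical shape of a Handshake contract record.
type ContractStatus = "draft" | "negotiating" | "sealed";

interface Contract {
  id: string;
  terms: string;
  status: ContractStatus;
  sealedAt: number | null; // ms timestamp, set exactly once
}

// Reject any mutation once a contract is sealed; otherwise return an
// updated copy (records are treated as immutable values).
function updateTerms(contract: Contract, newTerms: string): Contract {
  if (contract.status === "sealed") {
    throw new Error(`Contract ${contract.id} is sealed and cannot change`);
  }
  return { ...contract, terms: newTerms };
}
```

In the real app a guard like this would live server-side — for example inside a Convex mutation — so a client can't bypass the immutability by calling the database directly.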
The result? People actually shipped. Not toy examples. Not half-started repos. Working features, deployed, demoed live. For the first time, the hackathon produced something tangible.
The Data
Same five questions. Same 1–5 scale. Same anonymous form. Here's what moved:
Every metric went up. The biggest jump — confidence to apply the tools independently — is the one that matters most. In the first hackathon, that was the weakest score. Engineers understood what the tools could do but weren't sure they could use them alone. That gap closed by 12%.
Duration perception also shifted. "Just right" went from 50% to 75%. Fewer people felt rushed. Fewer felt it dragged. When you remove setup friction and give people a clear target, time stops being the enemy.
What the Comments Revealed
The qualitative feedback told a story the numbers couldn't.
People wanted more, not less. The dominant request wasn't "slow down" or "explain more" — it was "give us more exercises." That's a fundamental shift from the first hackathon, where the dominant theme was "I couldn't keep up."
Setup friction moved, not disappeared. A few people still mentioned setup challenges — but the nature changed completely. In hackathon one, it was about AI tooling: installing Claude Code, configuring skills, getting hooks working. In hackathon two, it was about the project's tech stack: Clerk authentication tokens, Vercel deployment config, frontend dependencies. The AI tooling setup was solved. The remaining friction was domain-specific — and that's a much smaller, more solvable problem.
People started thinking bigger. One engineer suggested future hackathons should simulate a real multi-team repo:
"We could work on implementing a large system, each member works on a small part of it, and then it merges together into something large scale, to simulate real repo development with many features and their contracts across frontend/backend/infra/CI/CD."
That's not the feedback of someone who's confused about the tools. That's someone who gets it and wants to push further. You don't get comments like that from a hackathon where people are still struggling with setup.
The split between experience levels persists. One engineer flagged that frontend tooling was unfamiliar to them and suggested splitting tracks for frontend and backend developers. This echoes the recommendation from Part 1 about running separate tracks for different experience levels — and it's still the right call.
Why the Project Brief Matters More Than You Think
The three changes I made were all direct responses to specific feedback. But if I had to pick one that moved the needle the most, it's the project brief.
Here's why: a shared project solves three problems simultaneously.
It eliminates decision paralysis. In the first hackathon, some engineers spent their entire coding window deciding what to build. With a brief, that decision is made for them. Every minute goes to building, not choosing.
It creates a shared context for learning. When everyone is building the same thing, they can help each other. "How did you handle the seal mechanic?" is a conversation that can only happen when two people are solving the same problem. In the first hackathon, everyone was on their own island.
It makes the outcome tangible. "I learned about AI tools" is hard to measure. "I built a working app with real-time sync and deployed it to Vercel" is concrete. It gives engineers something to point to. Something to demo. Something that proves to themselves that they can do this.
The project brief is the difference between a workshop and a hackathon. Workshops teach. Hackathons produce. If you're running an AI adoption initiative, you need both — but the hackathon is where belief is built.
The Updated Playbook
Everything from Part 1's playbook still stands. Here's what I'd add after running the second iteration:
Ship setup instructions 24 hours early. Not optional. Not "recommended." Required. Verify completion before the event starts.
Cut your presentation time in half. Whatever you think is the right amount, it's too much. If you can't explain it in the time you have, write a doc and link to it.
Write a project brief, not a prompt. Give engineers a real application to build. Include the schema, the component structure, the user flow. Remove every decision that isn't about learning the tools.
Design the brief around interesting engineering problems. The Handshake app worked because the simultaneous hold mechanic, real-time sync, and immutability constraints were genuinely interesting puzzles — not busy work.
Measure the same things every time. Same questions, same scale. You can't improve what you don't track, and you can't track what you keep changing.
The Bottom Line
Three changes. Setup in advance, shorter presentations, a shared project brief. None of them are revolutionary. All of them required listening to what engineers actually said — not what I assumed they needed.
Confidence jumped 12%. That's not a rounding error. That's engineers going from "I think I understand this" to "I know I can use this." And when people build something real with the tools — not just watch a demo, not just read the docs — that's when adoption stops being a corporate initiative and starts being a habit.
The first hackathon planted the seed. The second one proved it could grow.
This is Part 2 of a series on running AI adoption programs for engineering teams. Part 1 covers the SDD methodology and the lessons from our first hackathon.