Best Hackathon Themes #3: LLMs to Increase Operational Efficiency for Knowledge Workers
Large Language Models (LLMs) are the hot tech everyone’s talking about, and for good reason. But much as with the blockchain frenzy before it, the first instinct is almost always to apply them to customer-facing problems. That’s a mistake, and one my team and I almost made. Most businesses don’t actually need LLMs for their revenue drivers. The real value lies in turning LLMs inward, toward knowledge workers, to unlock operational efficiency.
That’s why this hackathon theme is so important: it shifts the focus from flashy use cases to practical, high-impact applications within day-to-day workflows. Plus, this hackathon was particularly special — it was sponsored entirely by my team, led by Tessa Bojan, a phenomenal engineer and an even better organizer. Tessa established a cross-functional committee with members from our Frontend Community of Practice and our UX design team. With her committee’s efforts, we brought together over 120 participants across 6 time zones to reimagine how LLMs can eliminate friction in our work.
Our prompt provided a clear objective: “Given you are a product owner, scrum master, designer, or developer, how can you use an LLM to improve your workflows?” Let’s get into our criteria for a top theme:
Inclusivity
This theme invites participants from various backgrounds to explore how LLMs can streamline internal workflows. By focusing on operational efficiency, it avoids overly technical challenges, ensuring that both technical and non-technical team members can contribute meaningfully. Simply put, everyone has some idea of the tedious tasks in their own workflows and those of their cross-functional peers.
Educational
Participants gained hands-on experience with LLMs, demystifying AI applications in everyday operations. Every participant left with practical knowledge of how to spot AI-driven improvements and integrate these tools into their own workflows.
Well-Defined Scope
The objective was clear-cut: leverage LLMs to streamline and enhance internal workflows. This focused goal provides participants with a concrete target, encouraging the development of practical, actionable solutions that can be readily implemented within the organization.
Accessibility of Tooling
Thanks to Ken Cross and Parthi Akkini’s teams, we had hosted endpoints for Ollama and the Azure OpenAI Service that participants could hit immediately. We also spun up a few repos with starter templates, so teams could dive straight into problem-solving instead of wrestling with setup.
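To give a flavor of how low the barrier was, here is a minimal sketch of hitting an Ollama-style generate endpoint from the Python standard library. The URL and model name are placeholders, not the actual internal hosts or models used at the event, and the summarization task is just an illustrative example:

```python
import json
import urllib.request

# Placeholder endpoint: swap in your hosted Ollama URL.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming request for Ollama's /api/generate endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )

def summarize_notes(notes: str, model: str = "llama3") -> str:
    """Ask the hosted model to condense a block of standup notes."""
    req = build_request(model, f"Summarize these standup notes in two sentences:\n{notes}")
    with urllib.request.urlopen(req) as resp:
        # With stream=False, the full completion arrives in the "response" field.
        return json.loads(resp.read())["response"]
```

A call like `summarize_notes("Blocked on API review; shipped the login fix.")` is the whole integration — no SDK, no fine-tuning, which is exactly why teams could start building within minutes.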
Dynamic Hosts and Judges
Thanks to all the buzz around LLMs, this was one of those hackathon themes that practically sold itself. Both engineers and designers were eager to dive in, with no heavy lifting required from the host to drum up excitement. That said, Tessa’s experience as a seasoned hackathon host still played a big role: she knew exactly how to energize the crowd and keep the event running smoothly.
On the judging front, we had to get creative due to the size of the hackathon. With nearly 20 teams participating, live presentations for all of them simply weren’t feasible. Instead, we split judging into two rounds:
- Round One: Each team submitted a recorded demo, which was reviewed by a panel of judges that included myself and leaders from product, design, and incubation. Teams were on standby to answer any follow-up questions.
- Final Round: We selected the top six teams to present live to our CIO and SVP of Engineering — two leaders who are deeply invested in operational efficiency and had the perfect perspective to evaluate the solutions.
This approach balanced efficiency with fairness, ensuring that every team got their ideas in front of judges while keeping the event manageable for everyone involved.
Reflection & Next Steps
On the surface, this hackathon was an opportunity to expose our knowledge workers to LLMs. Equally important, it helped to prove the immediate and tangible value that LLMs can deliver in our ecosystem. By focusing on operational efficiency, we showcased a small fraction of the low-hanging fruit available through tools like Ollama.
And here’s the best part: you don’t need a PhD or years of AI expertise to work with LLMs at this level. Our teams leveraged accessible APIs and their preferred programming languages to build practical, actionable solutions tailored to real-world workflows. With the right tools and some creativity, anyone can put this technology to work in meaningful ways.
A controlled hackathon like this is the perfect way to break the fear cycle around adopting LLMs, providing a safe space to explore their potential. As we found, the biggest wins weren’t flashy; they came from solving the small but persistent challenges that cause everyday friction.