Hackathon Part 1: Jane's Company-Wide AI Hackathon
What happens when 800 people across every function — engineering, design, support, people, marketing — put their regular work down for a day and ask: what's the most annoying part of my workflow, and can I fix it with AI? On February 25th, we found out. Over 100 demos later, here's what we learned.
If you haven't heard of Jane, here's the quick version: we're a practice management platform serving 65,000+ healthcare businesses. We help practitioners spend less time buried in admin and more time with patients. We're profitable, we're growing fast, and we think the future of healthcare software looks pretty different from what exists today. We're always looking for builders, so we thought we'd pull back the curtain on some of what that actually looks like 👀
This is Part 1 of two. What you're reading covers our company-wide hackathon, where close to 800 people across every function spent a full day building with AI. Part 2 is coming soon: our product organization, including engineering, product, and design, is running a dedicated AI SDK Hackathon in the coming weeks, hacking directly on the infrastructure powering JaneAI. We'll have a lot more to share from that one.
A Day to Build
On February 25th, our entire team set their regular work aside for a full day to answer one question: what's the most annoying, repetitive part of your workflow, and can you fix it with AI?
The ideas board had been filling up for weeks. By end of day, over 100 two-minute Loom demos were flooding our #all-hands-hackathon channel.
Where We're Starting From
AI is not new at Jane. A couple of years ago, we formed JaneX, the team that shipped AI Scribe, a tool that transcribes patient appointments and generates clinical notes automatically. Around the same time, our Dev Acceleration team in Platform and Enablement started focused work on AI-powered developer tooling, giving our engineers better tools to do their work faster.
But that's only one piece of the puzzle. Hundreds of thousands of practitioners use Jane every day across booking, scheduling, charting, billing, telehealth, and payments. Millions of patients book through Jane every month. The opportunity to make all of that meaningfully better is enormous. Practitioners got into their work to help people, not to spend their days on admin.
As a team, we feel a deep responsibility to not settle. To keep pushing what's technically possible so that every experience of Jane, from how a practitioner documents a session to how a patient books an appointment, gets meaningfully better as AI evolves. That's why we formed JaneAI.
JaneAI is building the infrastructure that makes AI possible across all of Jane. A shared foundation so every product team can move faster, and a reimagining of how practitioners actually interact with Jane day-to-day. Grounded in real customer problems, held to the same bar of trust and quality Jane has always been known for.
That work runs alongside a broader internal transformation. Our whole team has had full access to AI tools for a while, from ChatGPT to Claude Enterprise, connected to Jira, Notion, Slack, GitHub, Gmail, and more via MCP server integrations. We built governance tooling through RunLayer so we can move fast without creating security debt. And Mark Hazlett, our VP of Product, stepped into a dedicated Internal AI Transformation role to lead this full-time alongside JaneAI. The hackathon was one of his first moves.
What Engineers Built
This is where the day got genuinely interesting.
One engineer built a multi-agent coordination system for software development: multiple Claude agents working in separate git worktrees, claiming tasks through a shared Markdown status file with a locking mechanism to prevent corruption, shipping in parallel without colliding. The goal was context window optimization through parallelism. It ran. Multiple agents, one codebase, no chaos.
Another engineer set an agent loose on building a complete partner API client, first establishing the architecture and CI enforcement, then connecting the agent to our partners' GraphQL and REST schemas and letting it run overnight. He came back in the morning to a finished client.
A third tackled partner auth client creation, a problem that quietly costs platform teams enormous time. Every new integration that needs an auth client pulls an engineer into the same interrupt loop. The fix was a Slack bot, RealmOps Copilot, that takes a plain-English description and opens a ready-to-review GitHub PR.
And one engineer built an automated E2E test failure diagnosis step that runs as part of CI. When a test fails, Claude reads the Serenity artifacts, maps the failure across the Jane auth stack including multiple repos, and posts ranked root cause hypotheses directly in the GitHub Actions Summary tab. No artifact downloads. No log diving. You open the job and the hypotheses are already there.
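The "post it in the Summary tab" step relies on a standard GitHub Actions mechanism: anything a job appends to the file at `$GITHUB_STEP_SUMMARY` is rendered as Markdown on the job's summary page. A minimal sketch of that final step, assuming a separate (hypothetical) diagnosis step has already produced the ranked hypotheses:

```python
# Hypothetical sketch: render ranked root-cause hypotheses as Markdown
# and append them to the GitHub Actions job summary, so they appear in
# the Summary tab with no artifact downloads or log diving.
import os

def post_summary(hypotheses: list[str]) -> str:
    """Build the job-summary Markdown and append it to the file that
    $GITHUB_STEP_SUMMARY points at (set automatically by Actions)."""
    lines = ["## E2E failure diagnosis", ""]
    lines += [f"{i}. {h}" for i, h in enumerate(hypotheses, start=1)]
    report = "\n".join(lines) + "\n"
    summary_path = os.environ.get("GITHUB_STEP_SUMMARY")
    if summary_path:                      # present only inside Actions
        with open(summary_path, "a") as f:
            f.write(report)
    return report
```

Run locally (outside Actions) the function just returns the Markdown; inside a workflow it also lands on the job's Summary page.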
What Product Managers Built
PMs spent the day on context loss, the chronic gap between what a ticket says and what actually needs to happen.
One team built a CLI that pulls a Jira ticket plus its epic and linked issues, runs them through Jane's AI SDK, and generates a Business Context Brief before any code is written. Why does this feature exist? What are the constraints? What will break? It surfaces what's normally buried in Slack threads and in the heads of people who've been at the company for three years.

A few PMs built personal chief-of-staff agents: AI advisors with enough context about their specific role and decision-making style to serve as genuine thinking partners on the calls that don't have an obvious right answer.
What Designers Built
The design work centered on two things: eliminating coordination overhead and making qualitative research less manual.
One designer rebuilt a user interview synthesis assistant that replaces the most bias-prone parts of qualitative analysis with AI synthesis. A shared research participant pool now auto-merges contacts across all design teams, ending the duplicate outreach that comes from everyone maintaining their own notes. A post-research call reviewer pulls transcripts from Fellow, asks what you were trying to learn, and gives structured facilitation feedback while anonymously logging patterns across the team over time.
What Support, Marketing, Sales, and People Built
Support built a predictor grounded in 62,000 real customer conversations that tells you which questions a new feature will generate before it ships. That's not a support tool. That's a product planning tool. The UK team modeled capacity and hiring needs across multiple time horizons using calendar data and conversation exports. One support driver shipped her first HTML tool and wrote: "I was thrilled to discover that you can make Claude code things for you."
They also built competitor intelligence tools purpose-built for specific verticals, useful mid-demo. Customer success built personalized onboarding and offboarding communication generators that know something about the clinic they're addressing. The UK marketing team built a live Snowflake-connected metrics dashboard that replaced hours of manual reporting.
Our People team came in prepared from their onsite the week before. Interview agents designed to train hiring managers. Bias-reduction tools to surface blind spots in hiring decisions. Org health signal agents that scan for early indicators of team dynamics worth paying attention to. And serious work on what AI-expanded coaching could look like across the organization.
What Made It Work
No lane restrictions. The best work often came from people solving something outside their usual domain, without assumptions about how it had to be done.
Learning counted as much as shipping. One team hit a wall when their Jira connector started redacting Slack URLs in ticket descriptions, a real limitation with no day-of workaround. They documented it and kept going. The channel didn't only celebrate the things that worked.
And by early evening, people had independently started building tools to navigate all the demos. Someone built a Hackathon Explorer. Someone else generated a categorized Notion summary of every project. Oh, and this blog post was created using Claude with the Notion and Slack MCP integrations 💡
Why This Matters
The hackathon wasn't a morale event dressed up as innovation. It was a deliberate move to close the gap between the AI tools we have and the organizational fluency to use them well. The output isn't just a list of demos. It's a whole team with firsthand insight into the question: what can I actually build with this? That matters for our customers, our team, and our product. #lovejane
Privacy, Governance & Safety
Giving 800 people AI tools and an open brief requires trust, and trust requires systems. Jane's internal AI working group, which includes Security Engineering, Privacy, People, Product, and Legal, assessed every tool used during the hackathon, ran education sessions ahead of time, and established clear boundaries on data handling, integration approvals, and escalation paths.
We use RunLayer as our MCP governance platform so new tool integrations get security reviewed before they're live. We have a dedicated AI review group for anything touching regulated data, new models, or workflows with safety implications. And we're investing in an AI skills matrix across the organization so governance isn't just a policy document but something every team understands.
We're a healthcare SaaS company. The bar for getting this right is higher than most. And we think that constraint is actually an advantage. It forces the kind of rigor that makes AI adoption sustainable rather than just exciting.
Interested in being part of our team?
We're hiring people who want to build, not just consume. People who see a gap and close it without being asked. What you've read here is what that actually looks like right now. It's already happening across every function in the company. If that resonates, explore open roles at jane.app/careers. If you're a senior or staff engineer excited by greenfield work, take a closer look at JaneAI. That team is still early.