From Monoliths to Modularization: Rebuilding Authentication with Microservices

Georges Akouri-Shan
6 min read · Dec 30, 2024


When the cracks in the monolith became too big to ignore, the next step was obvious: break it up. But where do you start?

If you’re new to this series, start with my introductory post first.

[Image: cracks in a monolith while dozens of people continue to work on it]

For us, the challenge wasn’t just one monolith but many, many monoliths — at some point, creating new monoliths became the de facto strategy!

The situation reminded me of Admiral Nelson and the British Navy. Unlike the nimble force we now associate with British naval dominance, the fleet Nelson joined was bogged down by rigid hierarchical command structures. Captains were often stuck waiting for orders, even in the heat of battle. On top of that, the prevailing belief at the time was that bigger was better — more ships, bigger ships, more firepower. Nelson had other ideas. He valued speed and agility, proving repeatedly that smaller, nimbler fleets could outmaneuver their massive counterparts.

That’s where I found myself: surrounded by a fleet of monoliths, each slower and more unwieldy than the last. Like Nelson knocking on the door of naval convention, I began knocking on metaphorical doors in our systems, asking, “Where do we start? What’s the right place to prove a modular approach?” And, like Nelson’s initial proposals, many of my early suggestions were met with skepticism. The inertia of “how we’ve always done it” or “that’s too big of a change” is hard to overcome.

Eventually, I found my answer in the smaller authentication monolith.

Why Authentication Was First

The authentication monolith was isolated, straightforward, and carried fewer risks than the others. This was our proving ground — our mini Battle of Trafalgar — where we could test the waters of modularization. Tackling authentication let us adopt new architecture and technology with minimal risk and without overwhelming our team.

The decision wasn’t just technically strategic — it aligned perfectly with business priorities. Product and fraud teams had already prioritized modernizing authentication, giving us the alignment and momentum to “do it right.” Even better, mobile teams could leverage the same authentication microservices, increasing the return on this investment.

One thing was clear: this migration would set the tone for all future modernization efforts. My team was determined to ensure its success, knowing the lessons learned here would shape the path forward.

How We Broke It Down

Replacing the authentication monolith wasn’t just about upgrading technology — it was an opportunity to resolve horrendous load times, white-labeled user experiences, overwhelming tech debt, and extremely slow development timelines.

Since the user experience was too small to justify breaking it into microfrontends (MFEs), we opted for a 50/50 approach: the frontend as a monolith and the backend as a set of microservices.

[Figure: Monolith to Modularization, showing different architectural strategies for frontend & backend development]

Architecting the Solution

We chose OAuth2 with Proof Key for Code Exchange (PKCE) as the foundation for the new authentication system. This decision was driven by the planned use of an SPA for the frontend. PKCE provided an additional layer of security for public clients like SPAs, reducing the risks of Cross-Site Request Forgery (CSRF) and authorization code injection attacks.

[Figure: OAuth2 PKCE flow]

OAuth2 allowed us to standardize token-based authentication and enable scalability across multiple consumers, including native apps and third-party integrations.
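To make the flow concrete, here's a minimal sketch of the two PKCE values the SPA generates before redirecting to the authorization endpoint, using the browser's Web Crypto API (the function names are mine, not our production code):

```typescript
// URL-safe base64, as PKCE requires (RFC 7636).
function base64UrlEncode(bytes: Uint8Array): string {
  return btoa(String.fromCharCode(...Array.from(bytes)))
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
}

// 1. A high-entropy code_verifier, kept by the SPA for the token exchange.
function createCodeVerifier(): string {
  return base64UrlEncode(crypto.getRandomValues(new Uint8Array(32)));
}

// 2. The code_challenge (SHA-256 of the verifier) sent with the
//    authorization request; the server re-derives it at token exchange.
async function createCodeChallenge(verifier: string): Promise<string> {
  const digest = await crypto.subtle.digest(
    "SHA-256",
    new TextEncoder().encode(verifier)
  );
  return base64UrlEncode(new Uint8Array(digest));
}
```

Because only the SPA instance that started the flow knows the verifier, a stolen authorization code is useless to an attacker.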

Build

We built the backend as a set of OAuth2-compliant microservices with endpoints for token management, credential storage, session management, user management, and authorization.
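As a rough approximation of what one of those endpoints looked like, here is a hedged sketch of the token-exchange route; Express, the in-memory store, and every name in it are illustrative assumptions rather than our actual service code:

```typescript
import express from "express";
import { createHash } from "crypto";

const app = express();
app.use(express.urlencoded({ extended: false })); // token requests are form-encoded

// Stand-in for the credential/session stores the real services used.
const issuedCodes = new Map<string, { clientId: string; codeChallenge: string }>();

const sha256Base64Url = (input: string) =>
  createHash("sha256").update(input).digest("base64url");

app.post("/oauth2/token", (req, res) => {
  const { grant_type, code, code_verifier, client_id } = req.body;
  if (grant_type !== "authorization_code") {
    return res.status(400).json({ error: "unsupported_grant_type" });
  }

  // PKCE check: the hash of the verifier must match the challenge stored
  // when the authorization request was made.
  const record = issuedCodes.get(code);
  if (
    !record ||
    record.clientId !== client_id ||
    sha256Base64Url(code_verifier) !== record.codeChallenge
  ) {
    return res.status(400).json({ error: "invalid_grant" });
  }

  issuedCodes.delete(code); // authorization codes are single-use
  res.json({
    access_token: "<issued-token>", // the real service minted a signed token here
    token_type: "Bearer",
    expires_in: 900,
  });
});
```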

For the frontend, we built a lightweight SPA using React & TypeScript, giving us faster page load times and a more fluid user experience. We also stretched the capabilities of create-react-app (CRA) with significant shell scripting to deliver multiple white-labeled experiences.
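The white-labeling mechanism, reduced to its essence, looked something like the sketch below (file and variable names are hypothetical): a shell wrapper copies one partner's config into place before each CRA build, so a given bundle only ever contains that partner's branding.

```typescript
// src/theme.ts
// Built once per partner, e.g.:
//   cp partners/"$PARTNER"/theme.json src/theme.json && npm run build

import theme from "./theme.json"; // swapped in by the wrapper script per build

export interface PartnerTheme {
  displayName: string;
  primaryColor: string;
  logoUrl: string;
}

export const partnerTheme = theme as PartnerTheme;
```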

Deploy

Once the microservices were built, we needed a way to manage and secure the APIs effectively. This is where Apigee came in. By centralizing API management, it allowed us to enforce consistent policies, monitor usage, and simplify integrations with external partners. Teams could focus on building functionality while Apigee handled authentication, rate-limiting, and other operational concerns.

Given that production traffic was still flowing to the existing asset, we could comfortably deploy our services and frontend app without immediate concern. We then worked through issues by testing with a small friends-and-family population. From there, we slowly migrated traffic away from the monolith, starting with low-traffic partners and eventually moving them all onto the new system.
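A hypothetical sketch of that cutover gate: a partner reaches the new authentication stack only once it appears in an allowlist, and otherwise falls through to the monolith (the names, hostnames, and config source here are all assumptions):

```typescript
// Partners already migrated to the new stack; in practice a list like this
// would live in config or a feature-flag service, not in code.
const migratedPartners = new Set(["friends-and-family", "low-traffic-partner"]);

// Decide, per request, which authentication origin a partner should hit.
function resolveAuthOrigin(partnerId: string): string {
  return migratedPartners.has(partnerId)
    ? "https://auth.example.com"         // new SPA + microservices
    : "https://legacy.example.com/auth"; // existing monolith
}
```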

Aside from a few minor issues, this low-risk approach gave us a great deal of breathing room to bring the new system to production in a non-disruptive way.

Challenges Along the Way

Breaking apart the authentication monolith wasn’t without its challenges. While we successfully replaced the system with a modern SPA and microservices, the process revealed some important lessons:

Security Trade-offs with an SPA

Building the frontend as a client-side SPA seemed practical at the time, given the need for interactivity and custom branded outputs for each partner. In hindsight, using an SPA for an authentication app added unnecessary security complexity.

Server-side rendering (SSR) or hybrid approaches would have been a better fit. SSR simplifies token handling and mitigates some of the risks inherent to client-side apps, like secure cookie storage for tokens and better CSRF protection. This experience underscored the need to evaluate frontend architecture choices in the context of the app’s role.
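To illustrate the difference, here's a minimal sketch of the SSR-style alternative, assuming an Express server (the stack and helper below are illustrative, not what we ran): the token from the server-side code exchange lands in an httpOnly cookie that client-side JavaScript can never read.

```typescript
import express from "express";

const app = express();

// Hypothetical stand-in for the server-side call to the token endpoint.
async function exchangeCodeForToken(code: string): Promise<string> {
  return `token-for-${code}`; // placeholder
}

app.get("/auth/callback", async (req, res) => {
  const accessToken = await exchangeCodeForToken(String(req.query.code));

  // The token never reaches client-side JavaScript.
  res.cookie("session", accessToken, {
    httpOnly: true,  // invisible to scripts, blunting XSS token theft
    secure: true,    // sent over HTTPS only
    sameSite: "lax", // basic CSRF mitigation
    maxAge: 15 * 60 * 1000,
  });
  res.redirect("/");
});
```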

Managing Partner-Specific Customizations

Branded authentication experiences for partners introduced additional complexity. Managing these configurations with shell scripting often pushed CRA's capabilities to their limits, adding significant development and deployment overhead. We were also keen on ensuring that each partner deployment carried no reference to our other partners — a requirement that later vanished just as quickly as it had appeared.

Caching Issues in a Complex Architecture

Our ecosystem's custom architecture included a CDN, a load balancer, and a hosting platform, each from a different provider. This setup introduced caching challenges, with conflicting rules across layers leading to unpredictable behavior, such as rendering failures in production.

These caching issues, reproducible only in production, created debugging nightmares and more than a few sleepless nights spent resolving the 'white page of death': a scenario where the request for the React app returned a cached 200 status, but every script and stylesheet required to render the content returned a 404.

Resolving this required overriding cache rules across multiple layers. It also foreshadowed major infrastructure pain points that we would face in the future.
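The fix boiled down to one rule, applied consistently at every layer: never let the HTML shell outlive its assets. A minimal sketch of that rule at the origin, assuming an Express static server (the CDN and load balancer layers had to mirror the same policy):

```typescript
import express from "express";
import path from "path";

const app = express();

// Fingerprinted JS/CSS bundles never change once published, so they can be
// cached aggressively and marked immutable.
app.use(
  "/static",
  express.static(path.join(__dirname, "build/static"), {
    immutable: true,
    maxAge: "1y",
  })
);

// The HTML shell must always be revalidated. A stale cached shell can point
// at bundle URLs that no longer exist: the page returns 200 while every
// script and stylesheet it references returns 404.
app.get("*", (_req, res) => {
  res.setHeader("Cache-Control", "no-cache");
  res.sendFile(path.join(__dirname, "build/index.html"));
});
```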

Team Alignment & Culture Shifts

Perhaps the hardest challenge of all was the cultural shift underpinning this new architecture. Moving from a shared codebase to a modular one required teams to take strict ownership of their domains and full accountability for their decisions.

This wasn’t intuitive for everyone. Like Nelson trusting his captains to make decisions in the heat of battle, success hinged on having team leaders who could navigate ambiguity and act independently. But when teams lacked decision-making capability or were too accustomed to having every detail prescribed, the strategy faltered.

One of the first steps was aligning teams around API-first design principles. This meant designing APIs upfront, long before backend implementation, to ensure the contracts were clear and agreeable to frontend teams. Unsurprisingly, Swagger became our go-to tool for documenting API contracts, offering a standardized format that both frontend and backend teams could use to align on expectations.
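To show the flavor of such a contract, here's a small, hand-written slice in TypeScript; in practice the Swagger/OpenAPI spec was the source of truth (types like these can be generated from it), and every field here is illustrative:

```typescript
// Request/response contract for the token-exchange endpoint, agreed on
// before either side wrote implementation code.
export interface TokenRequest {
  grant_type: "authorization_code";
  code: string;
  code_verifier: string; // PKCE verifier from the SPA
  client_id: string;
  redirect_uri: string;
}

export interface TokenResponse {
  access_token: string;
  token_type: "Bearer";
  expires_in: number; // lifetime in seconds
  refresh_token?: string;
}

export interface OAuthErrorResponse {
  error: "invalid_request" | "invalid_grant" | "unsupported_grant_type";
  error_description?: string;
}
```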

Defining service and frontend boundaries was another challenge. What seemed simple in theory — ‘each team owns X’ — quickly became a gray area when overlapping concerns emerged. For example, should the frontend or the backend manage session expiry and refresh logic? Or should token validation logic belong to the session management service or the credential store? These debates often required cross-team alignment sessions to hammer out clear ownership and expectations.
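To make one of those debates concrete, here is a way the session-refresh question can land (an illustrative resolution, not necessarily the one we shipped): the backend owns expiry decisions, and the frontend only reacts to a 401 by asking the session service for a refresh and retrying once.

```typescript
// Frontend-side wrapper: no expiry math in the client, just a single
// refresh-and-retry when the backend says the session is no longer valid.
// The /session/refresh path is a hypothetical endpoint name.
async function fetchWithRefresh(
  input: RequestInfo,
  init?: RequestInit
): Promise<Response> {
  let response = await fetch(input, init);
  if (response.status === 401) {
    const refreshed = await fetch("/session/refresh", { method: "POST" });
    if (refreshed.ok) {
      response = await fetch(input, init); // retry once with the new session
    }
  }
  return response;
}
```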

What Came Next

The authentication migration proved we could deliver modular solutions without disrupting user experiences. More importantly, it gave us the clarity and confidence needed to tackle much larger monoliths.

Unlike authentication, the other monoliths were deeply coupled, with thousands of developers’ code layered over decades. Addressing them required a far more intricate strategy — one that would push us to adopt MFEs to manage shared user flows and enable true team independence at scale.

This felt like the naval campaigns Nelson faced after Trafalgar. While Trafalgar proved that decentralized decision-making could secure victory, the real challenge lay in turning those principles into a scalable, repeatable strategy for future battles. Similarly, we now faced the challenge of scaling that success to much larger, deeply entangled systems.
