One month ago, we launched three new features in Glovo: Videos, Social, and Picks. They give users a more engaging, social, and personalized way to browse, introducing new ways to explore, connect, and save their favorite stores with minimal friction. All three features live in the Store Wall, a screen within our app where users see a list of stores or products for a specific category.
We wanted to deliver these features within a strict 90-day timeline, and that was possible because our architecture supports high modularity, scalability, and rapid iteration. This post explores the architecture we built a year ago, powered by a plugin-based, server-driven design with a Backend-for-Frontend (BFF) service layer. We will explain how this approach allowed more than ten teams to work in parallel, coordinate complex dependencies, and maintain stability for a seamless user experience.
Engineering Videos, Social, and Picks
1. Videos: Enhancing User Engagement with Dynamic Content
Adding Videos to the Store Wall meant handling high-resolution media while maintaining performance. The goal was to offer a new way of discovering products through videos, so we added a new video carousel to the Store Wall.
This new feature delegates the domain logic to a specific microservice in charge of customer content discovery. This service encapsulates the logic for calling multiple microservices across different domains to retrieve videos available at the user’s location, filtering to keep only those related to available products, and enriching them with relevant information.
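The shape of that pipeline can be sketched as follows. This is a simplified illustration of the fetch-filter-enrich flow, not Glovo's actual service API; the `Video` model and function names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Video:
    id: str
    product_id: str
    url: str
    title: str = ""

def build_video_carousel(location, fetch_videos, fetch_available_product_ids, enrich):
    """Hypothetical content-discovery flow: fetch videos for the user's
    location, keep only those tied to available products, and enrich them."""
    videos = fetch_videos(location)                         # videos near the user
    available = set(fetch_available_product_ids(location))  # products in stock
    playable = [v for v in videos if v.product_id in available]
    return [enrich(v) for v in playable]
```

The dependencies are injected as callables so the content-discovery service can swap in the real downstream clients for each domain.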
2. Social: Integrating Friend Recommendations and Social Proof
The Social feature leverages user networks by recommending products your friends often order. This involved connecting data streams from social profiles, orders, and product ratings.
Like Videos, this feature delegates the domain logic to a specific service in charge of customer content discovery, which encapsulates the logic of getting the product recommendations (provided by Data models), filtering them by store and product availability, and enriching them with the relevant information.
3. Picks: A Modular Approach to Personalized Curation
A pick is a list of stores that the customer groups together, similar to creating a playlist in a music app. Picks allow users to organize their favorite stores, adding personalization to the Store Wall. This introduced specific requirements, such as modularity for different types of stores and future support for sharing and social integration.
In the Store Wall, we provide quick and easy access to the user’s picks and favorites. For this, the Store Wall delegates to the Picks microservice the logic of getting the users’ picks, filtering by store availability, and enriching with relevant information.
Evolution of the Store Wall with a Plugin Architecture and Server-Driven UIs: Building for Flexibility and Scalability
We redesigned the Store Wall screen a year ago to use an architecture that enables dynamic and personalized store walls powered by templates. We needed to deliver high-performing, category-specific experiences without overloading the client application. Here’s how we structured it:
- Templates: A Template is a group of components/features (internally known as Modules) inside a screen, together with their specific positions. You can think of it as a description of how the visual components on your screen will be presented. Some of those components come from configuration, enabling the business to inject special features, and others come from Machine Learning (ML) models that target a better experience and higher adoption per user.
Technically speaking, each Template provides a list of independent modules to execute, together with the configuration details for rendering a specific Store Wall. For instance, the “Food” category has a different layout from “Retail,” with each view receiving specific modules in pre-defined positions.
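A minimal sketch of that structure, with field names that are our own illustration rather than the production schema:

```python
from dataclasses import dataclass, field

@dataclass
class ModuleSlot:
    module_id: str          # which plugin to run, e.g. "video_carousel"
    position: int           # where it renders on the screen
    critical: bool = False  # whether its failure should fail the whole screen
    config: dict = field(default_factory=dict)

@dataclass
class Template:
    category: str   # e.g. "food" or "retail"
    slots: list     # ordered ModuleSlots for this layout

# A "food" layout receives its modules in pre-defined positions.
food_template = Template(
    category="food",
    slots=[
        ModuleSlot("video_carousel", position=0),
        ModuleSlot("store_list", position=1, critical=True),
        ModuleSlot("friend_recommendations", position=2),
    ],
)
```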

- Template Selection: We allow selecting different templates for the Store Wall screen based on category, country, city, user segment, and even specific dates, which is helpful for events like Saint Valentine’s Day or Halloween. We also run experiments that test different templates for the same category to find the best alternative. As an example, here we have a template for a restaurant category in Spain where each selected module is marked with a red square.
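Selection can be modeled as matching rules against the request context. The rule fields and first-match strategy below are hypothetical, shown only to make the idea concrete:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class TemplateRule:
    template_id: str
    category: Optional[str] = None
    country: Optional[str] = None
    start: Optional[date] = None  # optional date window, e.g. Halloween
    end: Optional[date] = None

    def matches(self, category, country, today):
        if self.category and self.category != category:
            return False
        if self.country and self.country != country:
            return False
        if self.start and self.end and not (self.start <= today <= self.end):
            return False
        return True

def select_template(rules, category, country, today, default="base"):
    # First matching rule wins; fall back to a default template.
    for rule in rules:
        if rule.matches(category, country, today):
            return rule.template_id
    return default
```

Ordering the rules from most to least specific keeps seasonal overrides (like the Halloween window) ahead of the everyday per-country layout.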
- Orchestrator: The Orchestrator is the central execution engine that controls the process of building all modules. It receives a Template as input, together with a RequestContext containing the current request details, such as user location and device information. With these, it calls every module that needs to be executed, reports metrics for each module (successes, failures, and latencies), and throws an error if a critical module fails. It is also responsible for ensuring that each module resolves within its latency threshold and that the whole template is generated before the screen’s timeout. This allows us to:
— Prevent failures and ensure a seamless experience: If a non-critical module fails, it is ignored and all the other content is returned to the user. The same happens if a module takes longer than expected to execute. This also prevents teams collaborating on the Store Wall from breaking it when a bug or unexpected behavior is introduced.
— Improve accountability: As we report metrics for the execution of each module, each team can have their monitors and metrics based on the modules they own.
— Improve performance: Since the modules are executed in parallel, latency is dictated by the slowest module on the screen, allowing us to introduce new modules into the same screen without penalizing overall performance.
- Modules: A Module is a plugin that can be injected into a template. Modules are independently developed features, which allowed us to scale up with the Videos, Social, and Picks features while keeping our system modular and maintainable. Each module handles its own:
— Backend logic and data retrieval: Each module includes specific logic for fetching relevant data and transforming it into a server-driven component.
— Monitoring and metrics: Individual modules log their own metrics, which allows us to monitor and address module-specific issues without impacting the entire Store Wall.
- Data Providers: Even though modules execute in parallel, some of them require the same data to compute their business logic. As an efficiency measure, we introduced a proxy design pattern around data providers to ensure that each piece of data (e.g., store information, city metadata) is fetched only once per user request. For example, if three modules require store details, the data is fetched once by a Proxy and shared across the modules, reducing redundant requests to backend services and improving response time and user experience.
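The proxy idea can be sketched as a request-scoped cache in front of the real data providers; the class and method names here are illustrative:

```python
class DataProviderProxy:
    """Request-scoped proxy: each (provider, key) pair is fetched at most
    once per user request and shared across all modules that need it."""

    def __init__(self):
        self._cache = {}
        self.fetch_count = 0  # exposed for observability in this sketch

    def get(self, provider_name, key, fetch):
        cache_key = (provider_name, key)
        if cache_key not in self._cache:
            self.fetch_count += 1
            self._cache[cache_key] = fetch(key)  # real backend call happens once
        return self._cache[cache_key]
```

A fresh proxy instance per incoming request keeps the deduplication scoped to one user request, so no stale data leaks between requests.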
- Server-Driven Components: The last piece of the architecture that enables high reusability is the Server-Driven UI layer that we injected on top of the plugin architecture. This layer acts as a support package used by Modules to render server-driven elements. This approach decouples backend logic from frontend dependencies, allowing us to reuse components across different modules/screens, which is essential for high-scale apps with numerous feature experiments and regional differences.
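In such a layer, a module's output is typically a declarative payload that the client renders generically. A hypothetical sketch, assuming a simple JSON component contract:

```python
import json

def video_carousel_component(videos):
    """Emit a server-driven description of a carousel: the client only knows
    how to render generic component types, not the Videos feature itself."""
    return {
        "type": "carousel",
        "items": [
            {"type": "video_card", "title": v["title"], "media_url": v["url"]}
            for v in videos
        ],
    }

payload = video_carousel_component(
    [{"title": "Sushi rolls", "url": "https://example.com/v1.mp4"}]
)
serialized = json.dumps(payload)  # what the BFF would return to the client
```

Because the client renders any `carousel` the same way, the component type can be reused by other modules and screens without a client release.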
Here, you can find an overview of the flow with all the components:
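The flow described above (the Orchestrator fanning out to modules in parallel, tolerating non-critical failures, and enforcing per-module timeouts) can be sketched like this, with illustrative names:

```python
from concurrent.futures import ThreadPoolExecutor

class CriticalModuleError(Exception):
    pass

def build_screen(modules, context, timeout_s=1.0):
    """modules: list of (name, critical, fn) where fn(context) -> component.
    Runs every module in parallel; drops non-critical failures or timeouts,
    raises if a critical module fails. Latency ~= the slowest module."""
    components = []
    with ThreadPoolExecutor(max_workers=max(1, len(modules))) as pool:
        futures = [(name, critical, pool.submit(fn, context))
                   for name, critical, fn in modules]
        for name, critical, future in futures:
            try:
                components.append(future.result(timeout=timeout_s))
            except Exception:  # module failure or per-module timeout
                if critical:
                    raise CriticalModuleError(name)
                # non-critical: record metrics and skip, keep the rest intact
    return components
```

In a real implementation each branch would also emit the success/failure/latency metrics the post describes; the sketch keeps only the control flow.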

Ensuring Stability and Scalability Under Tight Deadlines
Even though our architecture allowed us to move fast and develop features independently, the critical nature of our 90-day deadline required meticulous dependency management and stability assurance, as each module had downstream dependencies that would receive more traffic. To confirm that our features were production-ready and to ensure a stable release, here’s how we achieved the simultaneous rollout of all three features:
- Real-Time Monitoring: Each module collected metrics, enabling us to monitor and respond to issues in real-time. We set up dedicated alerts for each feature, ensuring a quick response to any emerging issues.
- Load Testing: We simulated traffic load across all features to understand system behavior under peak conditions, adjusting resource allocation to manage surges without compromising performance.
- Caching: Some modules share some of the data they need as input to their logic. Caching strategies designed to handle parallel requests effectively ensured minimal impact on downstream services.
Conclusion: Powering the Future of Delivery with a Modular Store Wall
With several teams working on various features simultaneously, the Store Wall’s modular design proved its value: it not only enabled these parallel efforts but also minimized code conflicts and dependencies, contributing to a highly efficient and independent workflow. This decoupled approach ensured that new features could be added seamlessly without compromising app performance, allowing us to maintain a consistent user experience.
With features like Videos, Social, and Picks, we’re taking steps towards a more engaging, user-centric delivery app that enhances the user experience while preserving stability and scalability. This architecture will continue to support rapid feature introduction, evolving the Store Wall into a personalized, content-rich experience for all users.
Authors:
Victoria Perelló, Software Engineer from Glovo
Hernán Malatini, Software Engineer from Glovo
How We Engineered a Scalable Architecture to Power Videos, Social, and Picks in Our Delivery App was originally published in The Glovo Tech Blog on Medium.