Building a Booking System That Does Not Double-Book
I recently built a multi-tenant booking platform called ReserveFlow. It is a scheduling system for service businesses (dental clinics being the primary use case) that handles resource reservations, per-branch inventory and pricing, and real-time calendar updates across connected clients.
The most interesting technical problem was concurrency. How do you prevent two people from booking the same dentist at the same time?
The Problem
Here is a scenario that breaks most naive booking implementations. User A opens the calendar and sees 2:00 PM is available. User B opens the same calendar and also sees 2:00 PM available. Both click "Book." Without any concurrency control, both writes succeed, and now you have a double booking.
You might think "just check for overlaps before inserting." That does not work either: between the check and the insert, another request can sneak in. This is a textbook check-then-act race condition, and it happens more often than you would expect in a system where multiple staff members are scheduling at the same time.
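The overlap test itself is simple; the race is in when you run it, not what it computes. As a sketch (the names here are mine, not ReserveFlow's), two half-open intervals collide exactly when each one starts before the other ends:

```typescript
// Hypothetical helper illustrating the overlap predicate used by the
// booking flow: [startsAt, endsAt) intervals collide when each starts
// before the other ends. Back-to-back bookings do not count as overlap.
interface TimeRange {
  startsAt: Date;
  endsAt: Date;
}

function overlaps(a: TimeRange, b: TimeRange): boolean {
  return a.startsAt < b.endsAt && a.endsAt > b.startsAt;
}
```

The predicate is trivially correct in isolation; the whole point of the sections below is making sure nothing changes between evaluating it and writing the row.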
Distributed Locking with Valkey
The solution I went with is a distributed lock using Valkey (a Redis-compatible store). Before any booking write, the API acquires a lock keyed to the specific resource and time range. If another request tries to book the same resource in an overlapping window, it has to wait for the lock or fail gracefully.
public async Task<Result<Reservation>> CreateBooking(BookingRequest request)
{
    var lockKey = $"booking:lock:{request.ResourceId}";

    await using var lockHandle = await _lockService.AcquireAsync(
        lockKey,
        ttl: TimeSpan.FromSeconds(30),
        retryCount: 3,
        retryDelay: TimeSpan.FromMilliseconds(200));

    if (lockHandle is null)
        return Result<Reservation>.Conflict("Resource is currently being booked.");

    // Inside the lock: check for overlaps
    var hasOverlap = await _db.Reservations.AnyAsync(r =>
        r.ResourceId == request.ResourceId &&
        r.Status != ReservationStatus.Cancelled &&
        r.StartsAt < request.EndsAt &&
        r.EndsAt > request.StartsAt);

    if (hasOverlap)
        return Result<Reservation>.Conflict("Time slot is no longer available.");

    // Safe to write
    var reservation = new Reservation { /* ... */ };
    _db.Reservations.Add(reservation);
    await _db.SaveChangesAsync();

    await _hub.Clients.Group(tenantId)
        .SendAsync("BookingCreated", reservation);

    return Result<Reservation>.Ok(reservation);
}

The lock has a 30-second TTL as a safety net: if the process crashes while holding the lock, Valkey releases it automatically after the timeout. Retries are spaced 200 ms apart; adding jitter or exponential backoff to that delay would keep concurrent requests from hammering the lock in lockstep.
I chose Valkey over pessimistic database locks for a few reasons. It is faster because it is in-memory. The lock lifetime is decoupled from the database transaction, so a slow query does not hold a lock open indefinitely. And the TTL guarantees cleanup even in failure scenarios.
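Under the hood, this style of lock boils down to Valkey's atomic set-if-absent with a TTL on acquire, plus a compare-and-delete on release so a lock that expired and was taken over by another caller is never released by mistake. A minimal in-memory sketch of that contract (the class and method names are mine, standing in for the real `_lockService` backed by Valkey's `SET key token NX PX ttl`):

```typescript
// In-memory stand-in for the Valkey lock contract, for illustration only.
// tryAcquire mirrors SET key token NX PX ttl; release mirrors the usual
// compare-and-delete script: delete the key only if the token matches.
class InMemoryLockStore {
  private locks = new Map<string, { token: string; expiresAt: number }>();

  tryAcquire(key: string, token: string, ttlMs: number, now = Date.now()): boolean {
    const existing = this.locks.get(key);
    if (existing && existing.expiresAt > now) return false; // held, not expired
    this.locks.set(key, { token, expiresAt: now + ttlMs });
    return true;
  }

  release(key: string, token: string): boolean {
    const existing = this.locks.get(key);
    if (!existing || existing.token !== token) return false; // stale or missing
    this.locks.delete(key);
    return true;
  }
}
```

The per-caller token is what makes the TTL safe: without it, a request whose lock expired mid-operation could delete a lock now held by someone else.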
Why Not Just Use a Database Constraint?
A database-level constraint on (resource_id, time_range) would catch conflicts at the database level, and that is a valid approach. (Note that a plain unique constraint cannot express range overlap; in Postgres this would be an exclusion constraint with `EXCLUDE USING gist`.) But it has a few downsides for this use case.
First, it turns conflicts into exceptions. The database throws a constraint violation, which you then catch and translate into a user-friendly error. With distributed locking, you know before attempting the write whether the slot is available.
Second, it does not help with inventory. ReserveFlow allows booking products alongside services (a dental cleaning plus a teeth whitening kit, for example). Stock needs to be checked and decremented atomically. A unique constraint on reservation times does not cover product inventory.
Third, the lock gives you a natural place to do all pre-booking validation in one protected section. Check time overlap, check inventory, validate pricing, then write. Everything inside the lock is guaranteed to be consistent.
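Structurally, everything inside the lock reduces to an ordered list of checks followed by a write. A hypothetical sketch of that shape (none of these names come from ReserveFlow; the point is the short-circuit ordering, which is only safe because the caller holds the lock):

```typescript
// A check returns null on success or a conflict reason on failure.
type Check = () => string | null;

// Runs pre-booking validations in order and reports the first failure.
// Because the caller holds the distributed lock, state cannot change
// between these checks and the write that follows them.
function validateBooking(checks: Check[]): { ok: boolean; reason?: string } {
  for (const check of checks) {
    const reason = check();
    if (reason !== null) return { ok: false, reason };
  }
  return { ok: true };
}
```

In ReserveFlow's case the list would be: time overlap, inventory, pricing, in that order, followed by the insert.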
Real-Time Updates with SignalR
Preventing double bookings on the backend is only half the problem. If User A books 2:00 PM, User B's calendar should immediately reflect that the slot is taken. Otherwise User B fills out the booking form, submits it, and gets a conflict error. That is a bad user experience.
SignalR handles this. Every connected client joins a group based on their tenant ID. When a booking is created, moved, or cancelled, the API broadcasts the event to the group. The Angular client listens for these events and updates the calendar grid in place.
this.hubConnection.on('BookingCreated', (reservation: Reservation) => {
  this.reservations.update(current => [...current, reservation]);
});

this.hubConnection.on('BookingMoved', (reservation: Reservation) => {
  this.reservations.update(current =>
    current.map(r => r.id === reservation.id ? reservation : r)
  );
});

The SignalR backplane runs on Valkey (the same instance as the lock service), so if the API scales to multiple nodes, broadcasts still reach all connected clients regardless of which node they are connected to.
Tracing the Booking Flow with OpenTelemetry
A distributed lock adds moving parts. When something goes wrong, you need to know where and why. Was the lock acquisition slow? Did the overlap check return a false positive? Did the SignalR broadcast fail?
I instrumented the entire booking flow with OpenTelemetry. Custom activity spans wrap lock acquisition, overlap checks, database writes, and SignalR broadcasts. Each span carries attributes like resource_id, tenant_id, lock state, and outcome.
using var activity = ActivitySource.StartActivity("Booking.Create");
activity?.SetTag("resource_id", request.ResourceId);
activity?.SetTag("tenant_id", tenantId);

// ... booking logic ...

activity?.SetTag("outcome", hasOverlap ? "conflict" : "created");

SigNoz collects the traces and gives me a waterfall view of every booking. I can see exactly how long the lock took to acquire, whether the overlap check hit the database index efficiently, and how long the SignalR broadcast took. When a user reports a slow booking, I search by correlation ID and get the full picture.
Multi-Branch Complexity
ReserveFlow supports businesses with multiple locations. A dentist can work at two clinics. A product can have different stock levels at each branch. A service can have different pricing per branch.
This is modeled with junction tables. ResourceBranch links resources to branches. ProductBranch tracks stock per branch. ServiceItemBranch stores pricing overrides per branch. When a user selects a branch in the calendar, the available resources, product inventory, and service prices all filter to that branch.
The booking transaction validates everything against the selected branch. If a product has 3 units in stock at Branch A and 0 at Branch B, you can book it at Branch A but not Branch B. This check happens inside the distributed lock, so two concurrent bookings at the same branch cannot oversell inventory.
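The per-branch inventory rule is a lookup in the junction data for the selected branch only. A sketch with invented types (the real ProductBranch table lives in the database; this just illustrates the rule):

```typescript
// Hypothetical in-memory shape of ProductBranch junction rows.
interface ProductBranchStock {
  productId: string;
  branchId: string;
  stock: number;
}

// A product is bookable at a branch only if that branch's row exists
// and has enough units; stock held at other branches is irrelevant.
function canBookProduct(
  rows: ProductBranchStock[],
  productId: string,
  branchId: string,
  quantity: number
): boolean {
  const row = rows.find(r => r.productId === productId && r.branchId === branchId);
  return row !== undefined && row.stock >= quantity;
}
```

In production this check, and the subsequent stock decrement, both run inside the distributed lock, which is what makes the read-then-decrement sequence safe.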
What I Would Do Differently
If I were starting over, I would add event sourcing for the booking lifecycle. Right now, a reservation has a status field (Confirmed, Cancelled, Completed) and gets updated in place. Event sourcing would give me a full audit trail of every state change, which matters in industries like healthcare where you need to know who changed what and when.
I would also build the calendar grid as a web component instead of an Angular component. The drag interactions are complex enough that decoupling them from the framework would make testing easier and open up the possibility of using the same calendar in a different frontend.
But the core concurrency model (distributed lock, overlap check, atomic write, real-time broadcast) is something I am confident in. It handles the hard problem correctly, and the observability layer makes it debuggable in production.
Alvin Almodal
Cloud & Data Engineering Consultant. Your partner for cloud-native builds and data pipelines.