From Legacy to Modern: Creating Self-Testable APIs for Seamless Integration
In every project there comes a phase when a legacy application has to be rewritten as a new one. Such a rewrite is often not only unavoidable but also extremely challenging. One of the primary goals of such a migration is to ensure that both systems - new and old - produce consistent outputs.
In this article, we'll dive into how we built a self-testable API mechanism. The goal is simple - a system that can validate its own inputs and outputs and notify the developers of any inconsistencies. With this system in place, we can run thousands of different scenarios and verify that both systems produce identical results.
We’ll cover the architecture of such a setup, the challenges we faced along the way, and the limitations of keeping both systems running side by side.
Legacy
Imagine a legacy system that has been serving various business-critical operations for about 10 years. While it has proven reliable over the years, it follows several outdated standards that make new integrations nearly impossible. One of the primary complications is the diversity of data formats it handles. The legacy system supports both JSON and XML - an unfortunate leftover of an unsuccessful attempt to move from XML to JSON - which adds complexity to any integration. Another challenge is that the endpoints are used by a variety of third-party clients, so we can't make even the most minimal changes: we don't know how those clients parse the data or how their integrations are built. This not only increases the complexity of the system, but also means we have to consider how it can be refactored and replaced without disrupting ongoing operations. And lastly, the legacy system doesn't follow a RESTful architecture. It mixes naming conventions, input variants (query parameters, body), authentication mechanisms, and so on, which makes it less predictable and harder to standardize across different parts of the system.
Given these constraints, our primary challenge is introducing the new system alongside the legacy one while maintaining consistent and verifiable outputs. Here, the self-testable API we developed (more on that later) acts as a bridge, allowing us to automate the validation process between the two systems.
New application
Our new system is built on modern principles, but because the legacy API is used by numerous third-party clients, certain aspects must remain unchanged during the migration phase. Specifically, we can’t modify the authentication mechanism, API paths, or input formats - keeping these unchanged is what guarantees seamless compatibility with existing clients.
The main focus now is on migrating the code to the new system, while taking great pains to cover it with as many tests as possible to ensure its reliability. Only once the refactored code is fully tested and validated can we move forward with the plan to expose the API under a new version, /api/v2. The self-testable API is the main component of this process, as it allows us to validate that both the new and the legacy system return the same outputs without disrupting current clients.
Overview
Now it's time to look at both systems from an architectural perspective. The setup is quite straightforward, but it's still worth having a clear picture of it. Both systems share a single MySQL database, so they access the same dataset. In front of the two systems sits an HAProxy load balancer that routes traffic based on predefined rules.
Solution Design: Step-by-step implementation
So, it’s time to present the technical details of how we implemented this self-testable mechanism.
At a high level, there are two systems: the new one and the legacy one. When an API request is received, we forward it, along with all its data, directly to the legacy system and wait for its response. As soon as the legacy system responds, we serve that output to the client to prevent any delays. At the same time, we gather all the incoming request data into a message and send it to RabbitMQ using Symfony Messenger. As part of the asynchronous processing, we replay the same API call against the new system for validation. This entire process runs in the background, so it has no impact on the performance of the primary API response.
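A minimal sketch of what that forwarding step could look like in a Symfony controller. The class names, the message DTO, and the internal legacy host are assumptions made for illustration, not the exact production code:

```php
<?php
// Sketch only: class names, namespaces, and the internal legacy host are assumptions.

// src/Message/ShadowRequest.php - the message that gets queued on RabbitMQ.
namespace App\Message;

final class ShadowRequest
{
    public function __construct(
        public readonly string $method,
        public readonly string $uri,
        public readonly array $headers,
        public readonly string $body,
        public readonly string $legacyBody,
        public readonly int $legacyStatus,
    ) {
    }
}

// src/Controller/ShadowProxyController.php - forwards to legacy and queues the shadow check.
namespace App\Controller;

use App\Message\ShadowRequest;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\Messenger\MessageBusInterface;
use Symfony\Contracts\HttpClient\HttpClientInterface;

final class ShadowProxyController
{
    public function __construct(
        private HttpClientInterface $httpClient,
        private MessageBusInterface $bus,
    ) {
    }

    public function __invoke(Request $request): Response
    {
        // 1. Forward the incoming request, untouched, to the legacy system.
        //    (A real proxy would also filter hop-by-hop headers such as Host.)
        $legacyResponse = $this->httpClient->request(
            $request->getMethod(),
            'http://legacy.internal'.$request->getRequestUri(),
            ['headers' => $request->headers->all(), 'body' => $request->getContent()]
        );
        $legacyBody   = $legacyResponse->getContent(false);
        $legacyStatus = $legacyResponse->getStatusCode();

        // 2. Queue the same call (plus the legacy output) for background validation.
        $this->bus->dispatch(new ShadowRequest(
            $request->getMethod(),
            $request->getRequestUri(),
            $request->headers->all(),
            $request->getContent(),
            $legacyBody,
            $legacyStatus,
        ));

        // 3. Serve the legacy response to the client immediately, so latency is unaffected.
        //    (A real implementation would also copy relevant response headers, e.g. Content-Type.)
        return new Response($legacyBody, $legacyStatus);
    }
}
```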
Once we receive the output from the new system, we compare it to the legacy system’s response. If there’s a mismatch between the two, an error is logged, and the development team is automatically notified, allowing us to detect bugs or inconsistencies between the systems without affecting the client’s experience.
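On the consumer side, the comparison could be handled by a Messenger handler along these lines - again a sketch under the same assumptions; the internal host name, the strict body comparison, and the log-based alerting are illustrative choices rather than the article's exact implementation:

```php
<?php
// Sketch only: handler name, internal host, and notification channel are assumptions.

namespace App\MessageHandler;

use App\Message\ShadowRequest;
use Psr\Log\LoggerInterface;
use Symfony\Component\Messenger\Attribute\AsMessageHandler;
use Symfony\Contracts\HttpClient\HttpClientInterface;

#[AsMessageHandler]
final class ShadowRequestHandler
{
    public function __construct(
        private HttpClientInterface $httpClient,
        private LoggerInterface $logger,
    ) {
    }

    public function __invoke(ShadowRequest $message): void
    {
        // Replay the original call against the new system, in the background.
        $newResponse = $this->httpClient->request(
            $message->method,
            'http://new.internal'.$message->uri,
            ['headers' => $message->headers, 'body' => $message->body]
        );
        $newBody   = $newResponse->getContent(false);
        $newStatus = $newResponse->getStatusCode();

        // Compare the two outputs. A strict string comparison is used here for brevity;
        // a production comparison would likely normalise JSON/XML before comparing.
        if ($newStatus !== $message->legacyStatus || $newBody !== $message->legacyBody) {
            // Logging at error level lets a log-based alert notify the team automatically,
            // without the client ever noticing.
            $this->logger->error('Shadow mismatch between legacy and new system', [
                'uri'           => $message->uri,
                'legacy_status' => $message->legacyStatus,
                'new_status'    => $newStatus,
            ]);
        }
    }
}
```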
After the new code has been fully tested and verified, we release the API. Releasing means that we exclude a specific route from the shadow mechanism. At this point, all incoming requests are routed exclusively to the new system, bypassing the legacy system entirely. The legacy API calls are no longer needed, marking a complete migration of the route. We repeat this process from the beginning for each route.
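One possible way to model this "release" switch is a small allow-list of migrated routes that the shadow mechanism consults before doing anything; the service and route names below are hypothetical:

```php
<?php
// Sketch only: a guard that skips the legacy/shadow path for routes already migrated.

namespace App\Shadow;

use Symfony\Component\HttpFoundation\Request;

final class ShadowPolicy
{
    /** @param list<string> $releasedRoutes Route names already served by the new system only. */
    public function __construct(private array $releasedRoutes = ['api_orders_list'])
    {
    }

    public function shouldShadow(Request $request): bool
    {
        // Once a route is released, the request goes straight to the new system and
        // the legacy call (and the comparison) is skipped entirely.
        return !in_array($request->attributes->get('_route'), $this->releasedRoutes, true);
    }
}
```

The same effect can also be achieved one level higher, by letting HAProxy route a released path straight to the new backend.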
If we dive deeper and look at it from the Symfony perspective, the incoming request is wrapped in a message, routed to a RabbitMQ transport by Symfony Messenger, and consumed by an asynchronous handler that replays the call against the new system and performs the comparison.
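A minimal Messenger configuration for that flow might look as follows, assuming the ShadowRequest message and transport name from the sketches above; the real transport name and DSN will differ:

```php
<?php
// config/packages/messenger.php - sketch only; transport name and DSN env var are assumptions.

use App\Message\ShadowRequest;
use Symfony\Config\FrameworkConfig;

return static function (FrameworkConfig $framework): void {
    $messenger = $framework->messenger();

    // RabbitMQ (AMQP) transport used for the background validation.
    $messenger->transport('shadow_async')
        ->dsn('%env(MESSENGER_TRANSPORT_DSN)%');

    // Route every ShadowRequest to that transport so it is handled asynchronously.
    $messenger->routing(ShadowRequest::class)
        ->senders(['shadow_async']);
};
```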
Limitations
But not everything goes as smoothly as it seems. Every system and approach has its limitations, and ours is no exception. Since both the legacy and new systems rely on a single database, we face a significant challenge: we cannot modify the data during testing. This means that our self-testable API approach is only viable for read-only endpoints or those that don’t involve data modification. For any endpoints that involve creating, updating, or deleting data, this approach isn’t usable, as changes made by one system would affect the other, leading to inaccurate comparisons and potential data integrity issues.
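If the shadow decision is made in the application, this constraint can be enforced with a simple guard that only allows safe (read-only) HTTP methods to be replayed - a sketch, not the article's actual implementation:

```php
<?php
// Sketch only: restrict shadow validation to read-only calls, because both systems share
// one database and replaying a write against the new system would corrupt the comparison.

use Symfony\Component\HttpFoundation\Request;

function isShadowable(Request $request): bool
{
    // GET, HEAD, OPTIONS and TRACE are "safe" methods - they must not modify state,
    // so replaying them against the new system cannot touch the shared database.
    return $request->isMethodSafe();
}
```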
Summary
In this post, we have looked at how to verify that a refactoring effort has gone well. As outlined above, we forward requests to the legacy system and return its output right away, so clients see exactly the behaviour they did before. At the same time, we use Symfony Messenger to replay the calls against the new system in the background and compare the outputs. The approach does have limits: sharing the same database creates challenges, particularly for endpoints that modify data. Despite this, it is very useful, as it helps us catch regressions during the migration, reduces risk, and allows a smoother shift to the new system.