Cloud Integration: Five Patterns That Don't Become Tech Debt
Most cloud integration projects age badly. Here are five patterns that still look good three years later, and a few that don't.

"Cloud integration" is a phrase that has been abused to mean everything from "we put a REST API in front of this thing" to "we rebuilt our entire system on Kubernetes." We use it to mean something narrower: the specific work of connecting cloud services, SaaS products, and on-prem systems so that data and events flow between them reliably. That work has a decade of accumulated patterns, and most of them age badly. Here are the five that still look good after three years in production, and a few that reliably do not.
1. Event-Driven With a Durable Queue, Not Point-to-Point Webhooks
The simplest integration is a webhook: system A fires an HTTP POST at system B when something happens. It works on the whiteboard. It fails in production the first time system B is down for maintenance and system A decides that a 500 response means "drop the event and move on."
The pattern that ages well is event-driven integration with a durable queue in the middle. System A publishes to a queue (SQS, Service Bus, Pub/Sub, EventBridge, Kafka, whichever). System B consumes from the queue at its own pace. When system B is down, the events pile up in the queue and get processed when it comes back. When system B has a bug, you can replay the events from the queue instead of asking system A to resend.
The cost is that you now have a queue to operate, and the programming model is asynchronous instead of request/response. That is a feature, not a bug. Asynchronous systems survive partial failures. Synchronous point-to-point integrations cascade them.
For anything beyond the simplest one-off integration, put a queue in the middle. You will thank yourself the first time the downstream system has a bad afternoon.
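The decoupling above can be sketched in a few lines. This is a minimal illustration, not a real integration: Python's in-memory `queue.Queue` stands in for a durable managed broker (SQS, Pub/Sub, Kafka), and the `publish`/`consume_all` names are our own.

```python
import queue

# Stand-in for a durable queue. queue.Queue is in-memory only --
# a production integration needs a managed, durable broker.
events = queue.Queue()

def publish(event):
    # System A publishes and moves on. It never calls system B directly,
    # so B being down does not become A's problem.
    events.put(event)

def consume_all(handler):
    # System B drains the backlog at its own pace. If it was down while
    # A kept publishing, the events are still here when it comes back.
    processed = []
    while not events.empty():
        event = events.get()
        handler(event)
        processed.append(event)
    return processed

# System B is "down" while A publishes three events -- nothing is lost.
for i in range(3):
    publish({"order_id": i})

handled = consume_all(lambda e: None)
print(len(handled))  # → 3
```

The point of the sketch is the shape, not the library: neither side knows whether the other is up, and the backlog is replayable.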
2. Idempotent Receivers, Because Everything Will Be Delivered Twice
Every queue technology and every retry policy you will ever use offers "at-least-once" delivery. That means messages can and will arrive twice, occasionally three times, and very rarely many more. If your receiver is not idempotent — meaning it is safe to process the same message multiple times — you will eventually double-charge a customer, double-send an email, or double-create an invoice.
Idempotency is not complicated. You give every message a unique ID, and the receiver keeps a short record of IDs it has already processed. If a message comes in with an ID the receiver has seen, it acknowledges the message and moves on without reprocessing. The dedup window can be as short as 24 hours for most systems.
The failure mode of skipping this pattern is invisible until it bites, and it always bites in production with real customer data. Build idempotency in from the first integration, and make it a policy for every new one.
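A receiver with that dedup record can be sketched as follows. This is a hedged illustration: the `handle_once` name and message shape are hypothetical, and the in-memory set stands in for what should be a database table or cache with a TTL.

```python
# In production this is a table or cache keyed by message ID, with a
# TTL matching your dedup window (~24h is enough for most systems).
processed_ids = set()

def handle_once(message):
    """Process a message at most once, keyed by its unique ID."""
    msg_id = message["id"]
    if msg_id in processed_ids:
        # Already handled: acknowledge and skip. The side effect
        # (charge, email, invoice) must not run a second time.
        return "duplicate"
    # ... real side effect goes here ...
    processed_ids.add(msg_id)
    return "processed"

# At-least-once delivery means the same message can arrive twice.
print(handle_once({"id": "msg-42", "amount": 10}))  # → processed
print(handle_once({"id": "msg-42", "amount": 10}))  # → duplicate
```

Note the ordering: record the ID only after the side effect succeeds, so a crash mid-processing results in a retry rather than a silently dropped message.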
3. Schema Contracts That Live Outside the Code
Integrations break when one side changes a field name, changes a data type, or adds a required field without telling the other side. This happens constantly. The fix is to treat the schema as a first-class artifact that lives outside both systems in a version-controlled repository, and to have both sides validate against it.
JSON Schema, Protobuf, Avro, or OpenAPI — pick one based on your ecosystem. The specific technology matters less than the discipline of having the contract in one place, versioned, and enforced on both ends. When someone wants to add a field, they update the schema, both sides publish new versions that support both old and new, and eventually the old version is deprecated.
This sounds bureaucratic for a two-system integration. It is. It is still cheaper than the incident where a silently renamed field caused six hours of bad data to land in your data warehouse before anyone noticed.
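The enforcement side can be sketched like this. In practice you would use real JSON Schema, Avro, or Protobuf tooling against a schema file in its own git repo; the hand-rolled `ORDER_SCHEMA_V2` dict and `validate` function below are stand-ins for illustration only.

```python
# A contract that lives outside both systems. In practice: a schema
# file in a version-controlled repo, enforced in CI on both ends.
ORDER_SCHEMA_V2 = {
    "required": {"order_id": str, "amount_cents": int},
    "optional": {"coupon_code": str},
}

def validate(payload, schema):
    """Reject payloads that break the contract before they enter the system."""
    errors = []
    for field, ftype in schema["required"].items():
        if field not in payload:
            errors.append(f"missing required field: {field}")
        elif not isinstance(payload[field], ftype):
            errors.append(f"wrong type for {field}")
    for field, ftype in schema["optional"].items():
        if field in payload and not isinstance(payload[field], ftype):
            errors.append(f"wrong type for {field}")
    return errors

print(validate({"order_id": "A1", "amount_cents": 1999}, ORDER_SCHEMA_V2))   # → []
print(validate({"order_id": "A1", "amount_cents": "19.99"}, ORDER_SCHEMA_V2))  # → ['wrong type for amount_cents']
```

The second call is exactly the silent rename/retype scenario: with validation at the boundary it fails loudly at ingest, not six hours later in the warehouse.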
4. A Real Dead Letter Queue, Watched by a Real Human
Every integration will have messages that cannot be processed. Malformed data, bugs in the receiver, downstream systems that are permanently gone, records that reference foreign keys that no longer exist. The question is not whether you will have failures. The question is whether you will know about them.
The pattern that works is a dead letter queue (DLQ) for every integration, plus a monitoring rule that alerts when the DLQ is not empty, plus a runbook for investigating DLQ contents, plus a real human who is responsible for running the runbook.
The pattern that does not work is a DLQ that nobody looks at. Every mature organization we have worked with has a story about a DLQ that accumulated months of failed messages before anyone noticed, by which point the underlying data was gone or the remediation was impossible. Do not be that organization. Watch the DLQ like you watch production alerts, because that is what it is.
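The mechanics can be sketched as follows. Managed queues (SQS with a redrive policy, for example) do the dead-lettering for you; the in-memory queues, `MAX_ATTEMPTS` constant, and `alert_if_dlq_nonempty` function here are our own stand-ins to show the shape.

```python
import queue

main_q, dlq = queue.Queue(), queue.Queue()  # stand-ins for managed queues
MAX_ATTEMPTS = 3

def process_with_dlq(handler):
    """Retry each message a bounded number of times, then dead-letter it."""
    while not main_q.empty():
        message = main_q.get()
        try:
            handler(message)
        except Exception:
            message["attempts"] = message.get("attempts", 0) + 1
            if message["attempts"] >= MAX_ATTEMPTS:
                dlq.put(message)      # park it for a human -- never drop it
            else:
                main_q.put(message)   # redeliver and try again

def alert_if_dlq_nonempty():
    # In production this is a monitoring alarm on DLQ depth that pages
    # whoever owns the runbook -- not a print statement.
    if not dlq.empty():
        print(f"ALERT: {dlq.qsize()} dead-lettered message(s) need a human")

def handler(message):
    if message["body"] == "poison":
        raise ValueError("malformed payload")

main_q.put({"body": "ok"})
main_q.put({"body": "poison"})
process_with_dlq(handler)
alert_if_dlq_nonempty()  # prints the alert: one poison message was parked
```

The essential properties are all here: failures are bounded, nothing is silently discarded, and a non-empty DLQ is an alert condition rather than a landfill.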
5. The Boring iPaaS for the 80 Percent
For every integration where you need precise control, there are eight where you just need to move data from Salesforce to HubSpot, or from a SaaS product to a spreadsheet, or from a webhook to a database. For those eight, an integration platform as a service (iPaaS) like Workato, Zapier, Make, Boomi, or Azure Logic Apps is the right answer, not custom code.
The objections we hear are always the same: it is expensive per-task, it is a black box, it does not version well. All of these are partially true. They are also all beaten by the fact that the iPaaS tool works today, does not require engineering time, and will be maintained by the vendor for the next five years whether or not you remember how it works. A custom Python script that nobody has touched since 2021 is not cheaper than Workato. It is just cheaper on the line item where you were looking.
Reserve your custom integration engineering for the 20 percent of integrations where you need the control. Put the other 80 percent on an iPaaS and stop writing glue code that ages badly.
Patterns That Do Not Age Well
In the interest of honesty, here are the patterns we see fail most often and recommend against:
- Direct database-to-database sync via triggers or CDC without a middle layer. Couples the schemas, couples the upgrade cycles, and breaks when either side refactors.
- Scheduled batch file drops over SFTP for integrations that should be real-time. It is 2024. Stop it.
- Embedding business logic in integration plumbing. The integration layer should move data, not decide what the data means. Business rules belong in systems of record, not in mapping transformations.
- Hand-rolled retry loops in application code. Use the queue's retry policy or a framework. Homegrown retry logic is always subtly wrong.
What We'd Actually Do
For a new integration today, we put a managed queue in the middle, write idempotent receivers with unique message IDs, version the schema in git with CI validation, wire up a DLQ with PagerDuty alerts, and use an iPaaS for anything that does not need custom logic. We document the data flow on one page and keep the documentation near the code. None of this is glamorous. All of it saves weeks of incident response per year.
Three Takeaways
- Put a durable queue in the middle of every integration that matters. Synchronous point-to-point is a failure waiting to happen.
- Idempotency and dead letter queues are not optional. They are the difference between a mature integration and a pager.
- An iPaaS is the right answer for most integrations. Custom code is for the 20 percent where you genuinely need it.
Talk with us about your infrastructure
Schedule a consultation with a solutions architect.
Schedule a Consultation