There’s a lot of discussion about the value of consistency and ACID transactions in data management. I thought it would be useful to describe three fully implemented enterprise-class applications, and one example application, that need ACID transactions. These descriptions should make an abstract concept a bit more concrete.
“The Last Dollar Problem”
Airpush is a digital ad tech company. They came to us with an application that matches ad campaigns to advertising opportunities. One of the biggest challenges ad tech companies face is keeping ad spend within ad budget. The Airpush platform manages over 120,000 live real-time bidding (RTB) applications for its advertising partners, each connecting advertisers to hundreds of thousands to millions of mobile subscribers. The platform must deliver thousands of ads a second for thousands of separate campaigns to thousands of mobile subscribers, without overspending any campaign’s budget.
Airpush’s system asks, “Which ad should we show to this user?” The question comes up thousands of times a second, across thousands of campaigns running simultaneously, and the system answers it based on a number of factors, among them a simple question: does this campaign have enough money left to show the ad to the user?
Account balances for each campaign are stored in the database. When an ad is shown, the balance is debited. The problem is there are holes in the decision process where a non-ACID database can give the wrong answer. Consider the two steps below:
1. Check the balance. If there is at least $0.30 to cover the ad, show it.
2. Write out a new balance that is $0.30 less than the previous balance.
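To see the holes concretely, here’s a minimal, self-contained sketch of those two steps in Python, with a plain dict standing in for the database and a deliberate delay to widen the read-to-write gap (names like `try_show_ad` are ours for illustration, not Airpush’s):

```python
import threading
import time

AD_COST = 0.30
balances = {"campaign-1": 0.40}  # toy in-memory stand-in for the database

def try_show_ad(campaign_id):
    # Step (1): read the balance and check that it covers the ad.
    balance = balances[campaign_id]
    if balance < AD_COST:
        return False  # out of budget: skip this opportunity

    time.sleep(0.01)  # widen the read-to-write gap for demonstration

    # Step (2): write back the debited balance. A crash before this
    # line shows the ad for free; a concurrent debit in the same
    # window is silently overwritten (a lost update).
    balances[campaign_id] = balance - AD_COST
    return True

# Two opportunities hit the same $0.40 campaign at once: both pass
# step (1), both ads are shown, but only one debit survives.
threads = [threading.Thread(target=try_show_ad, args=("campaign-1",))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balances["campaign-1"])  # ~0.10: two ads shown, only one paid for
```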
The first problem is handling failure. What if you show the ad, but the system fails before the account is debited? The answer is simple: you lose some money. Maybe not a lot, and it only happens when systems fail, so perhaps it’s not the end of the world if it happens once or twice. But Airpush shows hundreds of thousands of ads a day to hundreds of millions of mobile subscribers.
The second problem is handling concurrency. What if two opportunities are both matched to the same campaign at the same time? There are two ways this can go wrong.
First, debiting $0.30 from a balance without consistency may be hard to do concurrently, if that debit involves a read operation followed by a write operation. Interleave these operations and you could end up showing an ad for free here and there. As above, if you’re showing potentially hundreds of thousands of ads to some subset of hundreds of millions of mobile subscribers connected to Airpush’s platform, $0.30 is no longer a rounding error.
Second, making the debit itself atomic doesn’t solve the whole problem. Assume we have a system that can debit consistently, but might interleave steps (1) and (2) above. If a campaign has a $0.40 balance and is matched to two ads concurrently, both opportunities might pass step (1), and then both debit $0.30 in step (2). You end up with a negative balance.
So you’re forced to choose between two options:
- Leave a buffer of unspent balance in each campaign account so you can’t overspend. Then return the extra money to the ad buyer (who has, thus, missed opportunities to reach consumers).
- Overspend on a campaign and show ads for free when you could have shown an ad from a paying customer. You’ll lose money; your customer won’t.
This is known as “The Last Dollar Problem.” It might not seem like a lot of money, but across hundreds of thousands of campaigns, it’s significant enough to affect the bottom line.
Volt Active Data’s strong ACID consistency lets Airpush use every penny in each campaign with confidence that the accounting is correct, saving money. It doesn’t hurt that Volt Active Data performance means 80% lower infrastructure costs while doing more processing for each ad decision, leading to more effective ad placement and reducing Airpush’s CapEx.
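For the curious, here’s one common way a strongly consistent store closes both holes: fold the check and the debit into a single atomic operation, so there is no gap for a crash or a concurrent request to exploit. This sketch uses Python’s built-in sqlite3 as a stand-in; Volt does this kind of work in server-side stored procedures, but the principle is the same:

```python
import sqlite3

AD_COST = 0.30

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE campaigns (id TEXT PRIMARY KEY, balance REAL)")
conn.execute("INSERT INTO campaigns VALUES ('campaign-1', 0.40)")
conn.commit()

def try_show_ad(campaign_id):
    # The check and the debit are one atomic statement: the row
    # changes only if the balance still covers the ad.
    cur = conn.execute(
        "UPDATE campaigns SET balance = balance - ? "
        "WHERE id = ? AND balance >= ?",
        (AD_COST, campaign_id, AD_COST),
    )
    conn.commit()
    return cur.rowcount == 1  # True: the ad is paid for; False: skip it

print(try_show_ad("campaign-1"))  # True  (0.40 -> 0.10)
print(try_show_ad("campaign-1"))  # False (0.10 < 0.30: no free ads)
```

A production system would store balances as integer cents rather than floats, but the shape of the fix is the same.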
Exactly Once Semantics
MaxCDN is one of the world’s top content delivery networks. It’s the official CDN for the Bootstrap project, and I’ve used it for a number of different websites over the past few years. MaxCDN has a problem similar to Airpush’s.
Every time someone pulls a resource from its network, say an image file, MaxCDN needs to record in a database that the item was pulled. This database is used for internal metrics, reporting, and, crucially, billing customers. The primary job of the database is to maintain counters: how many times has a resource been pulled, broken down by time or by datacenter? Every single access of a resource by a browser triggers updates to a number of counters. Multiply those counters by a tremendous number of resources and a staggering number of pulls, and you have a lot of counters to maintain. But the most challenging part of maintaining this database is that all of the counters need to be updated perfectly.
Updates have to be perfect because this data is turned into a bill for CDN customers. If it’s not perfect, MaxCDN has to choose between possibly overcharging, if counters are updated twice, and possibly undercharging, if counters are not reliably updated. Both choices are bad for business. As with Airpush, the numbers for each event are small, but the aggregates can make or break a business.
That means when I pull the Bootstrap CSS file, all of the different counters need to be precisely incremented by one — not two, not zero. This requires exactly once semantics. Sadly, it’s not really possible to do things exactly once, given the vagaries of the networks we all use today. So we have to fake it.
How do you get effective exactly once behavior without actually doing things exactly once? You’re going to need two things:
- A source that can guarantee at-least-once delivery. It buffers calls to the database until they are confirmed committed, re-sending them whenever the outcome is ambiguous.
- A strongly consistent database that can at least compare-and-swap atomically.
Here’s an example of how to ‘fake’ exactly-once counter updates using strong consistency, delivering effectively exactly-once semantics.
The client generates a GUID (Globally Unique ID) for every counter update. Instead of schema:
counter-id (PK), counter-value
Use schema:
counter-id (PK), counter-value, guid
Every time you increment the counter, you set the GUID. But before you do the increment, you check the GUID and make sure it doesn’t match. If it does, then this is a duplicate increment and you can skip it. Combine this scheme with a careful client that only has one outstanding update per counter at a time, and you get effective exactly once incrementing of counters.
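Here’s a sketch of that scheme, again with sqlite3 standing in for a strongly consistent store (table and function names are ours). Because each increment is idempotent per GUID, the at-least-once source can safely retry any delivery whose outcome is ambiguous:

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters ("
             "counter_id TEXT PRIMARY KEY, counter_value INTEGER, guid TEXT)")
conn.execute("INSERT INTO counters VALUES ('bootstrap.css', 0, NULL)")
conn.commit()

def increment(counter_id, update_guid):
    # Atomic compare-and-swap: increment only if this GUID hasn't
    # already been applied to this counter. A duplicate delivery of
    # the same update matches the stored GUID and is skipped.
    cur = conn.execute(
        "UPDATE counters SET counter_value = counter_value + 1, guid = ? "
        "WHERE counter_id = ? AND (guid IS NULL OR guid != ?)",
        (update_guid, counter_id, update_guid),
    )
    conn.commit()
    return cur.rowcount == 1  # False: duplicate, safely ignored

g = str(uuid.uuid4())           # the client picks one GUID per update
increment("bootstrap.css", g)   # applied: counter is now 1
increment("bootstrap.css", g)   # ambiguous delivery retried: skipped
print(conn.execute(
    "SELECT counter_value FROM counters").fetchone()[0])  # 1, not 2
```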
Of course there’s a catch. Before you started thinking about various failures and the bleak reality of distributed systems, you might have assumed incrementing a counter was a simple little task you could do a bazillion times a second. Now each counter is burdened with a 128-bit GUID, and you need to invoke actual transactional logic, or at least the compare-and-swap features of your data store. Updating counters has gotten expensive.
This is what MaxCDN found when they tried to implement these kinds of counters using Storm in at-least-once mode, backed by HBase to store the counters. The processing was too heavy and too expensive to keep up with their business. Even if they took 1,000 counter updates and sent them to Storm in a batch, there was no way to batch the consistent updates to HBase. Furthermore, the Storm job could fail mid-batch, leaving some HBase counters updated and others not, which complicated fault recovery greatly. They needed an alternative.
Strong transactions to the rescue! If we have a system with serializable ACID semantics across tables, we can take the idea above and make it radically more efficient. Rather than use one GUID and one expensive compare-and-swap per counter, batch up a thousand counter updates together and use one GUID for the whole batch. Now you check the GUID before you update 1,000 counters. Since the operation is atomic (per ACID), there’s no way to fail in the middle of a batch. Either 1,000 counters are updated or zero counters are updated.
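A sketch of the batched variant follows: one GUID covers the whole batch, and a small guard table’s primary-key constraint performs the duplicate check (one reasonable way to store the batch GUID; a column on a batches table works too). Atomicity provides the all-or-nothing behavior:

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (counter_id TEXT PRIMARY KEY, "
             "counter_value INTEGER)")
conn.execute("CREATE TABLE applied_batches (guid TEXT PRIMARY KEY)")
conn.commit()

def apply_batch(batch_guid, counter_ids):
    # One duplicate check guards the whole batch, and atomicity
    # guarantees all-or-nothing: there is no "failed mid-batch" state.
    try:
        with conn:  # a single transaction: commit or roll back together
            # The primary key rejects a batch GUID seen before.
            conn.execute("INSERT INTO applied_batches VALUES (?)",
                         (batch_guid,))
            for cid in counter_ids:
                conn.execute("INSERT OR IGNORE INTO counters VALUES (?, 0)",
                             (cid,))
                conn.execute("UPDATE counters SET counter_value = "
                             "counter_value + 1 WHERE counter_id = ?", (cid,))
        return True
    except sqlite3.IntegrityError:
        return False  # duplicate batch: all 1,000 updates skipped at once

batch = [f"resource-{i}" for i in range(1000)]
g = str(uuid.uuid4())
apply_batch(g, batch)  # applied exactly once
apply_batch(g, batch)  # redelivered batch: rejected wholesale
```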
This approach was hundreds of times more efficient than the Storm and HBase solution in both speed and storage costs. The fact that Volt Active Data is a single, smaller, super-efficient system to manage, compared to the Storm, HBase, ZooKeeper, and Trident alternative, was icing on the cake.
Policy Enforcement
Openet is a Volt Active Data OEM partner that sells billing and authorization software to mobile telecommunications providers. If you make a call on one of their customer’s networks, the authorization for that call goes through Volt Active Data.
When I dial my mom using my iGalaxy Nexus 6s Plus, Openet’s software receives a request. Its job is to respond within 50ms with a yes or a no to authorize or restrict the call, and also to record the call initiation for billing purposes. Once the call is underway, it may receive continuation authorization requests as well. Notably, if the software can’t respond in 50ms, the call goes through, and it probably isn’t billed. Openet spends a lot of time and energy trying to avoid that.
Authorization is a policy enforcement problem. There is a set of rules, and they must all pass to let the call through. There may be different rules that determine how the call is billed. This is accomplished by combining many database queries and updates, with business logic and rule applications, in a single ACID transaction.
Why do you need ACID? To ensure business rules are followed. A hypothetical rule might be, “Don’t let a customer call the same number more than five times in one minute.” Without ACID this is harder to enforce.
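As a sketch, here’s how a rule like that might be enforced when the check and the call record commit together (sqlite3 as a stand-in; Openet’s actual rules engine is far richer):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE calls (caller TEXT, callee TEXT, started_at REAL)")
conn.commit()

def authorize_call(caller, callee):
    now = time.time()
    # The check and the record must be one transaction: without
    # isolation, two concurrent requests could each count four recent
    # calls, both pass, and put a sixth call on the books.
    with conn:
        (recent,) = conn.execute(
            "SELECT COUNT(*) FROM calls "
            "WHERE caller = ? AND callee = ? AND started_at > ?",
            (caller, callee, now - 60),
        ).fetchone()
        if recent >= 5:
            return False  # rule violated: deny authorization
        conn.execute("INSERT INTO calls VALUES (?, ?, ?)",
                     (caller, callee, now))
        return True

for _ in range(5):
    print(authorize_call("me", "mom"))  # True, five times
print(authorize_call("me", "mom"))      # False: sixth call within a minute
```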
Then there are balance-debit problems, such as “don’t let a call go through if the prepaid balance is too low to pay for it.” Note that the cost of the call is based on a number of factors and rules that must be looked up before a determination can be made. This debiting problem is part of a larger recordkeeping problem to support precise billing, similar to the issues MaxCDN and Airpush face. ACID allows for correct billing, which means more money for Openet’s finance department and less work for Openet’s engineering department.
Volt Active Data does all of this work, neatly, in a stored procedure at nearly any scale needed. It also offers the low long-tail latency, extreme predictability, and belt-and-suspenders persistence and fault tolerance that telecommunications customers need.
Bonus: Simpler Development
This one isn’t a customer, but it’s a great example.
I saw a presentation in 2014 from a mobile analytics company. You add its library to your mobile app, and it pings the company’s servers with a (device id, app id) tuple every time a user launches the app. The presentation showed how the company answered the question, “How many unique devices have opened this app today?”
The naive way to implement this is to store the complete set of all (device id, app id) tuples in a database and check for uniqueness every time someone opens an app. The problem with this approach is that holding a billion tuples in a database requires a tremendous amount of space, and checking them for uniqueness around a million times a second can be too slow.
To go faster, this company decided to build a solution around the HyperLogLog probabilistic data structure. HyperLogLog trades a small amount of accuracy for fixed space usage and constant-time updates.
The downside is that HyperLogLog requires an external library to query and update the count for any particular tuple, and it isn’t an integrated part of many databases. In this case, Storm code processed the HyperLogLog data, which was stored in Cassandra. This required a substantial server footprint.
We considered this problem and built an example to show how it might be approached with Volt Active Data. Our Unique Devices example shows the benefits of putting the HyperLogLog code right into a transactional stored procedure. Integration of processing and state makes Volt Active Data dramatically more efficient at solving the problem, running a million operations per second on many fewer servers than the original solution.
Where does ACID come in? Since all of the work is done in an ACID transaction, each time a HyperLogLog structure is updated, the same transaction can compute the current estimated unique-device count for an application and store it in the database as a simple integer column. The client side doesn’t need to understand HyperLogLog; it can query the database like any vanilla relational data.
But ACID really shines when you want to expand beyond this simple case. We see a lot of this with our users: they begin using Volt Active Data for a specific problem, but quickly realize how much more work they can do inside a transaction, and how much that strengthens their applications.
One enhancement to the problem we’re examining is to provide the mobile app developers with an exact count up to a fixed value, say 1,000, then use HyperLogLog to provide a bounded estimate beyond that. The challenge is that without transactions, it’s hard to make that switchover precisely at 1,000. With transactions, however, it’s almost trivial. This variation is available to run as part of our example code, linked above. Because the client isn’t aware of how the service is implemented, no client-side changes are required, a big boost to agility.
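Here’s a sketch of that hybrid in plain Python, with a toy HyperLogLog included so it runs standalone (class and field names are ours; the real example keeps this state in tables and runs `record` as a single stored procedure):

```python
import hashlib

EXACT_LIMIT = 1000

class SimpleHLL:
    """Toy HyperLogLog (no small- or large-range corrections), just
    enough to make this sketch self-contained."""
    P = 8                                   # 2^8 = 256 registers
    M = 1 << P
    ALPHA = 0.7213 / (1 + 1.079 / M)

    def __init__(self):
        self.reg = [0] * self.M

    def add(self, item):
        h = int.from_bytes(hashlib.sha1(str(item).encode()).digest()[:8],
                           "big")
        idx = h >> (64 - self.P)            # top 8 bits pick a register
        rest = h & ((1 << (64 - self.P)) - 1)
        rank = (64 - self.P) - rest.bit_length() + 1  # leading zeros + 1
        self.reg[idx] = max(self.reg[idx], rank)

    def estimate(self):
        return int(self.ALPHA * self.M * self.M /
                   sum(2.0 ** -r for r in self.reg))

class AppUniques:
    """Per-app unique-device state. Inside Volt, this state would live
    in tables and record() would be one transaction, so the mode check,
    the update, and the switchover commit atomically."""
    def __init__(self):
        self.exact = set()       # exact device ids, kept below the limit
        self.hll = SimpleHLL()   # estimator used beyond the limit
        self.count = 0           # the integer the client reads

    def record(self, device_id):
        self.hll.add(device_id)
        if self.exact is not None:
            self.exact.add(device_id)
            if len(self.exact) <= EXACT_LIMIT:
                self.count = len(self.exact)   # exact answer
                return
            self.exact = None    # crossed the limit: switch modes once
        self.count = self.hll.estimate()       # bounded estimate

app = AppUniques()
for d in range(1500):
    app.record(f"device-{d}")
print(app.count)  # an estimate in the neighborhood of 1,500
```

Conveniently, the exact set covers the small cardinalities where a raw HyperLogLog estimate is weakest, and because everything in `record` commits atomically, the switchover at 1,000 happens exactly once.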
While this isn’t a production Volt Active Data application, it’s representative of a pattern. Strongly transactional apps often involve dramatically more straightforward code. There is no code to handle partial failure, due to atomicity. There is no code to handle conflicting concurrent access, due to isolation.
Watch a video walkthrough of the example code.
Conclusion
The six million dollar question is, “Could you build these apps on systems without ACID transactions?” Sometimes no. Sometimes yes, but with more effort.
- Apps that handle billing and need precise stats or live calculations rely on ACID to make that achievable. Weaker isolation levels, such as “Read Committed” instead of “Serializable”, would fall down hard here.
- Apps trying to achieve effective exactly-once processing, such as MaxCDN, need strong consistency, but the batching optimization MaxCDN implemented required full transactions.
- The Unique Devices problem could be implemented without strong consistency, since the answers it provides are already estimates. Even so, the code is simpler with transactions, and features can be enhanced faster because the code is more concise and less fragile.
Sometimes it’s possible to get technically correct results with weaker consistency by using far more data. Rather than update a balance field with a new value, keep a log of every operation against the field and sum them up on read. CRDTs take this a step further. The smaller issue is that these approaches are less efficient for writes and use far more space. The big problem is that reads can be orders of magnitude less efficient: what’s easier, summing a ledger of hundreds of entries, or reading a single value? There are hybrid approaches that cache calculated values backed by logs, and there are places where they make sense, but you end up with log-compaction issues and huge rebuild jobs when anything goes wrong. These approaches are better suited to populating an ancillary dashboard than to driving your core business.
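To make the read-cost asymmetry concrete, here’s a toy contrast (our own illustration, not any particular product’s design):

```python
# Ledger style: writes are simple appends that never conflict,
# but every read replays the whole history: O(n) per read.
ledger = [+10.00, -0.30, -0.30, -0.30]  # grows with every operation
print(sum(ledger))                       # ~9.10, recomputed on every read

# Materialized style: transactions keep a single value correct,
# so a read is one lookup: O(1), no matter how long the history.
balance = 9.10
print(balance)
```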
It’s also important to note that there are plenty of apps out there that get less value from strong consistency. Netflix has famously built a robust and flexible system for running its service, but that service can tolerate inconsistency and its users will forgive the occasional glitch. Netflix has also put hundreds of engineer-years into its solution; your company might not be willing to make that kind of investment. The applications described above, among others in telco, finance, gaming, etc., all require, or strongly benefit from, ACID consistency.
My concluding point: ACID transactions let us build data services that work without requiring every developer to be a distributed systems expert. Even if you are a distributed systems expert, ACID transactions let you focus your expertise on what makes your application special, not on distributed systems, data processing, and state management. Counter-intuitively, there are times when consistency can make your applications faster, thanks to removing all those workarounds.
Continued in Part two: ACID: How to screw it up!