In 2016, we partnered with Research Now to conduct a survey on a subject we take very seriously: data consistency. The results were shocking. While there was wide agreement on what constitutes “critical data” (financial data), only 58% of those surveyed said that having consistent critical data transactions was vital to their business. Most of the rest said consistent transactions were important, but almost half of the sample did not regard consistency as mission critical. Additionally, a sizable contingent (45%) of IT decision makers said they would move forward with design approaches that sacrificed consistency for time to market. Despite this, the problems caused by inconsistent transactions were well known, with most respondents agreeing that inconsistency can cause serious customer satisfaction issues or worse.
Anyone who relies on accurate online financial transactions, including debit and credit card processing, should be appalled. While speedy transactions are nice to have (and Volt Active Data delivers that speed), most customers would rightfully insist that financial institutions make data consistency their first priority (Volt Active Data delivers here too; read on to see how). For them, the potential problems caused by inconsistency would be inconvenient at best and disastrous at worst. If I have $2 in my account and I buy something for $1.50 with my card, any additional purchase over 50¢ should be rejected. But if my account information is not consistent, I could buy something else and end up with a negative balance.
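To make that rule concrete, here is a minimal sketch in Java of the check-and-debit logic; the Account class and its fields are hypothetical, invented purely for illustration. The key point is that the balance check and the debit must execute as one atomic step:

```java
// Minimal sketch of the check-and-debit rule above. The Account class is
// hypothetical; balances are kept in cents to avoid floating-point
// rounding on money.
public class Account {
    private long balanceCents; // e.g. 200 == $2.00

    public Account(long balanceCents) {
        this.balanceCents = balanceCents;
    }

    // The check and the debit must happen as one atomic step. If two
    // purchases each read the $2.00 balance before either subtracts,
    // both pass the check and the account goes negative; that is exactly
    // the inconsistency described above.
    public synchronized boolean debit(long amountCents) {
        if (amountCents > balanceCents) {
            return false; // reject: would overdraw the account
        }
        balanceCents -= amountCents;
        return true;
    }
}
```

Here `synchronized` supplies atomicity within a single process; in a distributed database, the same guarantee has to come from the database itself, which is exactly what transactional consistency is about.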
This is a simple example where inconsistency would be an inconvenience, but let’s look at another scenario. In today’s stock markets, firms need to analyze and execute trades in milliseconds. To maximize profit, you need correct price information quickly. Inconsistent data here means missing out on potentially large profits or making bad trades. And then there’s the regulatory side of the equation: the SEC takes a dim view of firms that don’t report transaction data correctly and promptly.
Given the importance of speed, it is no surprise that many financial firms switched from traditional, costly, hard-to-scale databases to NoSQL databases such as Cassandra.
However, these databases provide weak transaction support and insufficient consistency guarantees. Eventual consistency, as the name suggests, means that consistency is achieved eventually; depending on a number of factors, “eventually” can be anywhere from milliseconds to hours or days. Cassandra may be able to give you answers fast, if you can work around its limited query model, but there is no guarantee you will receive the correct answer in real time. Some of these databases offer tunable consistency, giving the developer a choice between speed and accuracy.
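As an illustration of what “tunable” looks like in practice, here is a sketch using the DataStax Java driver (the 3.x API is assumed; the contact point, keyspace, and schema are invented for illustration). Each statement carries a consistency level that trades speed against the freshness of the answer:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

// Sketch of tunable consistency in Cassandra. Names and schema are
// placeholders, not a real deployment.
public class TunableConsistencyExample {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("bank");

        // ONE: fastest, but a single replica answers and may be stale.
        SimpleStatement fast = new SimpleStatement(
                "SELECT balance FROM accounts WHERE account_id = 42");
        fast.setConsistencyLevel(ConsistencyLevel.ONE);

        // QUORUM: a majority of replicas must agree, trading latency
        // for a stronger (but still not ACID) read guarantee.
        SimpleStatement safer = new SimpleStatement(
                "SELECT balance FROM accounts WHERE account_id = 42");
        safer.setConsistencyLevel(ConsistencyLevel.QUORUM);

        ResultSet rs = session.execute(safer);
        System.out.println(rs.one());

        cluster.close();
    }
}
```

Note that even at QUORUM, each statement is an isolated read or write; nothing here groups a balance check and a debit into one transaction.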
While choice may seem appealing, tunable or eventual consistency still falls short. These databases are not ACID compliant. In brief, an ACID transaction has four requirements: it is all-or-nothing (Atomic), it never leaves the database in an invalid state (Consistent), it runs independently of concurrent transactions (Isolated), and once committed, it stays committed (Durable). For a more in-depth look at what ACID means, take a look at John Hugg’s articles about consistency. Most NoSQL databases fail one or more of these requirements, and thus cannot meet the data consistency needs of many firms.
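For contrast, here is what an atomic transfer looks like through plain JDBC against a relational database; the connection URL and schema are placeholders, but the commit-or-rollback pattern is the essence of Atomicity:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Sketch of an ACID money transfer via JDBC. URL and schema are
// placeholders; the pattern is what matters: both updates commit
// together or neither does.
public class AcidTransfer {
    public static void transfer(String url, long from, long to, long cents)
            throws SQLException {
        try (Connection conn = DriverManager.getConnection(url)) {
            conn.setAutoCommit(false); // group both updates into one transaction
            try (PreparedStatement debit = conn.prepareStatement(
                         "UPDATE accounts SET balance = balance - ? WHERE id = ?");
                 PreparedStatement credit = conn.prepareStatement(
                         "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
                debit.setLong(1, cents);
                debit.setLong(2, from);
                debit.executeUpdate();

                credit.setLong(1, cents);
                credit.setLong(2, to);
                credit.executeUpdate();

                conn.commit(); // Durable once this returns
            } catch (SQLException e) {
                conn.rollback(); // Atomic: partial work is undone
                throw e;
            }
        }
    }
}
```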
Another way to describe the differences and trade-offs in distributed systems is the CAP theorem. The theorem states that when a network partition occurs, a system cannot provide both full consistency and full node availability. Practically, this means that in designing distributed systems, one must at some point favor consistency or availability when both cannot be had. This is unfortunately reduced by most people to “you can either have accuracy or availability”, which is then used to justify sacrificing consistency for node availability. However, as Dr. Michael Stonebraker has argued here and elsewhere, sacrificing consistency does not always improve node availability in the real world. Systems that claim to be ACID often fudge the isolation part, or screw up the implementation; see the Jepsen reports for more on that. Volt Active Data provides serializable isolation, the strictest guarantee available.
One of the many things we are proud of is Volt Active Data’s consistency. Volt Active Data is immediately consistent and very fast. All transactions are ACID: data is available (and correct) as soon as the transaction completes, typically within 50ms, with predictable low latencies. We are so confident in the consistency of Volt Active Data that we hired Kyle Kingsbury, the creator of the Jepsen tests, to put it through them. These tests are regarded as the industry’s toughest, and Volt Active Data passed the most stringent Jepsen tests run to date.
When using Volt Active Data, developers do not have to worry about dealing with inconsistent data; the database takes care of consistency. This means customers and firms don’t have to worry about the last-dollar problem, or regulatory compliance, or whether they’re getting their clients the best possible price on a security trade. Simpler systems like Cassandra often force developers to build more complex applications on top of the database, and application developers, who may not be distributed systems experts, often get this wrong. Because Volt Active Data moves the processing to the data, rather than moving the data to the processing, your business logic can be smaller, and less business logic is more likely to be correct. Switching from NoSQL to Volt Active Data means 100% accurate data and proper recognition of the potential problems caused by inconsistent data.
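As a sketch of what “processing at the data” means, the earlier check-and-debit rule can be written as a Volt Active Data stored procedure. The table and column names are invented for illustration; the point is that the whole run() method executes as a single serializable ACID transaction next to the data:

```java
import org.voltdb.SQLStmt;
import org.voltdb.VoltProcedure;
import org.voltdb.VoltTable;

// Sketch of the check-and-debit rule as a Volt Active Data stored
// procedure. The ACCOUNTS schema is hypothetical. The entire run()
// method is one ACID transaction, so no client-side locking or retry
// logic is needed.
public class Debit extends VoltProcedure {

    public final SQLStmt getBalance = new SQLStmt(
            "SELECT balance FROM accounts WHERE account_id = ?;");
    public final SQLStmt applyDebit = new SQLStmt(
            "UPDATE accounts SET balance = balance - ? WHERE account_id = ?;");

    public VoltTable[] run(long accountId, long amountCents) {
        voltQueueSQL(getBalance, accountId);
        long balance = voltExecuteSQL()[0].asScalarLong();

        if (amountCents > balance) {
            // Aborting rolls back the whole transaction (Atomicity).
            throw new VoltAbortException("insufficient funds");
        }

        voltQueueSQL(applyDebit, amountCents, accountId);
        return voltExecuteSQL(true);
    }
}
```

The business logic is a dozen lines, and the check and the debit can never be separated by a concurrent purchase, because the database serializes the whole procedure.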
Learn more about the trends in financial services when you check out our #finserv infographic.