
AWS Launches Amazon MemoryDB For Redis

For application developers looking to simplify the data access layer, the new offering combines the features of an Amazon ElastiCache Redis cluster and database services into a single product, says Todd Helfter, director of database services at Effectual, a Jersey City, N.J.-based, cloud-first managed and professional services company.

Amazon Web Services’ launch today of Amazon MemoryDB for Redis, a new fully managed database, answers customers’ calls for an easier way to build modern applications with microservices without compromising on extreme performance and durability, according to the cloud provider.

Now generally available, Amazon MemoryDB for Redis is a Redis-compatible, in-memory database designed to deliver ultra-fast performance with sub-millisecond latency.

“With MemoryDB, customers unlock their ability to simplify their architecture with a durable, ultra-fast, in-memory database without the hassle of managing a separate cache, database and underlying infrastructure,” said Yan Leshinsky, vice president of Amazon Redshift, during his introduction of the new offering at the virtual AWS Innovate: Data Edition event today. “Because MemoryDB stores data durably, you can use it as your primary database to achieve low latency and high throughput.”

Until today, AWS customers met their performance needs by running a managed or self-managed Redis cache alongside their primary database, according to Leshinsky.

“This enables micro-second latency, and, at the same time, flexible and friendly Redis data structures allow developers to write traditionally complex code needed for microservices in a simpler way,” Leshinsky said. “But in-memory data stores are traditionally transient in nature, trading off durability for speed, so developers have to use cache alongside a separate durable database of records.”

AWS built Amazon MemoryDB for Redis to deliver that durability, simplify the architecture and reduce the operational overhead of managing two separate data stores, according to Leshinsky.

“With MemoryDB, you can achieve micro-second read latency and single-digit millisecond write latency, high throughput and multi-AZ (availability zone) data durability for modern applications,” he said. “MemoryDB stores your entire data set in memory across one to hundreds of nodes, delivering up to 13 trillion requests per day. That’s 160 million requests per second to meet the performance requirements of microservices-based applications. Plus, MemoryDB is fully compatible with Redis, Stack Overflow’s most loved database for five consecutive years. You can build applications quickly with flexible and friendly Redis data structures and APIs like streams, lists (and) sets to support high application velocity and agility for new and unforeseen business requirements.”
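The "flexible and friendly" structures Leshinsky cites, such as lists and sets, are first-class value types in Redis rather than something developers must model on top of rows or documents. The sketch below is purely illustrative: a tiny in-memory stand-in mimics the shape of a few core Redis commands (LPUSH/LRANGE, SADD/SISMEMBER); real application code would issue the same commands to a MemoryDB cluster endpoint through a Redis client library.

```python
from collections import defaultdict

# Illustration only: an in-memory stand-in for a Redis-compatible store,
# mimicking the shape of a few list and set commands. Not an AWS API.
class MiniRedis:
    def __init__(self):
        self.lists = defaultdict(list)
        self.sets = defaultdict(set)

    def lpush(self, key, value):
        # Prepend to a list, as Redis LPUSH does.
        self.lists[key].insert(0, value)

    def lrange(self, key, start, stop):
        # Inclusive range with -1 meaning "to the end", like Redis LRANGE.
        data = self.lists[key]
        return data[start:] if stop == -1 else data[start:stop + 1]

    def sadd(self, key, member):
        # Add a member to a set, as Redis SADD does.
        self.sets[key].add(member)

    def sismember(self, key, member):
        # Membership test, as Redis SISMEMBER does.
        return member in self.sets[key]

r = MiniRedis()
r.lpush("recent:orders", "order-2")
r.lpush("recent:orders", "order-1")
r.sadd("active:users", "u42")
assert r.lrange("recent:orders", 0, -1) == ["order-1", "order-2"]
assert r.sismember("active:users", "u42")
```

Because the structure lives in the store itself, code that would otherwise maintain a recency-ordered table or a membership index becomes a single command per operation.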

For application developers looking to simplify the data access layer, the new offering combines the features of an Amazon ElastiCache Redis cluster and database services into a single product, said Todd Helfter, director of database services at Effectual, a Jersey City, N.J.-based, cloud-first managed and professional services company and AWS Premier Consulting Partner.

“The impact here is a reduction in application code complexity and an increase in speed and reliability for both products,” Helfter said. “From an infrastructure management perspective, the combination of these services into a single product simplifies application coding and deployment.”
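The reduction in application code complexity Helfter describes shows up clearly when the classic cache-aside pattern is set next to a single durable store. A minimal Python sketch, with plain dicts standing in for the Redis cache and the primary database (the function names are hypothetical, not an AWS API):

```python
# Two-store pattern: the application checks the cache, falls back to the
# database on a miss, and backfills the cache. Writes must also invalidate
# or update the cache, which is where much of the complexity lives.
def read_with_cache_aside(key, cache, database):
    value = cache.get(key)
    if value is None:                  # cache miss
        value = database.get(key)      # slower, durable read
        if value is not None:
            cache[key] = value         # backfill for the next read
    return value

# Single durable in-memory store: no miss handling, no invalidation logic.
def read_with_single_store(key, store):
    return store.get(key)

# Usage: the same lookup through both paths.
db = {"user:42": "Ada"}
cache = {}
assert read_with_cache_aside("user:42", cache, db) == "Ada"  # miss, then backfill
assert cache["user:42"] == "Ada"
assert read_with_single_store("user:42", db) == "Ada"
```

The single-store path also removes an entire class of bugs, since there is no window in which the cache and the database can disagree.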

Amazon MemoryDB for Redis can be used to build versatile, flexible applications for the web, mobile, retail, gaming, banking, finance, media, entertainment and other industries, Leshinsky said.

For AWS customers and partners supporting them with managed data solutions, it will lower the complexity required for applications that need a persistent database, but don’t have a lot of relations or data structure to worry about, according to Ben Hundley, a senior cloud solutions architect at Los Angeles-based Mission Cloud Services, a managed cloud services provider and AWS Premier Consulting Partner. The proprietary layer that AWS has added to Redis -- which typically is used as a cache or a messaging queue -- will allow Redis to work as a persistent, primary datastore as well, he said.

“So, instead of running MongoDB, for example, as your primary database and caching in Redis, you can use Redis with MemoryDB for the whole thing,” Hundley said. “Redis is a simple key-value store, so the major difference between it and a traditional database -- SQL or NoSQL -- is that there isn’t much data modeling, schema or structure to documents. There actually aren’t even documents in Redis, just keys and values. But in those values, you could store unstructured data, deserializing inside your app. I think Amazon MemoryDB for Redis will be a welcome asset for AWS users and AWS partners supporting those users’ data and analytics projects.”
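Hundley's point about deserializing inside the app can be shown in miniature: the store holds only flat keys and values, so any document structure is serialized on the way in and reconstructed on the way out. In this hedged sketch a plain dict stands in for the key-value store; in practice these would be SET/GET calls against a MemoryDB endpoint, and the key pattern and helper names are illustrative assumptions.

```python
import json

store = {}  # stand-in for the key-value store; no schema, no documents

def save_profile(user_id, profile):
    # Serialize the unstructured document into a flat string value.
    store[f"user:{user_id}"] = json.dumps(profile)

def load_profile(user_id):
    # Deserialize inside the application; the store knows nothing of the shape.
    raw = store.get(f"user:{user_id}")
    return json.loads(raw) if raw is not None else None

save_profile(7, {"name": "Grace", "plan": "pro", "tags": ["beta", "emea"]})
profile = load_profile(7)
assert profile["tags"] == ["beta", "emea"]
```

This is the trade Hundley outlines: the database imposes no data model, so the application owns the schema entirely, which suits workloads with little relational structure.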

Amazon MemoryDB for Redis currently is generally available in AWS’ U.S. East (northern Virginia), Asia Pacific (Mumbai, India), Europe (Ireland) and South America (Sao Paulo, Brazil) cloud regions, and will be available in other regions in the coming months.

The new offering could impact how Philadelphia’s Anexinet builds application services for its customers, according to David Lavin, an enterprise architect for the digital business solutions provider and AWS Advanced Consulting Partner. Often, the company will consider building services with a Redis cache that’s used to supplement its relational or NoSQL database or databases and increase service performance, he said.

“For each application, we’ll need to decide whether the Redis cache approach makes sense given the extra overhead of managing the separate platform,” Lavin said. “And it’s not uncommon to do an MVP release without the cache, only to have to go back and enable it for future releases. Having an integrated capability like this could allow us to enable caching from the get-go, be more consistent in our implementations and get the performance benefits of the Redis cache without having to manage separate platform infrastructures.”
