In today’s real-time, hyperconnected world, data is a core driver behind most digital strategies. User data is constantly gathered and shared across technology backends and platforms, as is the system performance data that informs engineers. Data is pivotal for everything from training machine learning models and refining user experiences to supporting new data-centric services. But although data is clearly the backbone of many enterprises, there is no clear winner when it comes to which database to use.
Large organizations often find themselves supporting different database styles, from relational databases to NoSQL and distributed SQL. And as organizations continue to adopt multiple databases, safely managing the processing and storage of data becomes more challenging to enact at scale.
I recently met with Jakub Lamik, CEO of Redgate Software, to understand the role that DevOps, and database DevOps in particular, is playing in advancing data-driven digital transformations. According to Lamik, applying DevOps practices to database management can improve performance, increase productivity and help organizations stay compliant. Below, we’ll consider the state of today’s data-driven digital transformations and highlight ways to improve how DevOps is applied to database management.
State of Data-Driven Digital Transformations
The 2022 StackOverflow Developer Survey found the most popular databases to be MySQL, PostgreSQL, SQLite, MongoDB and Microsoft SQL Server. But more often than not, businesses deploy more than one of these in production. Redgate’s 2021 State of Database DevOps report found that 70% of businesses had more than one DBMS in use, and 48% had teams working on three or more different databases.
Engineers operate in increasingly hybrid and distributed technology environments. There are good reasons for this: 90% of organizations said multi-cloud is helping them achieve their business goals. Lamik also notes a diverse array of software technologies in practice today, from multiple clouds to decoupled microservices architectures and database types. But the desire to work with diverse toolsets causes fragmentation, which could hinder software delivery efforts. For example, organizations want to deploy the same framework and similar methodologies across different databases, said Lamik, and have it be database- and cloud-agnostic.
Tips to Improve Database DevOps Operations
To increase the pace of software delivery, infrastructure automation is often needed. And like other roles, database professionals are increasingly trying to automate operations and free themselves from mundane tasks. So, what are some ways to streamline database DevOps? First off, Lamik encourages adopting a common approach across multi-database environments, using standard toolsets that are flexible and portable. “The organizations that standardize deployments across different database environments can increase the frequency and reliability of deployments,” he said.
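As a rough illustration of what a standardized, database-agnostic deployment step might look like, the sketch below applies versioned schema migrations the same way regardless of backend. The migration scripts and the `schema_version` table are hypothetical, and SQLite is used purely for the demo; real tools (Flyway, Liquibase, Redgate’s own products) handle this far more robustly.

```python
import sqlite3

# Hypothetical versioned migrations; in practice these would live in
# version-controlled files such as migrations/001_create_users.sql.
MIGRATIONS = {
    1: "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)",
    2: "ALTER TABLE users ADD COLUMN created_at TEXT",
}

def migrate(conn):
    """Apply any pending migrations, recording each version as it succeeds."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_version (version INTEGER PRIMARY KEY)"
    )
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    current = row[0] or 0
    for version in sorted(MIGRATIONS):
        if version > current:
            conn.execute(MIGRATIONS[version])
            conn.execute(
                "INSERT INTO schema_version (version) VALUES (?)", (version,)
            )
            conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)  # brings an empty database up to the latest version
```

Because the runner only depends on a version table and a list of scripts, the same deployment logic can be pointed at different database engines, which is the kind of portability Lamik describes.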
Catching and resolving errors quickly is also becoming increasingly essential to meet objectives. And although DevOps brings many advantages, the CI/CD pipeline has gaps. “Observability supported by intelligent automation is key to closing those security gaps,” said Lamik.
Thus, he encouraged additional oversight and tracking of changes to ensure stability throughout the database deployment pipeline. Also helpful is encouraging collaboration with security professionals (also known as DevSecOps) and integrating security principles into the DevOps workflow to ensure the process is secure by design.
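One way security principles can be built into a database pipeline is an automated pre-deployment review gate. The sketch below is a minimal, hypothetical example: the pattern list and `review_migration` helper are illustrative, not a real tool’s API, and a production policy would be far richer.

```python
import re

# Hypothetical policy list: statements that should trigger a manual review
# before a migration is allowed into the deployment pipeline.
RISKY_PATTERNS = [
    r"\bDROP\s+TABLE\b",
    r"\bTRUNCATE\b",
    r"\bGRANT\s+ALL\b",
]

def review_migration(sql: str) -> list:
    """Return the risky patterns found, so the pipeline can fail fast."""
    findings = []
    for pattern in RISKY_PATTERNS:
        if re.search(pattern, sql, re.IGNORECASE):
            findings.append(pattern)
    return findings

# A CI step would reject this script and route it to a human reviewer.
issues = review_migration("TRUNCATE audit_log; GRANT ALL ON db.* TO 'app';")
```

Gating changes this way makes the pipeline secure by design rather than relying on reviewers to spot destructive statements by eye.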
Another element of database security involves the proper observability and classification of sensitive data. As Lamik describes, organizations must be careful to distinguish between data used for production purposes and data used for development purposes. Ensuring sensitive data is cataloged correctly is imperative to prevent production data from being breached or accidentally exposed to the public.
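To make the classification idea concrete, here is a minimal sketch that flags columns whose names suggest sensitive content so they can be masked before production data is copied into development. The name hints and the `classify_columns` helper are assumptions for illustration; real data cataloging combines naming heuristics with content sampling and manual tagging.

```python
# Hypothetical name-based heuristics for spotting sensitive columns.
SENSITIVE_HINTS = ("email", "ssn", "phone", "dob", "password", "card")

def classify_columns(columns):
    """Split a table's columns into those needing masking and the rest."""
    sensitive, safe = [], []
    for name in columns:
        if any(hint in name.lower() for hint in SENSITIVE_HINTS):
            sensitive.append(name)
        else:
            safe.append(name)
    return sensitive, safe

sensitive, safe = classify_columns(["id", "Email", "signup_date", "card_number"])
```

Columns flagged as sensitive would then be masked or synthesized before the dataset ever leaves production.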
In summary, applying DevOps ideals to database management could be a boon for development efforts. For those who haven’t yet considered it, a DevOps transformation will pay dividends, says Lamik. Furthermore, a common management layer could help wrangle multiple database types and ease the adoption of newer platforms such as Snowflake and Postgres. Yet, when acquiring new database toolsets, Lamik admitted it may not be a quick transition. He recommends a land-and-expand strategy rather than a top-down approach (which typically isn’t as effective in empowering individual developers).
Commenting on the industry’s recent move toward platform engineering, Lamik sees it as the next evolution of DevOps. Whereas DevOps brought helpful toolsets to streamline deployment and digitization, platform engineering is taking these practices and productizing them to be reusable at scale across teams and departments, he explained.
Also, in the new cloud-native stack, containers are designed to be ephemeral, causing engineers to craft workaround solutions to enable stateful storage on platforms like Kubernetes. Regardless of deployment, data must have proper backup and recovery controls as well. In this new cloud-native paradigm, the role of database professionals is changing, explained Lamik, and “APIs are becoming the focal point of communication.”
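The backup and recovery controls mentioned above can also be verified automatically: a pipeline step can restore a backup and spot-check it rather than trusting that the backup job ran. The sketch below is illustrative only, using Python’s standard-library SQLite online-backup API as a stand-in for whatever engine and backup tooling a team actually runs.

```python
import sqlite3

def backup_and_verify(source: sqlite3.Connection) -> bool:
    """Copy the database to a fresh connection and spot-check the copy."""
    replica = sqlite3.connect(":memory:")
    source.backup(replica)  # stdlib online-backup API
    original = source.execute("SELECT COUNT(*) FROM events").fetchone()[0]
    restored = replica.execute("SELECT COUNT(*) FROM events").fetchone()[0]
    return original == restored

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
db.executemany("INSERT INTO events (payload) VALUES (?)", [("a",), ("b",)])
db.commit()
ok = backup_and_verify(db)
```

Running a restore check like this on every deployment turns backup integrity from an assumption into a tested property, whether the data lives on a VM or in a Kubernetes persistent volume.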