  • DevOps
September 22, 2021
Infrastructure as Code and Continuous Delivery Makes Database Development Easy

Infrastructure as Code (IaC) and Continuous Integration/Continuous Delivery (CI/CD) are becoming part of the standard pattern for delivering application code into production environments. Unfortunately, this methodology is rarely applied when deploying relational database models; teams often favor more traditional, manual methods that are thought to be safer. That is a missed opportunity, because Infrastructure as Code and Continuous Delivery make database development easier for the developer and less risky for the business.

Implementing IaC combined with a CI/CD pipeline usually follows a progression from manual, to semi-automated with infrequent releases, to automated with more frequent releases. When CI/CD is applied to database models, teams commonly make the transition from manual to semi-automated, but then the process gets stuck at that stage and never reaches frequent releases. The cause of this stall is slightly different for each company, but it is generally fear of losing or corrupting data. Every database developer I know has a war story about a production database mistake they made that caused data loss or corruption.

Usually these mistakes have a semi-happy ending: there is some downtime, and the database backup is used to do a full restore. But that is not always the case, and sometimes data is gone forever.

In the postmortems of database incidents, the common suggestion is to go slower in order to better understand changes. On the surface this is good advice, but what “slowing down” means to most companies is releasing less frequently. Unfortunately, releasing less often is not going slower; it just feels like it is. If the same number of developers are making roughly the same number of changes, and those changes are simply released less often but all at once, that is better characterized as doing nothing, broken up by short periods of going very fast. This can create a negative feedback loop: large batches of changes are made to a database model all at once, causing errors, which leads to releasing even less often, which leads to larger batches and more errors.

Here are changes you can make to improve your database deployments, and why they help:

  • What: Make small changes all the way to production.
    Why: Small changes are less complex: they are easier to understand, faster to code review, and if a mistake is made, its impact should be smaller and therefore easier to fix. These changes need to go all the way to production so that all of our databases (development, test, and production) are in the same state, and so that the time between when a change is made and when it is deployed is minimized; if there is an issue, the developer is still operating in the same context.
    Example: If you are adding a column and making it not null, do it in multiple steps rather than a single step. First add the column, release; then populate it, release; then make it not null, release (see the sketch after this list).
  • What: Do not combine multiple changes into a single release; each change should get its own release.
    Why: You want to clearly understand what change each release is making, and rolling up multiple releases into a single deployment can make changes too complex to easily understand. Dependency and order-of-operations errors can also be introduced when rolling up releases.
  • What: Deploy all changes through your entire pipeline (i.e., through the development and testing environments).
    Why: Development and testing environments exist to help catch mistakes before they reach production, but we have to make sure we use them. These environments should be combined with manual or automated testing that checks not just for syntax and dependency errors, but for logic errors as well.
  • What: Never make manual changes to your databases.
    Why: Manual changes made directly in production are one of the easiest ways to make large mistakes quickly. Any manual change should be checked in to source control and deployed via a release, which allows it to be code reviewed, tested, and validated before it enters production. Manual changes, even critical ones, then go through the same validation process as normal code; this is the best way to solve problems quickly rather than making them worse. Checking in manual changes and fixes has an additional benefit: other developers can later see, and copy, how past problems were resolved (see the second sketch below).
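The post does not prescribe a specific migration tool, but to make the first item concrete, here is a minimal sketch of the three-step "add a not-null column" example written as Alembic migrations in Python. The users table, last_login column, and the backfill source are hypothetical, and in practice each step would live in its own migration file and get its own release:

```python
# Sketch only: three separate Alembic migrations, each checked in and
# released on its own. Table and column names below are hypothetical.
from alembic import op
import sqlalchemy as sa

# --- Migration 1: add the column as nullable, then release ---
def upgrade_step_1():
    op.add_column("users", sa.Column("last_login", sa.DateTime(), nullable=True))

# --- Migration 2: backfill existing rows, then release ---
def upgrade_step_2():
    op.execute("UPDATE users SET last_login = created_at WHERE last_login IS NULL")

# --- Migration 3: tighten the constraint now that every row has a value ---
def upgrade_step_3():
    op.alter_column("users", "last_login", existing_type=sa.DateTime(), nullable=False)
```

Each step is small enough to review at a glance, and if the backfill in step 2 misbehaves, steps 1 and 3 are unaffected, so the fix stays correspondingly small.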

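The same tooling covers the "no manual changes" rule: even an urgent one-off data fix can be written as a migration and released, rather than typed into a production console. A sketch under the same assumptions (Alembic, with a hypothetical orders table and a misspelled status value):

```python
"""One-off data fix: normalize a misspelled order status.

Checked in to source control so it is reviewed, tested in the lower
environments, and visible to future developers as a worked example.
"""
from alembic import op

def upgrade():
    # Hypothetical fix: some rows were written with a misspelled status.
    op.execute("UPDATE orders SET status = 'cancelled' WHERE status = 'canceled'")

def downgrade():
    # Data fixes are often not cleanly reversible; document that here
    # rather than pretending the downgrade restores the old rows.
    pass
```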
The steps above may look like they introduce a lot of overhead, but realistically they are just a reversal of most people’s current practices: rather than doing a lot of work less often, the pattern above is to do a little work more often. Mistakes will always happen, and this applies to databases just as much as to any other software. The best approach you can take is the path that limits impact and increases the chance of a safe fix. The current standard for deploying databases does not do this.

Addendum: As stated above, mistakes will always happen. The pattern above does not remove all mistakes; nothing can. It is therefore important that a disaster recovery strategy exists for when large mistakes happen. If you do not have one for your databases, putting one in place should be your highest priority, above all else.