
How We Transformed Quality Assurance at Lifeline Biosciences with AI and Automation


If there’s one thing I’ve learned as a Product Manager at Lifeline Biosciences, it’s that innovation isn’t just about creating new technology—it’s about ensuring the systems behind that technology are as seamless and reliable as possible. Our Laboratory Information System (LIS) plays a critical role in processing diagnostic data, and when inefficiencies started to slow us down, we knew something had to change.

That’s where our journey to completely reimagine quality assurance (QA) and site reliability began. It wasn’t just about solving problems—it was about building a system that could handle the complexities of modern healthcare technology while delivering precision and speed. This blog post is my account of how we made it happen, the tools we used, and the dramatic results that followed.


The Problem: Why Change Was Necessary

Our LIS had grown over the years into a sophisticated platform that connected lab workflows, patient data, and healthcare providers. But as the system became more complex, cracks began to show:

  1. Critical defects were slipping through our testing processes and reaching production. This wasn’t just a technical issue—it delayed lab results, which in turn could delay treatment for patients.
  2. Testing was inconsistent. We relied heavily on manual QA, which was not only slow but also prone to human error. High-risk areas like database logic and service layers were particularly vulnerable.
  3. We couldn’t see what was happening in real time. Without proper observability tools, we were always reacting to problems instead of preventing them.
  4. Testing came too late. Most of it happened at the end of the development cycle, leading to expensive rework and slower releases.

We needed a solution that was automated, proactive, and scalable. And we needed it fast.


The Solution: Automating and Optimizing with AI

The heart of our solution was an AI-powered framework that tackled testing and reliability head-on. Here’s how we broke it down:


1. Supercharging Unit Test Coverage

Unit tests are the backbone of a robust system—they ensure individual pieces of the application work exactly as they should. But writing unit tests manually for a large system is time-consuming and often gets deprioritized. That’s where automation stepped in.

  • We used JUnit and PyTest to create unit tests across our service layers and database logic. These tools are staples in testing—they’re fast, flexible, and integrate seamlessly with development workflows. But they require effort to set up properly, and scaling test coverage manually would have taken months.

Enter Diffblue Cover, a generative AI tool that writes unit tests for you. This was a game-changer. By analyzing our codebase, Diffblue automatically created high-quality tests for areas we’d historically struggled to cover, like complex database queries and service interactions.

Diffblue generated tests for a particularly tricky module responsible for validating lab sample data. Previously, we relied on manual testing here, and bugs were common. With automated tests in place, coverage increased overnight, and bugs dropped dramatically.
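To make this concrete, here is a hand-written sketch of the kind of unit test that gets generated for a validation module like ours. The `validate_sample` function and its rules below are hypothetical stand-ins for illustration, not our actual LIS code:

```python
# Illustrative sketch only: validate_sample and its rules are hypothetical
# stand-ins for the real lab sample validation module.

def validate_sample(sample: dict) -> bool:
    """Return True when a lab sample record passes basic integrity checks."""
    required = {"sample_id", "patient_id", "collected_at"}
    if not required.issubset(sample):
        return False
    return bool(sample["sample_id"] and sample["patient_id"])

def test_valid_sample_passes():
    sample = {
        "sample_id": "S-1001",
        "patient_id": "P-42",
        "collected_at": "2024-01-15T09:30:00Z",
    }
    assert validate_sample(sample)

def test_missing_field_fails():
    # A record without patient_id and collected_at must be rejected.
    assert not validate_sample({"sample_id": "S-1001"})
```

The value of generated tests is less any single case and more the sheer breadth: edge cases like the missing-field test above are exactly what manual testing tended to skip.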

This combination increased our unit test coverage by 45% in just a few weeks, and defects in critical modules plummeted.


2. Automating Functional, API, and Regression Testing

Manual testing was slowing us down, so we turned to automation tools to take over repetitive tasks. Here’s what we used:

  • Selenium: Selenium allowed us to automate functional tests for the LIS user interface. This tool simulates user actions, like clicking buttons and filling out forms, ensuring that everything works as expected. Before Selenium, these tests took hours of manual effort for every release. Now, they run automatically in minutes.
  • Postman: Our LIS heavily relies on APIs to exchange data with external systems, so API testing was critical. Postman’s scripting capabilities made it easy to create automated tests for every endpoint, validating everything from data integrity to response times.

  • Applitools Eyes: Visual regression testing was another area where we saw huge gains. Applitools uses AI to detect even subtle changes in the UI, like misplaced elements or color mismatches. This tool helped us catch visual bugs that manual testers would have overlooked.

During one release, Applitools flagged a subtle alignment issue in our results dashboard—a bug that would have gone unnoticed until users started complaining.

By automating these layers, we cut our manual QA time by 70%. Releases that used to take days to test were now ready in hours.
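The Postman checks described above mostly boil down to asserting on status codes, response times, and payload fields. Here is a minimal Python sketch of that kind of assertion logic; the field names and latency budget are illustrative assumptions, not our actual API contract:

```python
# Illustrative sketch of API response checks like those we scripted in Postman.
# The field names and the 500 ms latency budget are hypothetical examples.

def check_api_response(status_code: int, elapsed_ms: float, body: dict) -> list:
    """Return a list of failed checks; an empty list means the response passed."""
    failures = []
    if status_code != 200:
        failures.append(f"unexpected status {status_code}")
    if elapsed_ms > 500:  # example response-time budget
        failures.append(f"slow response: {elapsed_ms:.0f} ms")
    for field in ("result_id", "status"):  # example required payload fields
        if field not in body:
            failures.append(f"missing field: {field}")
    return failures

# Usage: a healthy response produces no failures.
print(check_api_response(200, 120.0, {"result_id": "R-1", "status": "final"}))
```

Collecting failures into a list, rather than raising on the first problem, mirrors how test reports surface every broken check in a single run.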


3. Real-Time Observability with Monitoring Tools

Testing alone isn’t enough—you need to know your system is running smoothly after deployment. To achieve this, we implemented a robust observability stack:

  • Splunk: Splunk handled log analysis, allowing us to trace errors back to their root causes. This was invaluable for troubleshooting complex issues.

  • Prometheus and Grafana: Prometheus collects metrics from all parts of the LIS, like server uptime, response times, and error rates. Grafana turns these metrics into visual dashboards, giving us a real-time view of system health.

During a peak usage period, Grafana showed a spike in API latency. Prometheus alerts triggered an investigation, and we identified an under-provisioned database instance. Fixing it in real time prevented a system slowdown.
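The alert that caught this latency spike amounts to a percentile threshold check. Here is a simplified Python re-expression of that logic; the 250 ms threshold is an illustrative assumption, not our production alert rule:

```python
# Simplified sketch of a p95 latency alert, in the spirit of the Prometheus
# rule that caught our API spike. The 250 ms threshold is an example value.
import math

def p95(latencies_ms: list) -> float:
    """Nearest-rank 95th percentile of a list of latency samples."""
    ordered = sorted(latencies_ms)
    rank = max(0, math.ceil(0.95 * len(ordered)) - 1)  # 0-indexed nearest rank
    return ordered[rank]

def latency_alert(latencies_ms: list, threshold_ms: float = 250.0) -> bool:
    """Fire when p95 latency breaches the threshold (never fires on no data)."""
    return bool(latencies_ms) and p95(latencies_ms) > threshold_ms
```

Alerting on a high percentile rather than the mean is the key design choice: a mean can look healthy while 5% of requests, like the ones hitting our under-provisioned database instance, are badly degraded.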

These tools didn’t just improve reliability—they gave us the confidence to move fast. System uptime improved to 99.98%, and issues that used to take hours to diagnose were resolved in minutes.


4. Shifting Testing Left

One of the biggest lessons we learned was that quality needs to be built in from the start. To make this happen, we shifted testing earlier in the Software Development Life Cycle (SDLC):

  • During planning, we worked with stakeholders to define clear acceptance criteria and testable requirements.
  • During development, we used SonarQube to enforce code quality standards, catching issues before they could snowball.
  • With Jenkins, every code commit triggered automated tests, giving developers instant feedback.
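A shift-left pipeline like this ultimately hinges on an automated pass/fail gate for every commit. The sketch below shows such a quality gate in Python; the thresholds are illustrative examples in the spirit of the SonarQube rules we enforced, not our actual gate configuration:

```python
# Illustrative commit quality gate, in the spirit of our SonarQube rules.
# The 80% coverage, zero-critical-issue, and 3% duplication thresholds
# are example values, not our real configuration.

def quality_gate(coverage_pct: float, new_critical_issues: int,
                 duplicated_lines_pct: float) -> bool:
    """Pass a commit only when it meets every quality threshold."""
    return (coverage_pct >= 80.0
            and new_critical_issues == 0
            and duplicated_lines_pct <= 3.0)
```

Because the gate is just a function of measurable metrics, Jenkins can evaluate it on every commit and give developers an unambiguous answer in minutes instead of days.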

This approach not only reduced defects but also accelerated our release cycles by 40%.


The Results: A True Transformation

The impact of this project went beyond our expectations:

  1. Defects in production dropped by 60%. Automated tests and real-time monitoring meant we were catching and fixing issues before they could affect users.
  2. Testing time was reduced by 70%. Automation freed up our QA team to focus on strategic tasks instead of repetitive ones.
  3. Release cycles were 40% faster. Features that used to take weeks to deploy were now live in days.
  4. System reliability soared to 99.98%. With better observability and proactive monitoring, downtime became almost nonexistent.

What I Learned

Looking back, this project taught me the incredible power of combining the right tools with the right processes. Generative AI tools like Diffblue showed us how much time and effort can be saved with automation, while observability tools like Grafana and Splunk gave us the confidence to move fast without breaking things.

But most importantly, it reinforced the value of collaboration. By working closely with engineers, QA teams, and stakeholders, we built a solution that wasn’t just technically sound—it worked for everyone involved.

This wasn’t just a technical overhaul—it was a transformation of how we think about quality and reliability. At Lifeline Biosciences, we’ve always been about improving lives through technology, and now we’re better equipped than ever to deliver on that promise.

Stay Connected, Stay Informed!

@NabbilKhan

Subscribe Now