January 12, 2022

Operationalizing Octant’s Platform

From platform prototype to production

There’s a magical moment in practicing a new musical piece when, after a prolonged struggle, something suddenly clicks, and your fingers and muscles finally understand how they need to move to execute that particularly tricky chord progression. You feel exhilarated: your hard work paid off! But that thrill quickly dissipates the next time you try and don’t quite get it right. You’ve proven to yourself that you can do it. The challenge now is to do it every time.

This feeling is probably all too familiar to any person or team working to bring a new idea from proof-of-concept to production-level performance. Getting a new technology to work once is hard enough and worthy of celebration. Translating that technology into a platform that helps power an entire company at scale, week after week, becomes its own battle and presents a new and often harder set of problems to solve.

Here at Octant, we’ve developed a state-of-the-art multiplexed assay platform (MAP) that uses programmable biology and chemistry to build precision drugs. Our platform combines synthetic biology, high-throughput chemistry, and machine learning to decode and modulate how chemicals act on key disease mechanisms. By the beginning of 2021, we proved that our platform prototype worked, albeit not always smoothly. It experienced data quality issues and process deviations, and occasionally workflows would fail catastrophically. But amidst the chaos, there were also experimental gems where the results were clear and the conclusions were stunning. We knew that Octant’s drug discovery proposition was possible; we just had to figure out how to scale the technology into a robust and reproducible machine.

From left to right: some of the MAP team Octonauts, Naomi, Jeff, and Scott.
Our two-pronged strategy

Over this past year, we worked to turn our multiplexed assay into a predictable and routine process. Our strategy tackled the problem from two ends: (1) improving our ability to monitor and troubleshoot the workflow and (2) putting systems in place to support reproducible execution of the protocol. These two pillars have been critical to raising our platform to the next level.

First, we put in place specialized process controls that report on the performance of our workflow at several critical steps. These controls are synthetic DNA and RNA molecules spiked into our samples in precise quantities at key intermediate reactions. These “spike-ins” then undergo the same downstream processes as our actual samples. At the end of the multiplexed assay, we sequence these controls, and their readouts provide key insights into how the process performed and which steps, if any, experienced problems. These control metrics have enabled rapid detection, investigation, and resolution of process issues.

The performance of several synthetic RNA spike-ins provides insight into our multiplexed assay. In this example, four plates contain a substantial number of wells with low abundance of these molecules, indicating a process issue at one of our liquid transfer steps.
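To make the spike-in readout concrete, here is a minimal sketch of how one might compute per-well spike-in recovery and flag suspect wells. The file name, column names, and threshold are all hypothetical illustrations, not Octant’s actual pipeline.

```python
import pandas as pd

# Hypothetical per-well sequencing counts for each synthetic spike-in.
# Columns: plate, well, spike_id, observed_count, expected_count
counts = pd.read_csv("spike_in_counts.csv")

# Recovery: observed reads relative to the known input quantity of each spike-in.
counts["recovery"] = counts["observed_count"] / counts["expected_count"]

# Flag wells where any spike-in falls below an illustrative cutoff, which
# could indicate a problem (e.g., a failed liquid transfer) at that step.
LOW_RECOVERY = 0.2  # assumed threshold for this sketch
flagged = (
    counts[counts["recovery"] < LOW_RECOVERY]
    .groupby(["plate", "well"])["spike_id"]
    .apply(list)
)
print(f"{len(flagged)} wells show low spike-in recovery")
print(flagged.head())
```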

The second pillar focused on operational best practices for reproducibly executing our workflows. This involved several efforts, including:

  • Locking down our protocol and implementing change control
  • Developing an equipment maintenance and testing plan
  • Tracking process issues in a systematized scheme

In a twist on the old adage about insanity, it would be crazy to expect that variable execution of a process should lead to the same successful outcome. Performing the protocol precisely as it was designed, and introducing changes to it carefully and deliberately, was therefore critical to ironing out data quality issues. This allowed us to focus on identifying the unknown technical problems impacting our processes.
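As a toy illustration of what tracking process issues in a systematized scheme can look like, here is a sketch of a structured deviation record. Every field, name, and value below is hypothetical; a real system would more likely live in a LIMS or ticketing tool than in code.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import List, Optional

class Severity(Enum):
    MINOR = "minor"        # cosmetic; no impact on data quality
    MAJOR = "major"        # affected wells must be flagged during QC
    CRITICAL = "critical"  # run invalidated; root-cause analysis required

@dataclass
class ProcessDeviation:
    """One systematized record of a deviation from the locked protocol."""
    run_id: str
    step: str                   # e.g., "RNA extraction", "liquid transfer"
    severity: Severity
    description: str
    reported_on: date
    affected_plates: List[str] = field(default_factory=list)
    resolution: Optional[str] = None  # filled in once the issue is closed

# Example record for an illustrative liquid-transfer failure:
dev = ProcessDeviation(
    run_id="2021-W32-screen",
    step="liquid transfer",
    severity=Severity.MAJOR,
    description="Low spike-in recovery on four plates",
    reported_on=date(2021, 8, 12),
    affected_plates=["P07", "P08", "P09", "P10"],
)
```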

Finally, we needed to be able to measure the impact of our improvements to know whether we were steering the platform in the right direction. We therefore developed analytics tooling to gauge the quality of our data. Most high-throughput biological assays take place in multi-well microtiter plates. In our platform, each well of these plates tests a particular experimental condition, namely the effects of a chemical on our cell libraries. Using a combination of the process controls mentioned above and the behavior of our cell reporters, we applied multiple quality control filters that differentiate poorly performing wells from functional ones. Only data from high-quality wells are passed into our downstream bioinformatics pipelines for analysis. Metrics on how many wells fail our QC gates, and why, enable us to monitor the health of our processes and troubleshoot problems in a more focused manner.


The blue squares represent wells in our plates that do not meet the strict QC requirements for analysis in our downstream computational pipeline.
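As an illustration of this kind of well-level gating, the sketch below applies a few named QC filters and tallies failure reasons. The metric names, file name, and thresholds are invented for the example and are not Octant’s actual filters.

```python
import pandas as pd

# Hypothetical per-well QC metrics aggregated from spike-ins and cell reporters.
# Columns: plate, well, read_depth, spike_recovery, reporter_signal
wells = pd.read_csv("well_qc_metrics.csv")

# Each gate is a named boolean predicate; a well must pass every gate.
gates = pd.DataFrame({
    "min_read_depth":  wells["read_depth"] >= 50_000,    # assumed cutoff
    "spike_recovery":  wells["spike_recovery"] >= 0.2,   # assumed cutoff
    "reporter_signal": wells["reporter_signal"] >= 1.5,  # assumed cutoff
})
wells["pass_qc"] = gates.all(axis=1)

# Tallying why wells fail enables focused troubleshooting: which gate trips most?
fail_counts = (~gates[~wells["pass_qc"]]).sum().sort_values(ascending=False)
print(f"{wells['pass_qc'].mean():.1%} of wells pass QC")
print("Most common failure modes:")
print(fail_counts)

# Only data from passing wells proceed to the downstream bioinformatics pipelines.
passing = wells[wells["pass_qc"]]
```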
Now Let’s Make Some Music! 

With these new improvements in place, Octant’s multiplexed assay platform is geared up for production-scale operations. We’re currently running high-throughput screens on ~10,000 chemicals per week. Even at this higher scale, we’ve achieved data quality of >97% of wells passing our QC requirements. Having scaled the platform, the next stage of Octant’s journey is to apply it toward treating some of our most challenging diseases. Indeed, we’ve recently identified several novel molecules that specifically hit our targets and have revealed intriguing avenues for further drug design. We’re excited to continue building the scale needed to apply the platform against even more therapeutic programs in the coming year.


Jeff Tang

Scientist