Blogs & Articles


Flexible Assay Data Analysis or Scalability – No Need to Choose

When analyzing data, there is often a tension between setting up an analysis exactly the way you want and implementing a consistent method across your data and organization.

Flexible tools make it easy to adjust an analysis but difficult to use it repeatedly at scale. Scalable tools, once set up, are often so hard to change that adjustments are abandoned: you live with a less-than-ideal solution or resort to many one-off operations.

The Calculations Explorer App lets a user easily build an analysis template using transformations, visualizations, business rules, and curve fits. A user can pick one of the 70+ templates included out of the box, modify one to suit, or start from scratch. All analysis steps are captured in a single template file that can be shared or deployed. The template can then be incorporated into a larger workflow using other modular Apps within Signals VitroVivo, providing a complete solution from raw data to results. Workflows can be shared easily with other users, and the next time the analysis needs an adjustment, the workflow can be easily updated.

Flexible and Self-Service combined with Scalable Data Management

Watch the video to see how we help our customers solve these challenges.

Signals VitroVivo unites assay development, low throughput to ultra-high throughput production assays, High Content Screening, and in vivo studies so users can search across all assay and screening data in a single platform. Learn more here.

Redefining the Future for Image Data Management & Analysis

We recently launched Signals Image Artist™, the fast, efficient image analysis and data management solution for High Content Screening and cellular image data, which integrates with Revvity's Signals™ VitroVivo for secondary data analysis.

High Content Screening is a mainstream technology for drug research. More and more experiments are run on increasingly modern instruments, either directly in the labs of pharma companies or outsourced to CROs. These experiments produce very detailed results very quickly, and the turnaround time from experiment to experiment is shorter than ever before. Data handling and analysis must keep up through automated, high-performance processes, and labs and screening groups are confronted with an ever-increasing amount of image data to analyze and interpret. Signals Image Artist™ is a robust, powerful, and dependable solution for scientific image data management and analysis that scales with your lab's evolving needs.

Revvity Signals is the only company on the market today that offers everything you need to plan, run, and analyze High Content Screening experiments: automated instruments such as the Opera Phenix Plus, PhenoVue reagents for Cell Painting, microplates such as the PhenoPlate, and the management, analysis, and interpretation of experiment results with our Signals Image Artist and Signals VitroVivo software.

Signals Image Artist is ready out of the box to import all your data from the instrument while preserving all metadata. Because the original file formats are kept untouched during import, you can use Signals Image Artist as your reference database for all your HCS data and primary analysis. It comes with a built-in object store and a high-performance compute cluster, regardless of how many nodes you dedicate to analysis. The software is completely containerized and can easily be installed in the cloud or on-premises. Analysis of your experiment data and collaboration with your colleagues happen inside one web interface that always upholds your high security standards.

Our building-block approach to HCS analysis helps scientists get started quickly and stay consistent in their analysis over time and across groups, so no programming is necessary to analyze even the most challenging result sets coming from 3D cell cultures, Cell Painting, or fast kinetic assays. All of your images, metadata, and analysis results are stored in one platform, so they can be accessed together and kept in context – within the system, or from other software directly via REST API, with no need to export and re-import images or manage data outside of Signals Image Artist. This makes Signals Image Artist the most powerful and complete solution on the market today for handling and analyzing High Content Screening data, and it is directly integrated with Signals VitroVivo for downstream analysis.
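To make the REST-access idea concrete, here is a minimal sketch of how external software might query analysis results programmatically. The endpoint path, parameter names, and base URL below are all hypothetical placeholders for illustration only – they are not the documented Signals Image Artist API, whose actual routes and authentication scheme should be taken from the product's REST API reference.

```python
# Illustrative sketch only: "/api/v1/analysis-results", "plateId", and
# "fields" are hypothetical placeholders, NOT documented Signals Image
# Artist endpoints or parameters.
from urllib.parse import urlencode, urljoin

def build_results_query(base_url: str, plate_id: str, fields: list[str]) -> str:
    """Compose a query URL for per-well analysis results (hypothetical API)."""
    params = urlencode({"plateId": plate_id, "fields": ",".join(fields)})
    return urljoin(base_url, "/api/v1/analysis-results") + "?" + params

url = build_results_query(
    "https://imageartist.example.com",
    plate_id="PLATE-0042",
    fields=["wellId", "cellCount", "meanIntensity"],
)
print(url)
# A real client would then fetch the URL, e.g. with the stdlib:
#   import urllib.request
#   with urllib.request.urlopen(url) as resp:  # add auth headers as required
#       payload = resp.read()
```

The point of the pattern is simply that results stay in place: downstream tools pull what they need over HTTP instead of exporting and re-importing image data.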

Signals VitroVivo unites assay development, low-throughput to ultra-high-throughput assays, High Content Screening and many other assay types, and even in vivo studies, so you can search and analyze across all assay and screening data in a single platform. Coupled with Signals Image Artist, scientists are empowered to quickly process, analyze, share, and store phenotypic screening, Cell Painting, 3D, and fast kinetic data like never before. They now have access to reliable data faster, more efficiently, and with more flexibility than before. From a science perspective, Cell Painting, 3D image analysis, and kinetic data are especially hot topics right now, and we offer scientists flexible and powerful means to explore these areas without the need to do any programming. From an IT perspective, Signals Image Artist uses high-performance computing and an industry-standard object store to provide a scalable, multi-user solution for image analysis and management that can expand with your lab's evolving needs.

To learn more about Signals Image Artist™, and how we support a range of R&D customers, read more here.

Nexus 2021 Day Two Recap: Empowering Data-Driven Science with Flexible Informatics


Day Two of Revvity Signals' Nexus 2021 Informatics Virtual User Conference delivered another engaging slate of presenters – from Merck KGaA, Birla Carbon and Givaudan to Bayer, Nimbus Therapeutics and Johnson & Johnson.

If you’ve registered but missed a session (or even a day!) – don’t worry! All content will be available on-demand through November 30, 2021.

Keynote Speaker
Day Two of Nexus 2021 kicked off with a riveting keynote by Dr. Sharon Sweitzer, a Director in the Functional Genomics department at GlaxoSmithKline, on Capturing Innovative Science for Re-Use – a GSK Functional Genomics Perspective. She shared GSK's goal of an 'experiment capture solution' that simplifies the experience for GSK scientists while also making data available for re-use to accelerate drug discovery. Sharon explored how Signals Notebook contributed to that objective; in one example, she cited the decision to create simplified templates for data capture, maximizing the value of their experimental data.

Sharon also described how GSK’s data pipeline now allows for data to be taken out of Signals Notebook for re-use and re-analysis in other systems, including Spotfire®. Signals Notebook delivers very structured data, so it can be put to use very quickly – providing GSK with a competitive advantage.

Industry Talks
The speakers on today’s Research Informatics Track 3 perfectly illustrated the move towards digital transformation. Michelle Sewell shared the story of Birla Carbon’s transition to digital lab notebooks – Birla Carbon had been using paper lab notebooks for more than 160 years! Michelle revealed the unique challenges they faced and how they were addressed across the project lifecycle.

Birla Carbon may stand out for its long-time commitment to paper, but they certainly aren’t alone. Drs. Dieter Becker and Mark Goulding explored Merck KGaA’s implementation of Signals Notebook, with the objective of improving the recording & storage, sharing, searchability and security of R&D data. They described Merck KGaA’s journey from a fragmented landscape of experimental write-up practices to a global rollout of Signals Notebook across R&D in the Electronics Business Sector with more than 2,000 experiments performed since go-live. Dieter and Mark detailed the hybrid Agile methodology and regular workshops with Revvity Signals that were used to address any gaps during implementation.

Givaudan’s Andreas Muheim also shared his company’s Signals Notebook implementation journey. Among its key benefits was support for 300 users across a wide range of applications – including chemistry, formulation, fermentation, molecular biology, enzyme transformation, processes, sensory science and more. The ELN allowed them to automate data extraction and simplify configuration.

On today’s Technical Track 4, Holger Schimanski, Head of Data Review and Operational Insights at Bayer, discussed the experience of creating new visualization types using the Spotfire® Mods API. Spotfire® Mods gave Bayer a framework for adding the Sankey and Kanban board visualizations that were needed for reviewing clinical data but were not part of the standard offering of Spotfire® charts.

Dr. Rebecca Carazza, Head of Information Systems at Nimbus Therapeutics, looked beyond standard capabilities to a combination of data functions and custom web services designed to enhance the data provided to their scientists. Data functions are used to clean and type complex data so end-users can focus on analysis, while Nimbus’ Lead Discovery web services framework adds a chemical fingerprinting methodology most relevant to their chemistry. Rebecca presented several use cases for this approach to improving data handling and the end-user experience.

Our final industry talk came courtesy of Johnson & Johnson’s Pieter Pluymers, Manager of Clinical Insights, who discussed clinical studies in iDARTs. The study files contained large numbers of subjects and volumes of data that exceeded the size limits of the database underlying the Spotfire® Library. Pieter shared how they converted the raw data into Spotfire® Binary Data Format (SBDF) files, which Spotfire® can load quickly.

What’s New & What’s Next for Spotfire®?
Arnaud Varin of Spotfire®, a premier Revvity Signals partner in scientific R&D, pulled back the curtain to reveal some of what is in development. The sneak peek included an expansion of Spotfire® Mods to launch actions in other systems, integrated custom workflows, new visualization mods, and AI-powered recommendations for data wrangling, data cleaning and visual analytics.

Technology Innovation Spotlights
Day Two of Nexus 2021 featured four Innovation Spotlights highlighting real-world applications of key tools on both our Research and Technical tracks. Topics ranged from molecular biology capabilities and applications of machine learning (ML) in support of formulation development, to instrument integration and the recent launch of cloud-native ChemOffice+.

Thank You for Joining Us at Nexus 2021!
Thank you to all of our customers, partners and participants for making Nexus 2021 a success!

If you’ve registered but missed a session – don’t worry! All of this exciting content will be available on-demand through November 30, 2021. 

The Importance of FAIR Data and Processes

The R&D industry gets excited by certain terms and acronyms, and organizations and people tend to hype them. FAIR (Findable, Accessible, Interoperable and Reusable) is one of those acronyms. The trouble is that the hype, while valid, needs follow-through. Another problem is that only half of the issue is being discussed, as many leave processes out of FAIR – and after all, it is the processes that produce the data!

UnFAIR data and processes have reached a breaking point in the sciences, for a multitude of reasons. It is extremely important to point these out, because this is not a technology problem – it goes much deeper than that. The underlying root cause of poor data environments and lower data integrity is a cultural problem! What I am about to say will most likely ruffle some feathers: the critical problems in science today are arrogance, ignorance, and financial and peer pressures. Coupled with one of the most complex of industries – NME/drug and therapy discovery – these have prevented the approaches that would have led to FAIR data and process environments. A true transformation (everyone must change) is needed, starting in academia and learning institutions and ending in data-driven R&D organizations. First, data and processes must be retaught and relearned as assets; then the time must be taken to ensure data is captured, curated/managed, and reused as model-quality data whenever and wherever possible. This takes strategy and agreement within an organization, and it is a change management program, like most journeys in these organizations. It also means sacrifice, commitment, and strong leadership.

So, when we said it is not a technology problem – it partly is, as well. In-silico technology approaches are not reaching their greatest potential because unFAIR data prevents these methods from being used! We need this FAIR transformation to happen now, and it will take everyone’s concerted effort.

This is not an unachievable goal: other industries – Telecom, Entertainment, the W3C (World Wide Web Consortium), Banking, and Insurance – have driven success by adopting data and process standards.

We have just touched on three of the Four Pillars – Culture, Data, and Processes – so now let us talk about some technology that will help enable the change!

Have you spoken with a lab scientist lately? They are usually terribly busy and must coordinate their time carefully; in many cases they are stressed in their role. The LAST thing you want to do is ask them to do more, or to work with scientific software solutions that are unintuitive and simply do not augment their work. One main tool for every scientist is their notebook: what they are going to do, what they did, how they did it, what results they got, and finally their observations and conclusions. This must be the foundation for a FAIR data and process environment.

In the dynamic R&D world, Electronic Lab Notebooks (ELNs) exist to capture the scientific method. First-generation renditions of the ELN focused on IP capture and may have missed the mark on usability and end-user enablement – but things had to start somewhere!

ELNs are applicable to every flavor of R&D company, yet there are more excuses than facts when it comes to (not) deploying one. Academia, startups and small organizations, specialized scientific domains, and large R&D organizations have all had a plethora of wins and losses: academics can get an ELN for free; startups have a lot to lose if they are disorganized – or even come across as disorganized – to their investors or collaborators; and large organizations insist, for a multitude of reasons, that parts of their research do not need an ELN.

It is now 2021, and the new ELNs are advanced, mostly cloud-enabled, and driving that next-generation experience. Data and process environments are critical for an ELN to be able to drive FAIR principles. ELNs are also a perfect environment for capturing your scientific business processes so that you can execute your experiments from your ELN! This is not a new concept – it goes back years, to Laboratory Execution Systems and to a solution built for a top energy provider – but now we have technology that can capture the processes and version them! Why is this critical? Because companies trying to enhance, optimize, and harmonize their processes do business process mapping in other tools, when in fact an ELN could be that repository and become a “functional” or “executional” business process map!

Now your ELN is not only capturing your contextualized data; it has also captured the executable processes, and the two together give you the complete picture. Why is this important? Bench scientists now have FAIR data and processes! They now have tech and knowledge transfer, the ability to mine all types of data and integrate it with other data, and data ready for your in silico-first approaches. This could drive 40% efficiency gains in areas of your organization – which means faster time to market and better quality of life for those who need it!

All of this is costing large organizations a great deal of money and potential. PricewaterhouseCoopers and the European Union have estimated that it costs European R&D organizations upwards of €26 billion a year. We have done our own calculations, based on our knowledge and the observed level of data wrangling, and we think the cost is higher: a large biopharma could see hundreds of millions in return on investment (ROI) from a properly deployed and adopted ELN.
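The scale of such a claim is easy to sanity-check with a back-of-envelope calculation. Every figure below is an illustrative assumption chosen for the sketch – none comes from the PwC/EU estimate or from Revvity Signals' own calculations:

```python
# Back-of-envelope sketch only: all inputs are illustrative assumptions,
# not published figures.
scientists = 5_000            # assumed R&D headcount at a large biopharma
hours_wrangling_per_week = 8  # assumed time lost to manual data wrangling
reduction = 0.5               # assumed fraction an adopted ELN recovers
loaded_cost_per_hour = 120    # assumed fully loaded cost per hour, EUR
weeks_per_year = 46           # assumed working weeks

annual_saving = (scientists * hours_wrangling_per_week * weeks_per_year
                 * reduction * loaded_cost_per_hour)
print(f"Illustrative annual saving: EUR {annual_saving:,.0f}")
# → Illustrative annual saving: EUR 110,400,000
```

Even with these modest assumptions, recovering half the wrangling time lands in the hundreds-of-millions range for a single large organization, which is the order of magnitude the estimates above suggest.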

The transformation needed to become FAIR compliant is critical: it reduces data wrangling, improves collaboration, drives in silico-first approaches, and ultimately leads to a much more efficient R&D community. That efficiency gain leads to better products, better medicines, better therapies, and a better quality of life for all – producers and consumers alike.

Learn more about Revvity Signals Research Suite and how it helps a range of ELN customers from small startups to large global biopharmas.
