Blogs & Articles

Featured Blogs

Nexus 2021 Day Two Recap: Empowering Data-Driven Science with Flexible Informatics

Day Two of Revvity Signals' Nexus 2021 Informatics Virtual User Conference delivered another engaging slate of presenters – from Merck KGaA, Birla Carbon and Givaudan to Bayer, Nimbus Therapeutics and Johnson & Johnson.

If you’ve registered but missed a session (or even a day!) – don’t worry! All content will be available on-demand through November 30, 2021.

Keynote Speaker
Day Two of Nexus 2021 kicked off with a riveting keynote by Dr. Sharon Sweitzer, a Director within the Functional Genomics department at GlaxoSmithKline, on Capturing Innovative Science for Re-Use – a GSK Functional Genomics Perspective. She shared GSK's goal of an 'experiment capture solution' that simplifies the experience for GSK scientists while also making data available for re-use to accelerate drug discovery. Sharon explored how Signals Notebook contributed to that objective. In one example, she cited the decision to create simplified templates for data capture, maximizing the value of their experimental data.

Sharon also described how GSK’s data pipeline now allows for data to be taken out of Signals Notebook for re-use and re-analysis in other systems, including Spotfire®. Signals Notebook delivers very structured data, so it can be put to use very quickly – providing GSK with a competitive advantage.

Industry Talks
The speakers on today's Research Informatics Track 3 perfectly illustrated the move towards digital transformation. Michelle Sewell shared the story of Birla Carbon's transition to digital lab notebooks. Birla Carbon had been using paper lab notebooks for more than 160 years! Michelle revealed the unique challenges they faced and how they were addressed across the project lifecycle.

Birla Carbon may stand out for its long-time commitment to paper, but they certainly aren’t alone. Drs. Dieter Becker and Mark Goulding explored Merck KGaA’s implementation of Signals Notebook, with the objective of improving the recording & storage, sharing, searchability and security of R&D data. They described Merck KGaA’s journey from a fragmented landscape of experimental write-up practices to a global rollout of Signals Notebook across R&D in the Electronics Business Sector with more than 2,000 experiments performed since go-live. Dieter and Mark detailed the hybrid Agile methodology and regular workshops with Revvity Signals that were used to address any gaps during implementation.

Givaudan's Andreas Muheim also shared his company's Signals Notebook implementation journey. Among the key benefits of their implementation was support for 300 users across a wide range of applications – including chemistry, formulation, fermentation, molecular biology, enzyme transformation, processes, sensory science and more. The ELN allowed them to automate data extraction and simplify configuration.

On today’s Technical Track 4, the Head of Data Review and Operational Insights at Bayer, Holger Schimanski, discussed the experience of creating new visualization types using the Spotfire® Mods API. Spotfire® Mods gave Bayer a framework for adding Sankey and Kanban Board visualizations which were needed for reviewing clinical data but were not a part of the standard offering of Spotfire® charts.

Dr. Rebecca Carazza, Head of Information Systems at Nimbus Therapeutics, looked beyond standard capabilities to a combination of data functions and custom web services designed to enhance the data provided to their scientists. Data functions are used to clean and datatype complex data so end-users can focus on analysis. Nimbus' Lead Discovery web services framework is used to add a chemical fingerprinting methodology most relevant to their chemistry. Rebecca presented some of the use cases for this approach to improving data handling and the end-user experience.
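
Data functions in Spotfire® can be written in languages such as Python, with tabular inputs and outputs handled as pandas DataFrames. The sketch below is a minimal, hypothetical example of the kind of cleaning and datatyping step described above; the column names and rules are assumptions for illustration, not Nimbus' actual implementation.

```python
# Minimal sketch of a cleaning/datatyping step of the kind a data function
# might perform. In a Python data function the input table arrives as a
# pandas DataFrame and the output is assigned to a named output variable;
# here it is wrapped as a plain function for clarity. Column names
# ("compound_id", "ic50_nM", "assay_date") are hypothetical.
import pandas as pd


def clean_assay_table(raw: pd.DataFrame) -> pd.DataFrame:
    df = raw.copy()
    # Normalize identifiers and drop rows without one.
    df["compound_id"] = df["compound_id"].str.strip().str.upper()
    df = df.dropna(subset=["compound_id"])
    # Coerce numeric results; unparsable values become NaN instead of text.
    df["ic50_nM"] = pd.to_numeric(df["ic50_nM"], errors="coerce")
    # Parse dates into a proper datetime dtype.
    df["assay_date"] = pd.to_datetime(df["assay_date"], errors="coerce")
    return df


# Example: the cleaned table would be handed back for analysis/visualization.
# output_table = clean_assay_table(input_table)
```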

Our final industry talk came courtesy of Johnson & Johnson's Pieter Pluymers, Manager of Clinical Insights, who discussed clinical studies in iDARTs. The study files consisted of large numbers of subjects and massive volumes of data, which exceeded size limits for the database underlying the Spotfire® Library. Pieter shared how they converted the raw data into Spotfire® Binary Data Format (SBDF) files, which Spotfire® can load quickly.
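
As a rough illustration of that kind of conversion, the sketch below writes a pandas DataFrame to an SBDF file using the publicly available `spotfire` Python package, whose `sbdf` module provides import/export helpers. The file paths and data are made up, the exact function names may vary by package version, and this is not presented as J&J's actual pipeline.

```python
# Sketch: convert raw tabular data to Spotfire Binary Data Format (SBDF)
# so Spotfire can load it quickly. Assumes the `spotfire` package from PyPI;
# paths and columns are hypothetical.
import pandas as pd
from spotfire import sbdf

# Load the raw study export (hypothetical CSV path).
raw = pd.read_csv("study_raw_export.csv")

# Write it out as SBDF for fast loading into Spotfire.
sbdf.export_data(raw, "study_data.sbdf")

# Read the file back as a quick verification step.
check = sbdf.import_data("study_data.sbdf")
print(check.shape)
```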

What’s New & What’s Next for Spotfire®?
Spotfire®'s Arnaud Varin pulled back the curtain to reveal some of what is in development at Spotfire®, a premier partner of Revvity Signals in scientific R&D. The sneak peek included an expansion of Spotfire® Mods to include launching actions in other systems, integrated custom workflows, new visualization mods, and AI-powered recommendations for data wrangling, data cleaning and visual analytics.

Technology Innovation Spotlights
Across Day Two of Nexus 2021 were four Innovation Spotlights highlighting real-world applications of key tools on both our Research and Technical tracks. Topics ranged from molecular biology capabilities and applications for machine learning (ML) in support of formulation development, to instrument integration and the recent launch of cloud-native ChemOffice+.

Thank You for Joining Us at Nexus 2021!
Thank you to all of our customers, partners and participants for making Nexus 2021 a success!

If you’ve registered but missed a session – don’t worry! All of this exciting content will be available on-demand through November 30, 2021. 

The Importance of FAIR Data and Processes

The R&D industry tends to latch onto certain terms and acronyms, and organizations and individuals are quick to hype them. FAIR (Findable, Accessible, Interoperable and Reusable) is one of those acronyms. In this case the hype is justified, but it needs follow-through. Another problem is that only half of the issue is being discussed: many conversations leave processes out of FAIR data, even though it is the processes that produce the data.

UnFAIR data and processes have reached a breaking point in the sciences, for a multitude of reasons. It is extremely important to point these out, because this is not just a technology problem – it goes much deeper than that. The underlying root cause of poor data environments and low data integrity is a cultural problem! What I am about to say will most likely ruffle some feathers. There are several critical problems in science today: arrogance, ignorance, and financial and peer pressures. Coupled with one of the most complex of industries – NME/drug and therapy discovery – these have prevented the approaches that would have led to FAIR data and process environments.

A true transformation (everyone must change) is needed, starting in academia and learning institutions and ending in data-driven R&D organizations. First, we need a reteaching and relearning of data and processes as assets, and then the time must be taken to ensure that data is captured, curated, managed, and reused as model-quality data whenever and wherever possible. This takes strategy and agreement within an organization, and it is a change management program, like most journeys in these organizations. It also means sacrifice, commitment, and strong leadership.

So, when we said it is not a technology problem – it kind of is as well. In-silico technology approaches are not reaching their full potential because unFAIR data prevents these methods from being used! We need this FAIR transformation to happen now, and it will take everyone's concerted effort.

This is not an unachievable goal: other industries and bodies such as Telecom, Entertainment, the W3C (World Wide Web Consortium), Banking, and Insurance have driven success by adopting data and process standards.

So, we have just touched on three of the Four Pillars – Culture, Data, and Processes – now let us talk about some technology that will help enable the change!

Have you spoken with a lab scientist lately? They are usually terribly busy and must coordinate their time carefully. In many cases they are stressed in their role. The LAST thing you want to do is ask them to do more, or to work with scientific software solutions that are not intuitive and simply do not augment their work. One main tool for every scientist is the notebook: what they are going to do, what they did, how they did it, what results they got, and finally their observations and conclusions. This must be the foundation for a FAIR data and process environment.

In the dynamic R&D world, Electronic Lab Notebooks (ELNs) exist to capture the scientific method. First-generation renditions of the ELN were focused on IP capture, which may have missed the mark on usability and end-user enablement, but things had to start somewhere!

ELNs are applicable to every flavor of R&D company, yet there will be more excuses than facts when it comes to (not) deploying one. Academia, startups and small organizations, individual scientific domains, and finally large R&D organizations have all had a plethora of wins and losses. Academics can get an ELN for free; startups have a lot to lose if they are disorganized, or even come across as disorganized, to their investors or collaborators; and large organizations claim they do not need an ELN in parts of their research organization for a multitude of reasons.

It's now 2021 and the new ELNs are advanced, mostly cloud-enabled, and driving that next-generation experience. Data and process environments are critical for an ELN to be able to drive FAIR principles. They are also a perfect environment for capturing your scientific business processes so that you can execute your experiments from your ELN! This is not a new concept – it goes back years to Laboratory Execution Systems and another solution built for a top energy provider – but now we have technology that can capture the processes and version them! Why is this critical? Because companies that are trying to enhance, optimize, and harmonize their processes do business process mapping in other tools, when in fact an ELN could be that repository and become a "functional" or "executional" business process map!
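
To make the idea of a versioned, executable process concrete, here is a minimal, hypothetical sketch of how a process template might be represented as structured, versioned data. It is an illustration of the concept only, not how any particular ELN actually stores processes, and the step names and parameters are invented.

```python
# Minimal sketch of a versioned process template of the kind an ELN could
# capture: each version records ordered, parameterized steps that an
# experiment can be executed against. Purely illustrative data model.
from dataclasses import dataclass, field


@dataclass
class ProcessStep:
    name: str
    parameters: dict


@dataclass
class ProcessTemplate:
    name: str
    version: int
    steps: list[ProcessStep] = field(default_factory=list)

    def new_version(self, updated_steps: list[ProcessStep]) -> "ProcessTemplate":
        # Versioning is append-only: a change creates a new version rather
        # than overwriting the old one, preserving the process history.
        return ProcessTemplate(self.name, self.version + 1, updated_steps)


v1 = ProcessTemplate("Buffer preparation", 1, [
    ProcessStep("Weigh reagent", {"target_mass_g": 5.0}),
    ProcessStep("Dissolve", {"solvent": "water", "volume_mL": 100}),
])
v2 = v1.new_version(v1.steps + [ProcessStep("Adjust pH", {"target_pH": 7.4})])
print(v2.version, [s.name for s in v2.steps])
```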

Now your ELN is not only capturing your contextualized data, it has also captured the executable processes, and the two together give you the complete picture. Why is this important? Bench scientists now have FAIR data and processes! They now have tech and knowledge transfer, the ability to mine all types of data and integrate it with other data, and data ready for in silico-first approaches. This means they could drive 40% efficiency gains in areas of your organization, which means getting to market faster and better quality of life for those who need it!

This is costing your organization a lot of money and potential. PricewaterhouseCoopers and the European Union have estimated that unFAIR data costs European R&D organizations upwards of €26 billion a year. We have done our own calculations, based on our knowledge and the observed level of data wrangling, and we think the cost is higher: a large biopharma could see hundreds of millions in return on investment (ROI) with a properly deployed and adopted ELN.

The transformation needed to become FAIR compliant in your organization is critical: it reduces data wrangling, improves collaboration, drives in silico-first approaches, and ultimately leads to a much more efficient R&D community. That efficiency gain leads to better products, better medicines, better therapies, and a better quality of life for all – producers and consumers alike.

Learn more about Revvity Signals Research Suite and how it helps a range of ELN customers from small startups to large global biopharmas.

Digital Transformation to speed the search for new materials

Traditional approaches to making and testing new materials, and to deciding which ones to pursue, need to be replaced by a holistic digital solution

Materials scientists today can engineer complex nanostructures and model topologically intricate architectures with relative ease. But unlike in other fields, they can’t seem to shed antiquated workflows and outmoded data storage and analysis tools.

In a new white paper, researchers in the cheminformatics group at Revvity Signals argue that this paradox of progress is quite pronounced on the applications side. When developing new materials for specific applications — be they biodegradable plastics, hydrogels, battery electrolytes or perfumes — materials scientists routinely find themselves sorting through tab after tab of data on old-school spreadsheets.

To remedy this all-too-familiar situation, the paper puts forward a new vision for how materials scientists might work. The authors detail an end-to-end approach to materials development that combines the capability and speed of an online search engine with the flexibility and ease-of-use of apps on a mobile phone. They also explain how this vision backs the company’s new integrated informatics platform.

Digital transformation has become a bit of a buzzword. But putting in place good modern systems really does matter — you can do better science and make better decisions, which will translate into a commercial advantage.

Make, Test, Decide

The materials development process breaks into three broad categories: Make, Test, and Decide. In each one, new digital technologies can increase efficiency, remove bias and support reproducibility.

In the Make stage, researchers synthesize new materials for testing, and a good recording system is vital. Time-tested paper laboratory notebooks served that purpose for centuries. Electronic lab notebooks (ELNs) are an improvement and offer simplified data recording, but it can be difficult to access data from them later. That's because first-generation e-notebooks save data in databases that were designed to minimize storage space. In contrast, tagging technology, which allows the software and the user to add an unlimited number of descriptive tags to data, prioritizes data accessibility over storage space.
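
As a toy illustration of tag-based retrieval, the sketch below attaches free-form descriptive tags to experiment records and looks them up by tag. It is a deliberately simplified assumption-based example, not how any particular notebook stores its data, and the record IDs and tags are invented.

```python
# Toy sketch of tagging for accessibility: records carry arbitrary
# descriptive tags, and a simple tag index makes them findable later.
from collections import defaultdict

records = [
    {"id": "EXP-001", "tags": {"hydrogel", "crosslinker:PEG", "batch-7"}},
    {"id": "EXP-002", "tags": {"electrolyte", "solvent:EC/DMC", "batch-7"}},
    {"id": "EXP-003", "tags": {"hydrogel", "solvent:water"}},
]

# Build an index from tag -> record ids; users can add new tags at any time
# without changing a schema.
tag_index = defaultdict(set)
for rec in records:
    for tag in rec["tags"]:
        tag_index[tag].add(rec["id"])

# Retrieval by tag is a direct lookup rather than a schema-bound query.
print(sorted(tag_index["hydrogel"]))   # ['EXP-001', 'EXP-003']
print(sorted(tag_index["batch-7"]))    # ['EXP-001', 'EXP-002']
```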

The testing stage involves measuring various properties of a material, which means that researchers need an objective way to select the optimal set of testing parameters. Traditionally, this selection of testing parameters has been the domain of human experts and their intuition, but data-driven approaches can eliminate human bias and minimize the risk of selecting sub-optimal parameters.
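
One simple data-driven way to choose test conditions without relying on intuition is a space-filling design such as Latin hypercube sampling. The sketch below shows the idea with made-up parameter names and ranges; it is one possible approach, not the specific method the authors describe.

```python
# Sketch: pick an unbiased, space-filling set of test conditions with
# Latin hypercube sampling instead of hand-picking them.
# Parameter names and ranges are invented for illustration.
from scipy.stats import qmc

# Three hypothetical test parameters: temperature (°C), pH, cure time (min).
lower = [20.0, 5.0, 10.0]
upper = [80.0, 9.0, 120.0]

sampler = qmc.LatinHypercube(d=3, seed=42)
unit_points = sampler.random(n=8)                 # 8 runs in the unit cube
test_conditions = qmc.scale(unit_points, lower, upper)

for temp, ph, cure in test_conditions:
    print(f"T={temp:5.1f} C  pH={ph:4.2f}  cure={cure:6.1f} min")
```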

When it comes to processing the experimental data during the testing stage, materials scientists gravitate toward two approaches: homegrown spreadsheets, which offer flexibility but are difficult for non-experts to create, and bespoke commercial tools, which are easy to use but difficult to modify because they are designed to do a single task. Ideally, the authors say, materials scientists shouldn't have to choose between flexibility and ease of use. Rather, in an environment that resembles apps on a smartphone, they can select the appropriate module for the data-processing task at hand.

You have this tension between a generic tool like Excel and bespoke tools. One solution is an environment containing a suite of applications.

Finally, in the all-important Decide phase, researchers select the most promising candidate materials for development. The authors argue that to improve workflows, researchers should look to harness the same indexing technology that Google and Amazon use to collect and quickly access data. It offers both flexibility and speed — flexibility because an index can incorporate data from multiple sources and formats, and speed because the algorithms that generate selections based on indexed data are extremely efficient. This contrasts with conventional approaches that often involve picking a material by laboriously scanning an Excel sheet of data.
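
As a toy picture of why indexing is both fast and flexible, the sketch below builds a small inverted index over material records pulled from two hypothetical sources and answers a query with set intersections rather than a row-by-row scan. The sources, properties, and IDs are invented for illustration.

```python
# Toy inverted index over material records merged from two hypothetical
# sources. A query becomes an intersection of posting lists instead of a
# laborious scan of a spreadsheet.
from collections import defaultdict

source_a = [{"id": "MAT-01", "props": ["biodegradable", "polymer", "low-cost"]}]
source_b = [{"id": "MAT-02", "props": ["polymer", "high-strength"]},
            {"id": "MAT-03", "props": ["biodegradable", "high-strength"]}]

index = defaultdict(set)
for record in source_a + source_b:      # data from multiple sources/formats
    for prop in record["props"]:
        index[prop].add(record["id"])

# "biodegradable AND high-strength" is an intersection of two posting lists.
hits = index["biodegradable"] & index["high-strength"]
print(sorted(hits))                      # ['MAT-03']
```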

Applying a comprehensive informatics platform to the Make–Test–Decide process promises to both greatly simplify the development of new materials and cut the costs involved. Investing in technology for conducting materials research can have a huge impact.

For our materials research customers, our integrated informatics solutions can foster more efficient and successful materials development. What does this mean? Companies can innovate quickly, accelerate product development, and speed to market by leveraging a powerful enterprise product suite, including ChemDraw, Signals Lead Discovery, Signals Notebook, and Spotfire®.

We enable our materials science customers to revolutionize the research and development of the materials they need to make, as well as of energy, chemicals, and food. We help them innovate and accelerate their R&D cycle so they can bring products to market faster and increase scientific insights and breakthrough innovations.

To learn more about how an integrated informatics platform can foster more efficient and successful materials development, read our new white paper, Digital Transformation Journeys for Material Science. Materials science is one of the areas we support: we offer solutions for industrial segments spanning specialty chemicals, agrochemicals, energy & petrochemicals, flavors & fragrances, food & beverage, and electronics.

Data Collection Geared Toward Translational Medicine

Blog: Clinical and Translational

Translational medicine is lab- and data-driven, taking a bench-to-bedside approach to developing therapeutic strategies efficiently. It identifies biomarkers that can then be used to inform patients' molecular profiles and disease etiologies. Biomarkers can include blood sugar levels that identify patients with diabetes, or certain gene mutations that can signal a patient's risk of developing cancer.

Translational medicine's use of molecular profiles has been beneficial in creating drugs that are specialized to target specific pathways based on patient diagnoses. Compared to one-size-fits-all drug production, this approach creates fewer side effects with better results. Translational medicine can be an effective method when done well, with financial benefits on top of health achievements. That said, it's a data-heavy activity. Here's an overview of working with data in translational medicine:

Data Collection

From the lab to treatment, ample information will be collected throughout the process. For successful drug development, all this data must be sorted and analyzed; to make this more effective, data should be responsibly collected from the start with a standardized and efficient practice.

Guidelines for data collection include:
• Collecting enough samples to establish statistical significance (a sample-size sketch follows this list)
• Using the same clinical samples across the entire population
• Using data models with datasets from various sources
• Making sure data is clean and well-curated enough for cross-study analysis
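
For the first guideline, a power analysis gives an objective answer to "how many samples are enough". The sketch below is a minimal example of such a calculation; the effect size, significance level, and power values are assumptions chosen purely for illustration.

```python
# Sketch of a sample-size calculation: how many subjects per group are
# needed to detect a given effect with adequate statistical power.
# The numbers below are illustrative assumptions, not study-specific values.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,   # assumed medium standardized effect (Cohen's d)
    alpha=0.05,        # two-sided significance level
    power=0.8,         # desired probability of detecting the effect
)
print(f"~{n_per_group:.0f} subjects per group")   # roughly 64 per group
```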

Not only is quality data useful for developing effective drugs, it can also be used retroactively to determine why some drugs weren't working. For example, the compound behind a drug now used to treat non-small cell lung cancer was initially an ineffective treatment for immune responses in autoimmune diseases. Now that drug, Keytruda, is used for an entirely different purpose than originally intended.

Data Simplification

The data analysis that follows large-scale genomics projects can end up fragmented along the way. The creation of many analysis pipelines and informational silos can make it difficult for scientists and clinicians to collaborate.

To be effective, this all needs to be sorted and broken down. A good initial step is making sure that the data is accessible across the board, so that even non-experts, or cross-functional teams, can look at the data, analyze it, and apply their biological understanding. Implementing strategies to streamline data access can save time.

Scalable data management and accessible data tools are vital for translational medicine to succeed, and with that in use, patients can expect to receive valuable, effective drugs. 
