Blogs & Articles

Sequence Intelligence in Signals Notebook

Biotechnology is at the forefront of the public’s mind as new therapies and tools are developed to combat the COVID-19 pandemic.

Beyond the COVID crisis, Revvity Signals’ solutions are applicable to multiple industries. For example, our customers are engineering microbial strains to develop new fragrances, improving crop performance through genome engineering, and creating new enzyme catalysts for greener chemistry. The foundation of biotechnology is molecular sequence design, often starting with plasmid vectors or protein sequences. With an October release we’ve brought the language of molecular biology into Signals™ Notebook, the modern, cloud-native electronic lab notebook (ELN).

Because much competing informatics software doesn’t support sequence files in a first-class fashion, scientists have developed workarounds like shared folders holding their designs and a separate spreadsheet (or several!) describing the key features of their plasmid or protein designs. Now, with Revvity Signals Notebook, documenting the design process that begins biologics development couldn’t be simpler.

Our new Biological Sequence Element supports a wide array of DNA and protein files. It’s simple to drop a file into an experiment and instantly get an annotated map of the underlying sequence. Open the Element in full-size mode and dynamically explore deeper features of the molecular design. Integration with SnapGene, a popular tool for molecular design, enables design and update of sequences housed in the Signals cloud.
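For readers who want a feel for what an annotated sequence map is built from, here is a minimal sketch using the open-source Biopython library (not Signals Notebook code) that reads a plasmid GenBank file and lists its annotated features; the file name is hypothetical.

```python
# Conceptual sketch (not Signals Notebook code): reading a plasmid GenBank
# file and listing its annotated features with Biopython. The file name is
# hypothetical; any standard GenBank plasmid map would work.
from Bio import SeqIO

record = SeqIO.read("pUC19_expression_vector.gb", "genbank")
print(f"{record.id}: {len(record.seq)} bp, {len(record.features)} features")

for feature in record.features:
    if feature.type in ("CDS", "promoter", "rep_origin", "misc_feature"):
        label = feature.qualifiers.get("label", feature.qualifiers.get("gene", ["?"]))[0]
        print(f"{feature.type:12s} {label:20s} {feature.location}")
```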

The Biological Sequence Element also unlocks scientific collaboration. I’m reminded of a group engineering cell lines to support biomarker validation and assays. The team, part of a global company, was spread across London, Chicago, and San Francisco. These colleagues collaborated by emailing files back and forth with notes on designs and experimental advice: a very inefficient and siloed process.

Now the designs and experimental context can be shared directly in Signals Notebook, and commenting lets all of that incredibly valuable scientific discussion happen outside of one-to-one email messages.

We’re excited to bring these new capabilities to our partners. I’m looking forward to hearing your response to this new feature and excited to share our continuing plans to fully support biologics workflows across our Signals Research Suite.

Nexus 2021 Day Two Recap: Empowering Data-Driven Science with Flexible Informatics


Day Two of Revvity Signals’ Nexus 2021 Informatics Virtual User Conference delivered another engaging slate of presenters – from Merck KGaA, Birla Carbon and Givaudan to Bayer, Nimbus Therapeutics and Johnson & Johnson.

If you’ve registered but missed a session (or even a day!) – don’t worry! All content will be available on-demand through November 30, 2021.

Keynote Speaker
Day Two of Nexus 2021 kicked off with a riveting keynote by Dr. Sharon Sweitzer, a Director within the Functional Genomics department at GlaxoSmithKline, on Capturing Innovative Science for Re-Use – a GSK Functional Genomics Perspective. She shared GSK’s goal of an ‘experiment capture solution’ that simplified the experience for GSK scientists while also enabling the provision of data for re-use to accelerate drug discovery. Sharon explored how Signals Notebook contributed to that objective. In one example, she cited the decision to create simplified templates for data capture, maximizing the value of their experimental data.

Sharon also described how GSK’s data pipeline now allows for data to be taken out of Signals Notebook for re-use and re-analysis in other systems, including Spotfire®. Signals Notebook delivers very structured data, so it can be put to use very quickly – providing GSK with a competitive advantage.

Industry Talks
The speakers on today’s Research Informatics Track 3 perfectly illustrated the move towards digital transformation. Michelle Sewell shared the story of Birla Carbon’s transition to digital lab notebooks. Birla Carbon had been using paper lab notebooks for more than 160 years! Michelle revealed the unique challenges they faced and how they were addressed across the project lifecycle.

Birla Carbon may stand out for its long-time commitment to paper, but they certainly aren’t alone. Drs. Dieter Becker and Mark Goulding explored Merck KGaA’s implementation of Signals Notebook, with the objective of improving the recording & storage, sharing, searchability and security of R&D data. They described Merck KGaA’s journey from a fragmented landscape of experimental write-up practices to a global rollout of Signals Notebook across R&D in the Electronics Business Sector with more than 2,000 experiments performed since go-live. Dieter and Mark detailed the hybrid Agile methodology and regular workshops with Revvity Signals that were used to address any gaps during implementation.

Givaudan’s Andreas Muheim also shared an implementation journey for Signals Notebook. Among the key benefits of their implementation was support for 300 users across a wide range of applications – including chemistry, formulation, fermentation, molecular biology, enzyme transformation, processes, sensory science and more. The ELN allowed them to automate data extraction and simplify configuration.

On today’s Technical Track 4, the Head of Data Review and Operational Insights at Bayer, Holger Schimanski, discussed the experience of creating new visualization types using the Spotfire® Mods API. Spotfire® Mods gave Bayer a framework for adding Sankey and Kanban Board visualizations which were needed for reviewing clinical data but were not a part of the standard offering of Spotfire® charts.

Dr. Rebecca Carazza, Head of Information Systems at Nimbus Therapeutics, looked beyond standard capabilities to a combination of data functions and custom web services designed to enhance the data provided to their scientists. Data functions are used to clean and apply data types to complex data so end-users can focus on analysis. Nimbus’ Lead Discovery web services framework is used to add a chemical fingerprinting methodology most relevant to their chemistry. Rebecca presented some of the use cases for this approach to improving data handling and the end-user experience.
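As a generic illustration of what a chemical fingerprinting step involves (this is not Nimbus’ proprietary methodology or their web services code), a Morgan fingerprint comparison can be sketched with the open-source RDKit toolkit:

```python
# Generic illustration of chemical fingerprinting (not Nimbus' actual method):
# Morgan fingerprints and Tanimoto similarity computed with RDKit.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

aspirin = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")
salicylic_acid = Chem.MolFromSmiles("O=C(O)c1ccccc1O")

fp1 = AllChem.GetMorganFingerprintAsBitVect(aspirin, 2, nBits=2048)
fp2 = AllChem.GetMorganFingerprintAsBitVect(salicylic_acid, 2, nBits=2048)

# Tanimoto similarity between the two bit vectors (1.0 = identical fingerprints)
print(f"Tanimoto similarity: {DataStructs.TanimotoSimilarity(fp1, fp2):.2f}")
```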

Our final industry talk came courtesy of Johnson & Johnson’s Pieter Pluymers, Manager of Clinical Insights, who discussed clinical studies in iDARTs. The study files consisted of large numbers of subjects and massive volumes of data, which exceeded the size limits of the database underlying the Spotfire® Library. Pieter shared how they converted the raw data into Spotfire® Binary Data Format (SBDF) files, which Spotfire® can load quickly.
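To give a flavor of the approach, here is a rough sketch of pre-converting tabular data to SBDF in Python. It assumes the publicly available spotfire Python package and its sbdf helpers, plus hypothetical file names; the exact helper names may differ by version, and this is an illustration of the idea rather than J&J’s actual iDARTs pipeline.

```python
# Hedged sketch of pre-converting raw clinical data to SBDF so Spotfire can
# load it quickly. Assumes the open-source `spotfire` Python package and its
# sbdf helpers (pip install spotfire); helper names may differ by version.
# This illustrates the idea only, not J&J's actual pipeline.
import pandas as pd
from spotfire import sbdf

raw = pd.read_csv("study_subjects_raw.csv")          # hypothetical raw extract
clean = raw.dropna(subset=["SUBJID"])                 # minimal clean-up step

sbdf.export_data(clean, "study_subjects.sbdf")        # write Spotfire Binary Data Format
roundtrip = sbdf.import_data("study_subjects.sbdf")   # verify it loads back
print(roundtrip.shape)
```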

What’s New & What’s Next for Spotfire®?
Spotfire®’s Arnaud Varin pulled back the curtain to reveal some of what is in development at Spotfire®, a premier partner of Revvity Signals in scientific R&D. The sneak peek included an expansion of Spotfire® Mods to include launching actions in other systems, integrated custom workflows, new visualization mods and AI-powered recommendations for data wrangling, data cleaning and visual analytics.

Technology Innovation Spotlights
Across Day Two of Nexus 2021 were four Innovation Spotlights highlighting real-world applications of key tools on both our Research and Technical tracks. Topics ranged from molecular biology capabilities and applications for machine learning (ML) in support of formulation development, to instrument integration and the recent launch of cloud-native ChemOffice+.

Thank You for Joining Us at Nexus 2021!
Thank you to all of our customers, partners and participants for making Nexus 2021 a success!

If you’ve registered but missed a session – don’t worry! All of this exciting content will be available on-demand through November 30, 2021. 

Nexus 2021 Day One Recap: Cutting-Edge Solutions Redefining Research & Clinical Informatics


Nexus 2021 – Revvity Signals’ Virtual User Conference – kicked off today with an exciting schedule of keynotes, technology innovation spotlights, and industry case studies from companies including Merck, Janssen, Roche Diagnostics, Bayer, Gilead Sciences, Bristol Myers Squibb, and more.

If you weren’t able to attend today’s session, you can still register and join us on Day Two. Plus, you’ll get on-demand access to all the sessions following the event, through November 30th, 2021.

Wondering why we named our Revvity Signals virtual user conference "NEXUS"?
According to the Oxford Dictionary, a Nexus is ‘a connection or series of connections linking two or more things.’ It can also be a ‘central or focal point.’

For us, Nexus represents our connection with our community – here within Revvity Signals and among our customers, partners, and users. Nexus 2021 is the perfect shorthand for a place to share best practices, discuss the latest trends, and review both research and clinical informatics developments. 

Let’s dive into the Key Takeaways from Day One:

Our Morning Keynote Speakers
Kevin Willoe, General Manager and Vice President, and David Gosalvez, Executive Director of Science & Technology at Revvity Signals, kicked off the event with their keynote: Leveraging the Power of the Revvity Signals Platform Across Research, Development, and Clinical. They outlined how the Signals Platform is both an informatics solution delivering advanced end-to-end scientific data management and workflows across R&D and a robust clinical solution.

Industry Takeaways from the Research & Clinical Tracks
Today’s informatics landscape makes unifying diverse R&D data analytics a challenge for IT, operations, and users. Gottfried Schroeder from Merck Research Labs explored the in-depth testing of a new data analytics workflow system, Signals Screening (now known as Signals VitroVivo). He showed us how Merck & Co accomplished universal data capture & analysis in a single workflow – with the ability to personalize graphical interfaces for individual users. Using Spotfire®, they can simultaneously fit all data with trellis functions to make comparisons, which was especially important for multi-parametric assays.
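As a conceptual analogue of that kind of per-panel curve fitting (not the Merck workflow or Spotfire’s internals), here is a small sketch that fits a four-parameter logistic dose-response model to each assay group with SciPy:

```python
# Conceptual analogue only (not the Merck workflow or Spotfire internals):
# fit a four-parameter logistic dose-response curve per assay group, the kind
# of per-panel fit a trellised visualization performs.
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    # Standard 4-parameter logistic (Hill) model
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** hill)

# Hypothetical multi-parametric assay data: response vs. dose per assay
data = pd.DataFrame({
    "assay": ["A"] * 6 + ["B"] * 6,
    "dose":  [0.01, 0.1, 1, 10, 100, 1000] * 2,
    "resp":  [2, 5, 20, 60, 95, 99, 1, 3, 10, 40, 80, 97],
})

for assay, grp in data.groupby("assay"):
    params, _ = curve_fit(four_pl, grp["dose"].to_numpy(dtype=float),
                          grp["resp"].to_numpy(dtype=float),
                          p0=[0, 100, 1.0, 1.0], maxfev=10000)
    print(f"Assay {assay}: EC50 approx. {params[2]:.2f}")
```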

A key theme this year is the digital transformation in the laboratory and how that has improved productivity and transformed scientific workflows. In a talk on Signals Notebook Implementation at Janssen Vaccines, Janssen consultant Sjaak Peelen shared the story of the migration from paper lab notebooks to the Signals cloud-based ELN. He detailed the planning and implementation for the migration to the Electronic Lab Notebook and discussed some of the challenges they faced and how they were overcome. Sjaak also reported on a 5-month survey of internal users, who shared their favorite new capabilities: searchable data, templates, the ability to work & cosign from home, and the ability to link data.

Frank Shirl-Birk spoke about the implementation approach of the Roche Diagnostics R&D team when they rolled out a validated, cloud-based ELN in the GxP environment for their global R&D team of 1,000+ users. For Roche, the objective was to replace manual experiment documentation which was previously “pseudo digital,” ensuring all teams were on a validated system using global documentation standards. Frank reported that one of the keys to a successful implementation process was to shift away from a Waterfall project management methodology to increment-based (Scrum) implementation.

Jeremy Wilmot, a Crop Protection Discovery and Development Scientist at Corteva Agriscience, presented on the cross-company collaboration between Corteva Agriscience and Revvity Signals to build the Cheminformatics Workbench based on Signals Lead Discovery (now known as Signals Inventa). Jeremy explored the history of this exciting relationship, discussing the incorporation of molecular design into Signals Lead Discovery. He explained the future impact the project could have on crop protection discovery efforts, allowing users to identify interesting leads in Signals Lead Discovery, leveraging the extremely fast, multi-structure, multiparameter global search capabilities.

Spotfire®-Powered Clinical Trial Analytics
Vishakha Mujoo from Gilead Sciences discussed data challenges, speaking of the difficulties posed by huge unstructured data sets. Vishakha explained how the use of automated natural language processing (NLP) can be leveraged to improve adverse event data collection without compromising safety.

Prem Narasimhan of Bristol Myers Squibb’s Translational Bioinformatics Data Science group spoke about the need for better access to safety, efficacy, biomarker, and response data for scientific clinical review. Prem demonstrated a dashboard featuring an integrated 360-degree view of the data in both aggregated and individual levels, to help reviewers make informed decisions across the continuum of the study cycle. The results: Spotfire® has accelerated their review process by 70% and made it easy to gain insight into the data.

J&J’s Manager of Clinical Insights, Pieter Pluymers, discussed how the clinical departments at J&J are using Spotfire®, and demonstrated the new signal detection pages utilized by the Company’s Oncology Therapeutic Area.

A duo from Bayer were up next. Pieter Stokman & Jonas Mlynek talked about the Data Driven Site Risk Leveling Indicators for RBQM and explored risk-based monitoring, source data verification (SDV) and Source Data Review (SDR). These are practices which have immense value for some sites – but choosing which ones can be challenging. They presented an ongoing site risk leveling (OSRL) process and dashboard, taking advantage of various Spotfire® capabilities and a clinical data warehouse.

Technology Innovation Spotlights
Throughout the day, we also heard from four great presenters discussing real-world applications of key tools across the research and clinical tracks. They shared exciting, groundbreaking capabilities in Spotfire® for Clinical Data Review, Signals Inventory, Signals VitroVivo and more.

The Parting Keynote from Day One…
Day One closed out with our second keynote, Analytics and Data Science in Action with Spotfire®, from Michael O’Connell, Chief Analytics Officer at Spotfire®. He discussed how COVID-19 drove businesses to adopt the use of data and analytics across business functions. The pandemic coincided with the rise of low-code assembly and the convergence of data management, BI, and predictive analytics – fueling the embrace of data tools.

Michael shared some fascinating real-world use cases of analytics and data science in action using Spotfire® Mods. His examples included applications of process control, anomaly detection and pattern classification along with digital twins and AI in the life sciences. [Spotfire® Mods are lightweight add-ins to Spotfire® software and include Visualization Mods with new visualizations, and Data Function Mods with new data functions for data prep, feature engineering and machine learning.]

We Can’t Wait for Tomorrow’s Sessions!
There’s much more to come on Day Two, including presentations from Merck KGaA, Birla Carbon, Nimbus Therapeutics, Bayer and Johnson & Johnson. We’ll be back with a wrap-up on tomorrow’s events here on the blog, and if you’ve registered but missed a session – don’t worry! You can access all of the sessions on-demand following the event through November 30th 2021.

The Importance of FAIR Data and Processes

The R&D industry gets excited by certain terms and acronyms, and organizations and people tend to hype them. FAIR (Findable, Accessible, Interoperable and Reusable) is one of those acronyms. The hype is valid, but it needs follow-through. Another problem is that only half of the picture is being discussed, as many leave processes out of the FAIR conversation. After all, the processes produce the data.

UnFAIR data and processes have reached a breaking point in the sciences for a multitude of reasons. It is extremely important to point these out, because this is not just a technology problem; it goes much deeper than that. The underlying root cause of poor data environments and lower data integrity is a cultural problem! What I am about to say will most likely ruffle some feathers: the critical problems in science today are arrogance, ignorance, and financial and peer pressures. This, coupled with one of the most complex of industries, NME/drug and therapy discovery, has led to an inability to drive the approaches that would have produced FAIR data and process environments. A true transformation (everyone must change) is needed, starting in academia and learning institutions and ending in data-driven R&D organizations. First, a reteaching and relearning of data and processes as assets; then, taking the time to make sure that data is captured, curated, managed, and reused as model-quality data whenever and wherever possible. This takes strategy and agreement within an organization, and it is a change management program, like most journeys in these organizations. It also means sacrifice, commitment, and strong leadership.

So, when we said it’s not a technology problem, it kind of is as well. The in-silico technology approaches are not reaching their greatest potential because unFAIR data prevents these methods from being used! We need this FAIR transformation to happen now, and it will take everyone’s concerted effort.

This is not an unachievable goal as other industries like Telecom, Entertainment, W3C (World Wide Web Consortium), Banking, and Insurance have driven success in their industries by adopting data and process standards.

So, we have just touched on three of the Four Pillars: Culture, Data, and Processes. Now let us talk about some technology that will help enable the change!

Have you spoken with a lab scientist lately? They are usually terribly busy and must coordinate their time carefully, and in many cases they are stressed in their role. The LAST thing you want to do is ask them to do more, or to work with scientific software that is not intuitive and simply does not augment their work. One main tool for every scientist is their notebook: what they are going to do, what they did, how they did it, what results they got, and finally their observations and conclusions. This must be the foundation for a FAIR data and process environment.

In the dynamic R&D world, Electronic Lab Notebooks (ELNs) exist to capture the scientific method. First-generation and earlier renditions of the ELN were focused on IP capture, which may have missed the mark on usability and end-user enablement, but things had to start somewhere!

ELNs are applicable to every flavor of R&D company, yet there are more excuses than facts when it comes to (not) deploying one. Academia, startups and small organizations, individual scientific domains, and finally large R&D organizations have all had their share of wins and losses. Academics can get an ELN for free; startups have a lot to lose if they are disorganized, or even appear disorganized to their investors or collaborators; and larger organizations claim, for a multitude of reasons, that parts of their research organization do not need an ELN.

It’s now 2021 and the new ELNs are advanced, mostly cloud-enabled, and driving that next-generation experience. Data and process environments are critical for an ELN to be able to drive FAIR principles. ELNs are also a perfect environment for capturing your scientific business processes so that you can execute your experiments from your ELN! This is not a new concept; it goes back years to Laboratory Execution Systems and to a solution built for a top energy provider, but now we have technology that can capture the processes and version them! Why is this critical? Because companies that are trying to enhance, optimize, and harmonize their processes do business process mapping in other tools, when in fact an ELN could be that repository and become a “functional” or “executable” business process map!
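To make the idea concrete, here is a minimal, purely conceptual sketch (not Signals Notebook’s implementation) of a versioned, executable protocol captured as data, so that each experiment records exactly which version it ran:

```python
# Conceptual sketch (not Signals Notebook's implementation): an executable,
# versioned process captured as data, so each experiment records exactly
# which version of the protocol it ran.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ProtocolVersion:
    name: str
    version: int
    steps: tuple  # ordered, executable step descriptions

@dataclass
class Experiment:
    protocol: ProtocolVersion
    results: dict = field(default_factory=dict)

v1 = ProtocolVersion("Cell lysis", 1, ("Harvest cells", "Add lysis buffer", "Spin 10 min"))
v2 = ProtocolVersion("Cell lysis", 2, ("Harvest cells", "Add lysis buffer", "Sonicate", "Spin 10 min"))

exp = Experiment(protocol=v2, results={"yield_mg": 1.8})
print(f"Ran '{exp.protocol.name}' v{exp.protocol.version} with {len(exp.protocol.steps)} steps")
```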

Now your ELN is not only capturing your contextualized data, it has also captured the executable processes, and the two together give you the complete picture. Why is this important? Bench scientists now have FAIR data and processes! They now have tech and knowledge transfer, the ability to mine all types of data and integrate them with other data, and the data needed for your in silico-first approaches. This means they could drive 40% efficiency gains in areas of your organization, which means getting to market faster and a better quality of life for those who need it!

UnFAIR data is costing your large organization a lot of money and potential. PricewaterhouseCoopers and the European Union have estimated it costs European R&D organizations upwards of €26 billion a year. We have done our own calculations, based on our knowledge and the observed level of data wrangling, and we think the cost is higher: a large biopharma could see hundreds of millions in return on investment (ROI) with a properly deployed and adopted ELN.

The transformation needed to become FAIR compliant in your organization is critical: it reduces data wrangling, improves collaboration, drives in silico-first approaches, and ultimately leads to a much more efficient R&D community. The efficiency gain leads to better products, better medicines, better therapies, and a better quality of life for all, producers and consumers alike.

Learn more about Revvity Signals Research Suite and how it helps a range of ELN customers from small startups to large global biopharmas.

Faster insights and better science in the search for desperately needed new therapies

During the last few years we’ve seen the increasing importance of biologics as therapeutic agents. According to an article, “The Ever Increasing Attraction of Biologics” published in Chemistry World, “Biologics are one of the fastest growing classes of therapeutic compounds, rapidly outpacing the growth of small-molecule drugs. By 2020, analysts are expecting biologics to make up over a quarter of the entire pharmaceutical market. This drive towards biologics is due to their ability to reach targets considered ‘undruggable’ using small-molecule therapies, and because of the benefits, or rather the reduced side-effects, offered to patients through better-targeted therapies.”

With this increasingly urgent drive to discover and develop novel biotherapeutics in areas such as oncology, it is crucial that researchers are equipped with the best possible tools to capture, manage and exploit all the available data. Traditionally, distinct software applications have been used to support chemical SAR and to analyze biological sequence-based SAR (bioSAR). With the growing adoption of multidisciplinary drug discovery teams, a unified chemistry/bio-sequence search and display application is critical.

In this post we drill into these requirements in more detail and discuss how an ideal bioSAR tool should support faster insights and better science in the search for desperately needed new therapies.

Researchers are struggling with a data deluge and need effective tools to locate, extract, sift and filter relevant data for further detailed visualization and analysis. With biologics, these applications will need to understand and manage bio-sequences. An immediate requirement is sequence searching: using a standard tool such as BLAST to search across internal and external sequence collections, collecting and importing the appropriate hits in a standard format, and linking them to other pertinent properties (bioactivity, toxicity, physicochemical, DMPK, production, etc.).
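As a hedged sketch of what such a sequence search step might look like outside any particular product, the NCBI BLAST+ command-line tools can be driven from Python and the hits collected in a standard tabular format; the database and file names below are hypothetical.

```python
# Hedged sketch: running a protein BLAST search against an in-house sequence
# collection and collecting the hits in standard tabular form. Assumes the
# NCBI BLAST+ command-line tools are installed; the database and query file
# names are hypothetical.
import subprocess
import pandas as pd

cols = ["query", "subject", "pct_identity", "length", "mismatches",
        "gap_opens", "q_start", "q_end", "s_start", "s_end", "evalue", "bitscore"]

subprocess.run(
    ["blastp", "-query", "candidate_antibody.fasta",
     "-db", "internal_sequences",          # pre-built in-house BLAST database
     "-outfmt", "6", "-evalue", "1e-5",
     "-out", "hits.tsv"],
    check=True,
)

hits = pd.read_csv("hits.tsv", sep="\t", names=cols)
print(hits.sort_values("bitscore", ascending=False).head())
```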

With a tractable data set on hand, researchers will want to explore sequences to try to discern particular motifs or sequence differences that are correlated with bioactivity or desired physicochemical or DMPK profiles, and thus potentially amenable to further manipulation and enhancement.

The sequences must be aligned, for example by CLUSTAL Omega, and visualizations should present sequences so that differences are immediately highlighted and monomer substitutions can be explored for potential links to bio-therapeutic activity. Sequence LOGO plots for investigating the distribution of monomers in a set of sequences, and annotations for highlighting and sharing areas of interest, will also help researchers get to insights more quickly.
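A minimal sketch of that alignment step, assuming the Clustal Omega binary and Biopython are installed and using a hypothetical FASTA file, might look like this:

```python
# Hedged sketch: aligning a set of sequences with Clustal Omega and flagging
# positions that differ from a reference. Assumes the clustalo binary is
# installed; the FASTA file name is hypothetical.
import subprocess
from Bio import AlignIO

subprocess.run(
    ["clustalo", "-i", "variant_candidates.fasta",
     "-o", "aligned.fasta", "--force", "--outfmt=fasta"],
    check=True,
)

alignment = AlignIO.read("aligned.fasta", "fasta")
reference = alignment[0]  # treat the first sequence as the reference

for record in alignment[1:]:
    diffs = [i for i, (a, b) in enumerate(zip(reference.seq, record.seq)) if a != b]
    print(f"{record.id}: {len(diffs)} positions differ from {reference.id}")
```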

If scientists want a deeper dive into the underlying structure of a sequence or a region, immediate access to a detailed and interactive 3D rendering of the biomolecule’s structure can provide a different lens through which to understand how different monomer substitutions may impact protein folding or active-site binding, and thus activity.

There may also be cases where required specialized analysis or visualization capabilities are only available in a separate in-house, third-party, or open-source application, and the provision of an extensible Web Services framework will enable these to be quickly linked into an enhanced analysis pipeline that can then be shared with colleagues and collaborators.

A bioSAR system providing the capabilities discussed above, equipped with an intuitive and unified user interface catering to novice and power users alike, will enable researchers to derive incisive insights faster and make better-informed scientific decisions in the search for novel biotherapeutic agents targeting some of the world’s most pressing unmet clinical needs.

With our Lead Discovery Premium solution, scientists can analyze and visualize chemical structure & biological sequence data, compare, score, and segment leads based on data-driven multi-parametric optimization. Whether you are a scientist working in drug discovery or materials science, you can discern and understand the trends and outliers in your data to ensure a successful candidate selection and promotion strategy.

Lead Discovery Premium combines the analytics power of Spotfire® with the chemical smarts Revvity Signals is known for, then adds powerful biological sequence intelligence to create the premier platform for scientific visualization and analysis. The guided workflows empower you with the ability to find and assemble any data you want to answer any scientific question – from drug discovery to materials science experiments – in minutes rather than days, independent of your IT department. Read more about our Lead Discovery Premium here.

To learn more about Lead Discovery Premium, and how we support a range of small molecule R&D and large molecule R&D customers, read more here. You can learn more about the extensive list of differentiating capabilities in our Lead Discovery data sheet with reference videos below:

  • A Sequence Analysis window that enables motif discovery and activity assessment within an integrated workflow.
  • An open and extensible web services framework that allows BLAST searching and CLUSTAL alignment out of the box.
  • The ability to view sequence information directly in line with activity data, including the ability to view differences relative to a reference sequence and correct handling of gapped annotations.
  • Elegant support for importing various biological sequence file formats.

Digital Transformation to speed the search for new materials

Traditional approaches for making, testing, and deciding on new materials need to be replaced by a holistic digital solution

Materials scientists today can engineer complex nanostructures and model topologically intricate architectures with relative ease. But unlike in other fields, they can’t seem to shed antiquated workflows and outmoded data storage and analysis tools.

In a new white paper, researchers in the cheminformatics group at Revvity Signals argue that this paradox of progress is quite pronounced on the applications side. When developing new materials for specific applications — be they biodegradable plastics, hydrogels, battery electrolytes or perfumes — materials scientists routinely find themselves sorting through tab after tab of data on old-school spreadsheets.

To remedy this all-too-familiar situation, the paper puts forward a new vision for how materials scientists might work. The authors detail an end-to-end approach to materials development that combines the capability and speed of an online search engine with the flexibility and ease-of-use of apps on a mobile phone. They also explain how this vision backs the company’s new integrated informatics platform.

Digital transformation has become a bit of a buzzword. But putting in place good modern systems really does matter — you can do better science and make better decisions, which will translate into a commercial advantage.

Make, Test, Decide

The materials development process breaks into three broad categories: Make, Test, and Decide. In each one, new digital technologies can increase efficiency, remove bias and support reproducibility.

In the Make stage, researchers synthesize new materials for testing, and a good recording system is vital. Time-tested paper laboratory notebooks served that purpose for centuries. Electronic lab notebooks (ELNs) are an improvement and offer simplified data recording, but it can be difficult to access data from them later. That’s because first-generation e-notebooks save data in databases, which were developed to minimize storage space. In contrast, tagging technology, which allows the software and the user to add an unlimited number of descriptive tags to data, prioritizes data accessibility over storage space.
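As a toy illustration of the tagging idea (conceptual only, not the product’s data model), records can carry any number of descriptive tags and be retrieved by tag rather than by where they live in a database schema:

```python
# Minimal sketch of the tagging idea (conceptual, not the product's data model):
# any number of descriptive tags can be attached to a record, and records are
# retrieved by tag rather than by database schema location.
from collections import defaultdict

tag_index = defaultdict(set)   # tag -> set of record ids
records = {}

def save(record_id, data, tags):
    records[record_id] = data
    for tag in tags:
        tag_index[tag].add(record_id)

def find(*tags):
    ids = set.intersection(*(tag_index[t] for t in tags))
    return [records[i] for i in ids]

save("EXP-001", {"polymer": "PLA blend", "Tg_C": 58}, {"biodegradable", "thermal", "2021-Q3"})
save("EXP-002", {"polymer": "PHB film", "Tg_C": 4}, {"biodegradable", "mechanical"})

print(find("biodegradable", "thermal"))   # -> the PLA blend record
```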

The testing stage involves measuring various properties of a material, which means that researchers need an objective way to select the optimal set of testing parameters. Traditionally, this selection of testing parameters has been the domain of human experts and their intuition, but data-driven approaches can eliminate human bias and minimize the risk of selecting sub-optimal parameters.

When it comes to processing the experimental data during the testing stage, materials scientists gravitate toward two approaches: homegrown spreadsheets, which offer flexibility but are difficult for non-experts to create, and bespoke, commercial tools, which are easy to use but are difficult to modify since they are designed to do a single task. Ideally, the authors say, material scientists shouldn’t have to choose between flexibility and ease of use. Rather, in an environment that resembles apps on a smart phone, they can select the appropriate module for the data-processing task in hand.

You have this tension between a generic tool like Excel and bespoke tools. One solution to that is an environment containing a suite of applications.

Finally, in the all-important Decide phase, researchers select the most promising candidate materials for development. The authors argue that to improve workflows, researchers should look to harness the same indexing technology that Google and Amazon use to collect and quickly access data. It offers both flexibility and speed — flexibility because an index can incorporate data from multiple sources and formats, and speed because the algorithms that generate selections based on indexed data are extremely efficient. This contrasts with conventional approaches that often involve picking a material by laboriously scanning an Excel sheet of data.
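A toy sketch shows why an index offers both flexibility and speed: property values from multiple sources are folded into one inverted index, and candidate materials are looked up without scanning every row.

```python
# Toy illustration of the indexing idea: property values from multiple sources
# are folded into one inverted index so candidate materials can be looked up
# without scanning every spreadsheet row.
from collections import defaultdict

index = defaultdict(set)   # (property, value) -> material ids

def add(material_id, properties):
    for prop, value in properties.items():
        index[(prop, value)].add(material_id)

# Records could come from ELN entries, instrument exports, or spreadsheets.
add("M-17", {"class": "hydrogel", "biocompatible": True})
add("M-42", {"class": "electrolyte", "stable_above_60C": True})
add("M-58", {"class": "hydrogel", "biocompatible": False})

candidates = index[("class", "hydrogel")] & index[("biocompatible", True)]
print(candidates)   # -> {'M-17'}
```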

Applying a comprehensive informatics platform to the Make–Test–Decide process promises to both greatly simplify the development of new materials and cut the costs involved. Investing in technology for conducting materials research can have a huge impact.

For our materials research customers, our integrated informatics solutions can foster more efficient and successful materials development. What does this mean? Companies can quickly innovate, accelerate product development, and speed to market, by leveraging a powerful enterprise product suite, including ChemDraw, Signals Lead Discovery, Signals Notebook, and Spotfire®.

We enable our materials science customers to revolutionize the research and development of the materials they need to make, as well as energy, chemicals, and food. We help them innovate and accelerate their R&D cycle so they can bring products to market faster and increase scientific insights and breakthrough innovations.

To learn more about how an integrated informatics platform can foster more efficient and successful materials development, read our new white paper, Digital Transformation Journeys for Material Science. Materials science is one area that we support; we offer solutions for industrial segments spanning specialty chemicals, agrochemicals, energy & petrochemicals, flavors & fragrances, food & beverage, and electronics.

NEXUS 2021: 9th Annual Conference Delivers Transformative New Insights

Usually, big celebrations are saved for a 10th anniversary, but the scale and depth of insights being presented at NEXUS 2021 make this 9-year milestone absolutely worth celebrating.

What is NEXUS 2021?

NEXUS 2021 is a 2-day FREE virtual conference, featuring over 25 presentations from thought-leaders across the globe – all focused on solutions empowering scientists and researchers to gain critical insights from data analytics, accelerating informed decisions.

Registration is now closed.

Following the huge success of NEXUS 2020, NEXUS 2021 will be virtual. Although we sincerely miss the regional and in-person NEXUS format, 1,500 professionals logged in last year. This success solidified the direction for this year’s event:

  • Industry-minded keynote presentations
  • Breakout sessions focusing on Research, Digital transformation, Clinical trial analytics, and cutting-edge technology
  • Innovation Spotlight Sessions
  • Expo Hall where you can attend the virtual booths and engage with various experts

Great Minds Synch Alike

Nexus means “a central or focal point; connection or series of connections linking two or more things.” NEXUS 2021 is exactly that—connecting the community by providing a central forum to share best practices with the latest trends and developments in scientific research.

Hear from industry-leading experts:

  • Pharma/Biotech Leaders: Bayer, Bristol Myers Squibb, Gilead, GlaxoSmithKline, Johnson & Johnson, Merck & Co, Nimbus Therapeutics, and Roche Diagnostics
  • Chemical/Agrichemical/Fragrance Leaders: Birla Carbon, Corteva Agriscience, Givaudan, and Merck KGaA
  • Partners: Spotfire®, Accencio, and Veeva

Connect to Collaborate

NEXUS 2021 Virtual Conference is FREE to attend— Live on October 6 & 7

Data Collection Geared Toward Translational Medicine

Blog: Clinical and Translational

Translational medicine is lab- and data-driven, aspiring to a bench-to-bedside approach that efficiently develops therapeutic strategies. It identifies biomarkers that can then be used to inform patients’ molecular profiles and disease etiologies. Biomarkers can include blood sugar levels that identify patients with diabetes, or certain gene mutations that can signal a patient’s risk of developing cancer.

Translational medicine’s use of molecular profiles has been beneficial in creating drugs that are specialized to target specific pathways based on patient diagnoses. Compared with one-size-fits-all drug production, this approach creates fewer side effects with better results. Translational medicine can be an effective method when done well, with financial benefits on top of health achievements. That said, it’s a data-heavy activity. Here’s an overview of working with data in translational medicine:

Data Collection

From the lab to treatment, there will be ample information collected throughout the process. For successful drug development, all this data must be sorted and analyzed; to make this more effective, data should be responsibly collected from the start with a standardized and efficient practice.

Guidelines for data collection include:
• Collecting enough samples that you can establish statistical significance
• Using the same clinical samples across the entire population
• Using data models with datasets from various sources
• Making sure data is clean and well-curated enough for cross-study analysis

Not only is quality data useful for developing effective drugs, it can also be used retroactively to determine why some drugs weren’t working. For example, the compound behind a drug now used to treat non-small cell lung cancer was originally an ineffective treatment for immune responses in autoimmune diseases. The drug, Keytruda, is now useful for an entirely different purpose than intended.

Data Simplification

The data analysis that follows large-scale genome projects can end up fragmented along the way. The creation of many analysis pipelines and informational silos can make it difficult for scientists and clinicians to collaborate.

To be effective, this all needs to be sorted and broken down. A good initial step is making sure that the data is accessible across the board, so that even non-experts or cross-functional teams can look at the data, analyze it, and apply their biological understanding. Implementing strategies to streamline data access can save time.

Scalable data management and accessible data tools are vital for translational medicine to succeed, and with that in use, patients can expect to receive valuable, effective drugs. 
