Tessella Support Services plc

The Five Factors of Smart Laboratory Success

Courtesy of Tessella Support Services plc

The Five Factors

Research organizations are under pressure to cut costs while maintaining or increasing throughput. To improve their operations, many laboratories are considering investments in data management and laboratory automation systems such as ELNs, LIMS or robotics/HTS.

Whatever change is introduced, there are many pitfalls on the way to delivery, roll-out and successful adoption.

In our experience the key to successful completion of these change programmes is understanding and mitigating the risks of non-alignment between the supporting IT on the one hand, and the people, processes and business management on the other. There is no magic formula that will determine the most suitable way to manage these risks in all circumstances. Every project is different, especially in the research environment where each organization has a different level of IT maturity, and each laboratory has its own priorities and problems.

However, we have found that there are common factors to consider that can help you plot the best way forward. These are described in detail below, and illustrated with examples from our wide project experience.

User Adoption

One of the key risks when introducing any new system is that it will be poorly adopted. End users must see the benefits of using the new system: if it doesn’t save time or make their jobs easier, they will not be enthusiastic adopters. If an implementation requires a change in ways of working that is unpopular with some users, and there are no incentives to change, IT will typically get the blame for the failed implementation. In our view, if a project team fails to pin down with the sponsors, early on, what changes and standards will really be mandated, and what investment can be offered in usability ‘carrots’, then they deserve that blame.

We advise that you work out upfront what change must happen, and make sure the users understand and agree with it, or at least accept it; the implementation project will then be welcomed by users as a way to help them operate more easily within their new world. Think about ‘personal’ and ‘team’ wins: what will remove the nuisance factors and annoyances of current ways of working that never make it onto the senior management agenda? Make sure that your requirements capture these needs, and that your (preferably agile) implementation process takes them on and satisfies them. Ensure that you have mechanisms in place, such as a ‘user group’, that can help prioritise between the more discretionary investments, and can help sweeten the pill of the more strategic changes from a major implementation project.

Experience: Find your carrots

Biologists in a global pharmaceuticals company were capturing and managing preclinical in vivo data with ad hoc, Excel-based approaches. They were comfortable with this way of working, but it made this critical data hard to find, hard to understand and hard to reuse by the modellers who wanted to introduce better modelling into the drug pipeline.

An innovative information platform was commissioned and deployed to capture, integrate, manage and analyse in vivo experimental data in a much more coherent fashion. Despite the new system, initial attempts at managing this data failed: the biologists generating the data were not getting any benefit, entering the data was just another time-wasting chore, and so the data was not entered.

The key to adoption was finding a benefit for the biologists. This turned out to be a simple report that the system would generate after the in vivo results had been entered, which the biologist could use in their experiment write-up. This extra feature was not expensive, and was the carrot needed to ensure that everyone involved would benefit from the change, and would therefore participate enthusiastically.
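
To make the idea concrete, here is a minimal sketch of such a write-up report, assuming an entirely hypothetical data model and layout (the source does not describe the client’s actual report):

```python
from dataclasses import dataclass

# Hypothetical record of one entered in vivo result.
@dataclass
class InVivoResult:
    subject_id: str
    dose_mg_kg: float
    response: float

def write_up_report(study_id: str, results: list[InVivoResult]) -> str:
    """Render a simple write-up section from results already entered.

    Illustrative only: the value to the biologist is that the report
    falls out of data entry for free, turning a chore into a benefit.
    """
    lines = [f"Study {study_id}: {len(results)} subjects"]
    for r in results:
        lines.append(f"  {r.subject_id}: {r.dose_mg_kg} mg/kg -> response {r.response}")
    mean = sum(r.response for r in results) / len(results)
    lines.append(f"Mean response: {mean:.2f}")
    return "\n".join(lines)

print(write_up_report("S-001", [InVivoResult("M1", 5.0, 1.8),
                                InVivoResult("M2", 5.0, 2.1)]))
```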

The outcome was dramatically improved quality, accessibility and confidence in the in vivo data across the organization. It has also virtually eliminated the need for modellers to manually locate and collate data from preclinical in vivo studies (which used to take up to 50% of their time), delivering significant efficiency savings across bioscience.

Global versus Local

It is often the case that there are benefits from introducing a single software solution and a common way of working across a multi-site organization. However, this is not always so, so look before you leap into committing to a global implementation, and think carefully about what you are trying to achieve with it.

If you are introducing a LIMS, will everything really be consistent and standardised at the detailed level of workflow to which a typical LIMS will penetrate? If you want the ability to move work around flexibly, you must pay attention to cross-site workflow, and perhaps also enable people to work as cross-site teams in a single process. Even if it is just samples and materials that move around, the relevant data types will need to be standardised.

Or is the motivation for a global solution simply to reduce licence and support costs? It might be helpful to accommodate different groups of users by offering the option of a varied implementation of a single preferred package, at a time that suits the users. Rather than buying licences in excess of the immediate need, mitigate the risk of change by negotiating a sliding scale of licence discounts, but commit only to implementation on the sites with the greatest benefit and most urgent need.

Experience: The best of both

An ELN is introduced not only to give staff an efficient way to write up their experiments, but also to allow sharing of results across an organization. Modern ELNs offer great flexibility in terms of templates, workflows and plug-ins, and so give the implementation team a real opportunity to find a level of configuration that meets the needs of the end users while striking the right balance between global and local.

We have delivered a number of ELN implementations in a variety of settings where the organization has used the flexibility of the ELN package to good effect. One client implementation adopted a global approach for the chemistry group and a local approach for the biology group. A key success factor in this project was agreeing and writing down the business rules for writing up experiments. These rules defined the high-level scientific workflow, and what would be in the scope of the ELN. The rules were referred to throughout the project to support the software requirements and prevent scope creep.

The ELN for chemists was configured with a workflow including safety and approval for controlled substances, and a detailed template that covered a standard plan-synthesize-analyse-register workflow that was followed by everyone. However, in contrast to this deep and narrow approach, an ELN for biologists in the same organization took a broad and shallow approach. In this case, a simplified workflow and a handful of experiment templates were needed so that the system could accommodate the wide variety of scientific work in biology.
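
As a hedged sketch of this split (the profile, workflow and template names below are illustrative assumptions, not the client’s actual configuration or any vendor’s schema), the deep-and-narrow versus broad-and-shallow contrast can be captured as two configuration profiles for the same ELN package:

```python
# Illustrative ELN configuration profiles for one package; all names
# are hypothetical and stand in for real vendor configuration options.
ELN_PROFILES = {
    "chemistry": {  # global: deep and narrow, one detailed template for all
        "scope": "global",
        "workflow": ["plan", "safety-check", "controlled-substance-approval",
                     "synthesize", "analyse", "register"],
        "templates": ["standard-synthesis"],
    },
    "biology": {  # local: broad and shallow, light templates for varied work
        "scope": "local",
        "workflow": ["plan", "execute", "write-up"],
        "templates": ["assay", "in-vivo-study", "method-development",
                      "general-experiment"],
    },
}
```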

Variety or Routine

In large R&D organizations, expensive equipment or scarce skills lead to specialisation, with some labs offering a standard service to the rest of the business, e.g. NMR spectroscopy. The degree of flexibility needed from a software solution can vary from high to low, depending on where the lab work sits on a continuum from business services laboratories to pure innovation-driven R&D.

For labs that provide a routine service, the focus will be on delivering a reliable service within a shrinking budget, and on meeting KPIs for performance and quality. For research labs where there is less routine work, the challenges are different, and include securing IP, improving data sharing across the organization, and controlling resources and costs. In R&D labs especially, detailed needs will change, so pick solutions that are amenable to that change, and ensure that you can resource the changes needed on the fly.

Suppliers may claim ‘flexibility’ for their product, which often means that they can build anything you want to your specification, but changing it afterwards will be a painful and expensive process. As part of your requirements analysis, work out typical examples of the flexibility you will need, and make sure that suppliers can demonstrate these against detailed use cases. Agree in advance how you will resource and pay for change. Is it as simple as a superuser making a configuration change or updating templates, or will your IT support team need to be involved? Is superuser or IT support time secured within the funding for change?

Experience: The power of variety versus routine in LIMS selection

Exploring variety and routine proved to be a very effective discriminator in a LIMS selection exercise for a global pharmaceutical company. Working with them, we conducted a requirements analysis for a procurement spanning both their small molecule and biologics operations.

The smaller scale and greater variety within biologics led to the choice of a lower-tech LIMS than user management had originally expected, since a smaller supplier could demonstrate that superusers would be able to carry on with configuration and localised implementation in different labs as the system was rolled out.

A larger and better-known system had more features, but it became clear that the ‘configuration’ would have to be done by the supplier, and that reconfiguration after that was likely to be both difficult and expensive, essentially requiring a new, detailed requirements analysis and implementation for each wave of new use.

Experience: Making managing variety a routine activity

Making the management of variety a routine activity proved to be an effective strategy for one of the world’s biggest producers of paints and coatings. They wanted to increase throughput and reduce time-to-market when formulating a new product. Developing a new product formulation involves testing a vast number of possible combinations of basic ingredients.

To achieve this goal, they invested in a robotic system for high-throughput experimentation. However, they found that the breadth and flexibility of the robot interface meant that technicians were taking up to half a day to prepare the system to run each experiment.

Working with them we were able to develop an interface that allows scientists to design their experiments using predefined building blocks, and then supports the lab technicians by automatically preparing the experiment configuration for the robot. This approach of encapsulating the lab robot’s flexibility in software allowed them to double the throughput of the HTE lab.
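
A minimal sketch of this encapsulation idea follows; the building-block and configuration names are entirely hypothetical, since the source does not describe the actual interface. The point is that scientists compose predefined blocks, and the combinatorial expansion into per-run robot instructions happens in software:

```python
from dataclasses import dataclass
from itertools import product

# Hypothetical building block: an ingredient with candidate levels.
@dataclass(frozen=True)
class Ingredient:
    name: str
    levels: tuple[float, ...]  # e.g. candidate concentrations in % w/w

def robot_runs(design: list[Ingredient]) -> list[dict]:
    """Expand a building-block design into per-run robot instructions.

    Illustrative only: the real HTE robot's configuration format is
    not described in the source text.
    """
    return [{ing.name: level for ing, level in zip(design, combo)}
            for combo in product(*(ing.levels for ing in design))]

design = [Ingredient("binder", (10.0, 15.0)),
          Ingredient("pigment", (5.0, 7.5, 10.0))]
print(len(robot_runs(design)))  # 2 x 3 = 6 runs prepared automatically
```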

The Bigger Picture

When introducing a new system, it is always worth taking the time to step back and look at the bigger picture. Think strategically about the other pieces in your application and services jigsaw, to make sure that the update aligns with the architecture of your information and processes, as well as wider business needs.

In a research lab, processes change all the time as scientists develop new assays and experiments that generate results that are differently structured or in new formats. This is often the driver for introducing new systems. However in this moving picture of processes, it is essential to have some fixed points and standards around basic data structures. What do you mean by a batch, sample or lot? How will you in the future be able to compare results between experiments and samples, to track quality trends or to carry out more sophisticated predictive analytics or data mining? Doing this successfully requires hard thought and experience in data modelling, data architecture and analysis of metadata.
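
To make the fixed points concrete, here is a hedged sketch of entity definitions of the kind that need agreeing up front; the field names are illustrative assumptions, not a published standard:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative fixed points for a lab data model; the real definitions
# of "batch", "sample" and "lot" must be agreed across the organization.
@dataclass
class Batch:
    batch_id: str
    material: str
    lot_number: str  # supplier's lot, kept distinct from our batch_id

@dataclass
class Sample:
    sample_id: str
    batch_id: str  # every sample traces back to a batch
    taken_at: datetime

@dataclass
class Result:
    sample_id: str  # results reference a sample, never free text
    assay: str
    value: float
    units: str  # units stored explicitly so results stay comparable
    metadata: dict = field(default_factory=dict)
```

With even this much agreed, results from different experiments and sites can be compared, trended and mined later.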

It is also important to think about application architecture at the conceptual level, in order to avoid messy and expensive collisions between, say, cross-functional workflow designs (e.g. request management) and more complete, integrated functional solutions, as are typical in compound management and high-throughput synthesis or screening. If you have a common services infrastructure, one vital question is what standard services you need to define at the global and cross-functional scale (e.g. substance identification) and what, if any, constraints this places on your choice of vendor or solution that will consume these services.

Experience: Big picture to avoid unexpected dependencies and other surprises

Looking at the big picture was to prove key to avoiding unexpected dependencies and other surprises for one multinational pharmaceutical company. They had initiated several global projects to roll out applications across all R&D sites, as part of a larger drive to harmonize workflows and practices.

The new global systems were replacing local systems in use at individual sites, with functions ranging across much of the research workflow, from molecule synthesis and assay requesting, to compound management, data analysis and an ELN. The high level of integration between the systems, both old and new, and the regional differences in working practices meant that unexpected dependencies and other surprises kept emerging.

Working with the client we analysed the dependencies between the projects. The analysis covered harmonizing local workflows to the new system constraints, and as appropriate, managing the decommissioning of legacy systems or reintegrating them into the new system landscape. The resulting analysis provided support to the global project management function helping them plan and co-ordinate the various system rollouts and overcome issues that had proven difficult in previous plans.
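
As a hedged illustration of this kind of dependency analysis (the project names and dependencies below are invented, not the client’s), Python’s standard graphlib can order rollouts so that no system goes live before its prerequisites:

```python
from graphlib import TopologicalSorter

# Hypothetical rollout dependencies: each system must go live after
# the systems it depends on.
dependencies = {
    "ELN": {"compound-management"},
    "assay-requesting": {"compound-management"},
    "data-analysis": {"ELN", "assay-requesting"},
    "legacy-decommissioning": {"data-analysis"},
}

print(tuple(TopologicalSorter(dependencies).static_order()))
# e.g. ('compound-management', 'ELN', 'assay-requesting',
#       'data-analysis', 'legacy-decommissioning')
```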


Externalization

If all your measurements and analyses are currently in-house, you are probably behind the trend. Already more than 30% of biopharma R&D spending is outside company boundaries, and the trend is for this to increase. Commercial R&D organizations are partnering with academic institutions to access new sources of innovation, and an ever increasing number of companies are outsourcing routine lab activities to contract research organizations (CROs).

To make it easier to work with external collaborators and CROs, companies are looking at opening up their internal systems to the wider world. Approaches taken range from allowing outsiders direct VPN access to an internal application (an ELN, say), to creating a cloud-based solution outside the corporate firewall that serves the needs of both in-house staff and outside partners. As well as the technical challenges, there are other aspects to consider around people and processes: information security, ownership of IP, and relationships between different suppliers are just some of the areas to be aware of.

Any new systems for sample handling and results recording should be future-proofed by being designed to facilitate the transfer of materials and results across organizational boundaries. For sample handling, you will likely need to give collaborators access to your sample IDs and critical context, while protecting your sensitive information from outsiders and viruses. Try to use standard barcoding schemes so that third parties can charge you less for reading in your sample data. Make sure that results systems can accept direct external inputs or, at least, industry-standard formats.
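
A hedged sketch of what boundary-friendly sample data might look like; the field names are assumptions for illustration, and the barcode content simply needs to use a symbology your partners’ readers support (Code 128, for example):

```python
import json

def export_sample(sample_id: str, barcode: str, assay: str, units: str) -> str:
    """Export a sample record for a partner CRO: a scannable ID plus the
    minimal context they need, and nothing commercially sensitive.

    Hypothetical format: agree the actual schema with your partners.
    """
    record = {
        "sample_id": sample_id,
        "barcode": barcode,      # content encoded in a standard symbology
        "requested_assay": assay,
        "result_units": units,   # agree units so results import cleanly
        # deliberately omitted: project codes, structures, internal notes
    }
    return json.dumps(record, indent=2)

print(export_sample("S-2024-0042", "S20240042", "LCMS-purity", "%"))
```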

Experience: Re-engineering as an opportunity to develop a strategic model of externalization

Working in partnership with a client, we developed a platform to monitor clinical trials in real time, reporting on drug safety and efficacy and providing analytics on trial operations. Recently, they have started moving towards a CRO-based business model, and are taking the first steps on the externalization journey. This system is the first to be externalized as part of this business change.

Rather than opting for a point solution for the externalization of this system, the client saw this as an opportunity to evaluate and determine an overall approach to externalization. The sensitivity and confidentiality of the clinical trial data meant that the system posed many of the tough requirements and design questions inherent to externalization, providing a strong basis for the client’s strategic model.

Gathering the requirements and developing the architectural design required solid knowledge of the problem domain. Having developed the clinical trials platform, we were well placed to understand how the CRO would use the client’s internal applications, and the security implications of handling sensitive clinical trial data and adhering to high-level security standards. The technical problem of authenticating users outside the company firewall was solved by using third-party providers for external identity management and single sign-on. The most challenging part of this project was getting all of the client’s disparate IT groups to work together: achieving this goal required strong organizational and communication skills.
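
The source does not name the identity provider or protocol, but a common pattern for third-party single sign-on of this kind is OpenID Connect, where the internal application only validates tokens issued outside the firewall. A minimal sketch using the PyJWT library, with a placeholder issuer URL and audience:

```python
import jwt
from jwt import PyJWKClient

# Placeholder values: in practice these come from the chosen external
# identity provider's configuration.
JWKS_URL = "https://idp.example.com/.well-known/jwks.json"
AUDIENCE = "clinical-trials-platform"

def authenticate(token: str) -> dict:
    """Validate an externally issued OIDC access token.

    The application never handles partner passwords: it only checks
    the signature and claims of tokens from the external provider.
    """
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
    return jwt.decode(token, signing_key.key, algorithms=["RS256"],
                      audience=AUDIENCE)
```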


Conclusion

Research organizations considering investments in smart labs must fully understand and mitigate the risks of non-alignment between the supporting IT on the one hand, and the people, processes and business management on the other.

Our experience is that using criteria such as those outlined in this document provides a strong foundation for success. When these criteria are combined with an overall approach that respects the specifics of an organization’s research environment, its level of IT maturity, and its priorities and problems, together with other project tools we have developed but not outlined here, we can help you find the most suitable way to manage these risks in your circumstances.

Tessella are experienced in delivering value from smart lab investments. Feel free to contact us at smartlabs@tessella.com to make your smart lab initiative a success.
