
How Confluence meets the needs of document users


When designing a controlled document management system, it’s important to take the requirements of everyone who may need to access the documents into account. That’s why mapping users’ needs through each stage of the controlled document lifecycle is a key element in each of our quality management projects.

The list below shows how a document management solution based on Confluence meets the main requirements of any life sciences quality management system:

Ease of document preparation, eg access to a rich text editor and ability to import or attach content from different sources and formats

Confluence has a great, easy-to-use editor built in, which takes advantage of some very powerful macros. For example, it’s easy to embed parts of one document within another, so there’s no need to copy and paste the same definitions into each of your standard operating procedure (SOP) documents. What’s more, a change to the definition in the original location will be immediately reflected in all the documents that include it.

For those who prefer to edit their documents in Microsoft Word, it’s easy to attach Word documents to Confluence pages, as well as media like video.

I cannot avoid mentioning here the powerful templates and blueprints paradigm in Confluence. These predefined structures help to standardise layout, section structure and approval, so everyone can create content like a pro.

Providing feedback on documents

Multiple methods are available in Confluence for providing feedback on documents:
• Inline comments, with the support of long discussion chains, are available both for Confluence content and attached documents.

Figure 1: Inline comments supported by long discussion chains make gathering feedback easy. In this example a phrase that has received comments is highlighted in yellow.

 

Figure 2: When a user clicks on the highlighted phrase the full chain of comments is displayed.

 

  • More generic comments can be added in the comments area available at the end of each page.
    Confluence encourages the sort of collaborative interaction that can yield a goldmine of ideas on how documents could be improved.

Document approval and sign-off

An FDA CFR 21 part 11 compliant approval workflow can be implemented in Confluence using the Comala Workflows plugin.

Ease of locating, retrieving and navigating documents

Three particular aspects of the Confluence platform make it easier to find documents and access the information you need:

  1. Configurable dashboards put the most relevant pages at users’ fingertips. This is a great way to declutter and cut through the noise.
  2. The search mechanism, based on leading search engine Apache Lucene, offers fast, accurate search and retrieval.
  3. Confluence’s space and page hierarchy supports logical document storage and navigation.

For example, we recently set up a dashboard for a client that shows each author all their documents which are currently going through the approval process, including the most recent information about pending and rejected reviews. Each user can also set up a personalised table of contents, which always displays links to the pages most relevant to them.
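The same search mechanism is also available over Confluence’s REST API, which is handy when you want to pull a list of documents into a report rather than browse for them. Below is a minimal Python sketch, assuming a Confluence Server instance; the base URL, credentials and the QMS space key are placeholders you would replace with your own.

```python
import requests

BASE_URL = "https://confluence.example.com"   # placeholder Confluence Server URL
AUTH = ("qa.reader", "app-password")          # placeholder credentials

# CQL: all current pages in the (hypothetical) QMS space whose title mentions CAPA
cql = 'space = "QMS" and type = page and title ~ "CAPA"'

response = requests.get(
    f"{BASE_URL}/rest/api/content/search",
    params={"cql": cql, "limit": 25},
    auth=AUTH,
)
response.raise_for_status()

for page in response.json().get("results", []):
    # Each result carries the page title and a relative link to the web UI
    print(page["title"], BASE_URL + page["_links"]["webui"])
```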

Ability to access documents in various formats and mediums, eg online or hard copy

Documents in a Confluence instance, whatever their format, are accessible through any web browser. Access can be made available on the company intranet or to a wider audience. It’s also easy to export Confluence content in PDF or Word format.

One of the areas where we help companies is embedding their standard document layout and styles into Confluence. This means all documents will conform to the brand style and exported content will be ready to share with third parties.

Document identification, version control and audit trails

Confluence can be extended to comply with controlled document regulations and best practices.

At RadBee we’ve developed our own macros that create automatic audit trails and maintain and display document identification information and electronic approvals.

Access control

Confluence has a powerful authorisation mechanism. Generally, users will not even be aware of the content they don’t have access to.

Depending on the sensitivity of your data, additional layers of security can be built into the infrastructure that hosts the Confluence instance, either on-premise or in the cloud.

Sharing with the public

Using the authorisation mechanisms in Confluence, it’s possible to open up limited areas to the general public through your company’s website.

Theming tools make it possible to ensure the publicly-accessible content conforms with corporate branding and website style.

Configuration control (ensuring that only the current version is in use)

Confluence retains all the historical versions of each document, including intermediate drafts as well as official versions. However, the way we set systems up, only the current version is displayed through the default interface.

Best practice is to move pages in the process of being written or updated into a different space from the one where the official versions appear. This improves efficiency and eliminates confusion by preventing a clutter of not-yet-approved versions getting in the way of the official ones.

As you can see, although Confluence was originally designed to aid software development, with the right configuration and customisation it’s perfect for creating, editing and managing controlled documents. It provides valuable flexibility while driving consistency and promoting continuous improvement.

If you would like help implementing a Confluence-based document management system at your company, please get in touch.

The information provided relates to a customised and configured Confluence Server installation, based on the current version at the time of publishing (version 5.9.4).

 


Adjusting JIRA for FDA CFR 21 part 11 compliance: managing deletion


Regulations dictate that controlled records should be managed in compliance with the FDA CFR 21 part 11 (aka Part 11) guidelines.
Part 11 defines the criteria under which the US Food and Drug Administration ‘considers electronic records, electronic signatures, and handwritten signatures executed to electronic records to be trustworthy, reliable, and generally equivalent to paper records and handwritten signatures executed on paper’. Although released back in 1997, it remains the gold standard for management of software applications, databases and files in the life sciences sector.
JIRA features many important elements to facilitate Part 11 compliance as standard. These include automatic generation of audit trails (covered in section 11.10(e) of Part 11).

Figure 1: JIRA generates a log of all the changes in the lifecycle of each issue

With the right configuration, JIRA can be set up to support a fully compliant operation. However there are some pitfalls that, if not correctly managed, can result in the creation of non-compliance loopholes.

The deletion trap

One of the major pitfalls is the ease with which it is possible to completely and irreversibly delete an issue. Quoting from Atlassian’s JIRA documentation:

When you delete an issue, you actually remove it permanently from JIRA, including all of its comments and attachments. If you have completed the issue, you may want to set it to Resolved or Closed instead of deleting it. If there are sub-tasks in the issue, these sub-tasks are also deleted.

Indeed the ‘Delete’ operation in JIRA is as total as it gets. It erases an issue from the database, leaving no trace, and there is no way to recover the deleted issue or the associated audit trail. Even when using JIRA’s powerful reporting language, JQL, to report on past data (using the ‘WAS’ operator), the deleted issue will not be included.
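To see why this matters for reporting, here is a small sketch using the Python jira client; the server URL, credentials and the CAPA project key are placeholders. A ‘WAS’ query returns issues based on their history, but a deleted issue will simply never appear in the results, because its history no longer exists in the database.

```python
from jira import JIRA  # pip install jira

jira = JIRA(server="https://jira.example.com",          # placeholder JIRA Server URL
            basic_auth=("qa.manager", "app-password"))  # placeholder credentials

# Historical query: every issue in the (hypothetical) CAPA project that has
# ever been in the 'In Progress' status, whatever its status is today.
issues = jira.search_issues('project = CAPA AND status WAS "In Progress"', maxResults=100)

for issue in issues:
    print(issue.key, issue.fields.status.name)

# A deleted issue leaves no row here: its audit trail is gone from the database,
# so no JQL clause, including WAS, can bring it back.
```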

This possibility goes against the essence of an audit trail and leaving this feature in place would contradict Part 11.

Configuration to the rescue

Fortunately, JIRA’s default settings can be changed to avoid the possibility of deleting an issue. In fact, there’s more than one way to manage this, but the most common option is to block the ‘Delete’ operation through the permission settings.
The ability to delete an issue from a specific project is governed by a dedicated permission. You can revoke that permission from all users to avoid the possibility of total deletion. However you will need to decide what should happen when you do actually want to discard an issue.
The most common solution is to follow Atlassian’s own advice and set up a dedicated resolution type, such as ‘Cancelled’. In this scenario, instead of deleting an issue, you would take it through a workflow to a final state, typically ‘Done’, but setting the ‘Resolution’ field to ‘Cancelled’. In this way, the issue can be filtered out from reports, dashboards and your day-to-day work, but will still fully exist in the database.
As a further step, ‘Cancelled’ issues can be moved to a separate project, designed to serve as an electronic repository. They will then be available in case anyone needs to refer back to them, whether for auditing purposes, to learn from past mistakes or to retrieve an issue cancelled in error.
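Assuming a ‘Cancelled’ resolution has been configured as described, a day-to-day filter only needs one extra clause to hide those issues while keeping them in the database. The sketch below uses the Python jira client with placeholder connection details and project key.

```python
from jira import JIRA  # pip install jira

jira = JIRA(server="https://jira.example.com",
            basic_auth=("qa.manager", "app-password"))  # placeholder credentials

# Everything still 'live' in the CAPA project: unresolved issues plus any
# resolved issue whose resolution is not 'Cancelled'.
active = jira.search_issues(
    'project = CAPA AND (resolution IS EMPTY OR resolution != Cancelled) '
    'ORDER BY updated DESC'
)

# The cancelled issues are filtered out of everyday views, but remain fully
# queryable for audits or if one was cancelled in error.
cancelled = jira.search_issues('project = CAPA AND resolution = Cancelled')

print(len(active), "active issues,", len(cancelled), "cancelled issues")
```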
So, in summary, while JIRA comes out of the box with several features that facilitate Part 11 compliance, it needs careful adjustment to avoid potential problems like permanent deletion of issues and their associated records.

If you would appreciate our help to set up your JIRA instance to facilitate compliant process management, please get in touch.

Compliance, JIRA Core and the Atlassian Cloud


When companies first contact us to discuss how we can help them, many ask whether it’s possible to implement their electronic quality management system (eQMS) process in JIRA Core in the Atlassian Cloud.

In this article I answer that question. For simplicity I’ll focus here on JIRA Core, but many of the same considerations will also apply to JIRA Service Desk as well as Confluence and other Atlassian tools.

In a few words

It depends on several factors, but in most cases you will need to implement your compliance-related processes on a JIRA Core Server instance.

The summary

There are four areas we need to consider:

  1. Compliance with FDA and other regulatory requirements

If you use JIRA Core to coordinate the quality management process but export the data before having it signed off and also rely on external evidence (such as signed and scanned printouts from JIRA) to establish compliance, then the Atlassian Cloud could be a suitable platform.

However if you plan to use JIRA Core data as evidence for compliance, eg to demonstrate that CAPAs have been opened and managed to completion or that required training has been carried out, the Atlassian Cloud is not the platform for you. In that case your best option will be to use a JIRA Core Server instance.

  2. Functionality

JIRA Core on the Atlassian Cloud is less flexible and extensible than JIRA Core Server. You need to consider what functionality you may miss out on if you use the Atlassian Cloud and how critical this is for your organisation.

  3. IT administration

How easy will it be for your company to run a JIRA Core Server instance? You can delegate IT admin tasks and hardware management to a third party organisation, but nothing beats the Atlassian Cloud in terms of a ‘no hassle’ solution.

  4. Costs

Ultimately, the real cost of each of the two alternatives for your organisation – using JIRA Core on the Atlassian Cloud or JIRA Core Server – will directly relate to the first three points.

The detailed discussion

  1. Why can’t we use Atlassian Cloud records as proof of compliance?

The main reason is that regulatory bodies insist on software validation. The key regulation here is FDA CFR 21 part 11, but other guidelines and regulations from the FDA as well as European authorities and others all agree that you need to demonstrate that you control any software platform you use. The new ISO 13485:2016 standard for medical device quality management contains even more explicit requirements for validation of software applications used for operational purposes.

The most fundamental way that the Atlassian Cloud violates these requirements is the fact that you have no control over the actual JIRA Core version that you use. Even if you validate the Atlassian Cloud, Atlassian are pushing new versions of JIRA Core to the Cloud a couple of times a month. While it’s great that users always have access to the latest version, it negates the possibility of using Atlassian Cloud data directly for your electronic compliance records.

In addition, the Atlassian Cloud servers are currently located in the USA. Depending on the actual data you store and your own geography, this alone may mean you cannot host your data in the Atlassian Cloud.

It’s also worth pointing out here that, at the time of writing, Atlassian cannot be qualified as a third party supplier to the regulated healthcare industry because it doesn’t:

  • hold any ISO certifications
  • open its floor to supplier audit
  • or make any regulatory representation in regard to its own QMS.

This means that the way to use Atlassian software for regulatory compliance is to create a supporting document that sets out the rationale for why it is acceptable to use. This will usually involve an in-house installation with a validation plan.

  2. What functionality would we miss out on if we used the Atlassian Cloud?

One of the reasons we can use JIRA Core for eQMS processes is its huge flexibility and extensibility. Many of the extension points and third party plugins are not available if you’re using JIRA Core in the Atlassian Cloud (see JIRA plugins for quality management and Managing your CAPAs in JIRA: key questions answered).

As a result you would experience:

  • a less streamlined user interface, because there is less flexibility to control how the various issue-related screens look
  • and more restricted automation options.

See Atlassian’s guidance on restricted functions in Atlassian Cloud apps.

  3. What about hosting our own JIRA Core Server instance?

The hassle-free use of the Atlassian Cloud may be tempting but, as outlined above, there are a number of down sides. Hosting your own instance of JIRA Core Server would avoid those, but then you would need to manage the application in-house, increasing the burden on your IT team.

However there is another alternative. There are companies that specialise in hosting Atlassian instances, giving you all the benefits of your own JIRA Core Server instance with none of the hassle of managing it. We work with several providers and will be able to recommend the best one to suit your specific requirements; that said, I’ve had very good experiences with the people at AtlasHost.

  4. But would it be cheaper to host JIRA Core Server ourselves?

All costs included, whether hosted internally or externally, I’ve yet to see a JIRA Core Server installation that is cheaper than using JIRA Core in the Atlassian Cloud. That said, JIRA Core Server is still good value for money, considering the better support it provides for compliance.

The conclusion

While it is possible to run much of your eQMS process using JIRA Core in the Atlassian Cloud and there are benefits to doing so, you would need to rely on a separate process for sign-off and regulatory compliance. If your management team is comfortable with that, this may be a viable option for your company. Alternatively, using a third party company to host and manage an instance of JIRA Core Server could be a good middle ground and, taking all factors into account, could stack up well financially too.

We can advise you on the most appropriate options for your circumstances and help you set up your eQMS to maximise efficiency and ensure compliance. Please get in touch for a no-obligation chat.

Transforming your CAPA SOP with JIRA Core – an action plan


This post outlines the steps involved in setting up a quality management system (QMS) process in JIRA Core.

QMS standard operating procedures (SOPs) are all about process management, which is what JIRA Core excels at facilitating, so it makes good sense to use it for this purpose. I’ll guide you through the key steps in making the transition.

To make things more concrete, I’ll focus here just on the corrective and preventive actions (CAPA) SOP, within the context of life sciences. In fact, the relative complexity of this process and its central role in quality management means it’s often the first process clients ask us to help them transfer to JIRA Core.

A couple of notes before we dive into the details:

  1. While most of the functionality described is available both on the Atlassian Cloud and the server version, some functions are currently only available on the server version.
  2. Some prerequisite knowledge of JIRA Core configuration is assumed.

Step 1: Establish your verification and validation (V&V) strategy

A CAPA process set up in JIRA will be part of your quality management system, so it will be subject to the relevant validation and verification (V&V) procedures. If you don’t yet have a V&V strategy in place for automated processes, it’s a good idea to invest some time in developing one.

Bear in mind that the extent and depth of V&V activities required will depend on the regulatory framework in which you’re operating. For example, if you’re a medical technology company operating within the framework of FDA QSR 820, you will find the applicable rules in section 820.70(i). These stipulate that the V&V activities for automated processes must cover the intended use of the software in your specific context. This means that you will need to create and execute a test plan to ensure that your CAPA implementation works as it should.

If you rely on software for other aspects of your business, for example if your medical technology incorporates a software element, you may take inspiration from the practices and tools you use there. However, before you consider applying similar methods to your CAPA SOP process, carefully consider the level of risk associated. For example if your device is a heart pacemaker, then obviously a CAPA process in JIRA will carry significantly less patient risk than the software controlling your device, so may need less stringent V&V efforts.

Step 2: Outline your project structure, issue types and subtasks

The most straightforward strategy at this point is to map each quality assurance (QA) process to a dedicated project in JIRA Core, and each instance of that process to a particular issue type in JIRA Core. For example:

  • CAPAs will be managed within the CAPA management project and each CAPA instance will be a CAPA issue.
  • Training activities will be managed within the Training management project and each training activity (ie one person trained on one subject) will be a training activity issue.
  • Nonconformities will be managed within the Nonconformities management project and each nonconformity will be a nonconformity issue.

You will then define these new issue types in JIRA Core.

Some procedures require multiple actions to happen in parallel. For example, imagine a pharmaceutical company opens a CAPA due to a series of complaints about medicines arriving at hospitals with damaged packaging. Following an investigation into the root cause, two separate courses of actions are planned: a redesign of the packaging and a change in transportation arrangements. Those should be assigned to two different people and will progress in two parallel timelines. The CAPA cannot be completed before these two actions are completed. This type of process calls for the creation of subtasks, so you would need to define both a CAPA issue type and CAPA subtask issue types.

At this stage it’s also worth identifying which processes are related to each other, because this can provide valuable contextual information. For example, a nonconformity issue may trigger a CAPA issue. This can be indicated in JIRA Core using issue links.
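As a concrete sketch of this structure, the snippet below uses the Python jira client to raise a CAPA issue and link it to the nonconformity that triggered it. The project keys, issue type names and link type name are assumptions that would have to match your own JIRA Core configuration.

```python
from jira import JIRA  # pip install jira

jira = JIRA(server="https://jira.example.com",
            basic_auth=("qa.manager", "app-password"))  # placeholder credentials

# A CAPA issue in the (hypothetical) CAPA management project
capa = jira.create_issue(
    project="CAPA",                       # assumed project key
    issuetype={"name": "CAPA"},           # assumed custom issue type
    summary="Damaged packaging reported by three hospitals",
    description="Opened following repeated complaints about damaged packaging.",
)

# Contextual link back to the nonconformity that triggered the CAPA.
# 'Relates' is a default link type; yours may be named differently.
jira.create_issue_link(type="Relates", inwardIssue=capa.key, outwardIssue="NC-42")

print("Created", capa.key, "linked to NC-42")
```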

Step 3: Translate your SOP into a JIRA workflow

CAPA SOPs are often written using process language which lends itself easily to producing a workflow description. Here are a few hints on how to break your process into distinct statuses:

  1. When completing one set of actions is a prerequisite to another set of actions, it’s usually best to break those up into two separate status steps in the workflow. In the case of CAPAs, identifying the root cause should be done before planning the preventive and corrective actions, so those should be separated into two consecutive statuses.
  2. If a person in a specific role has to take action at a specific stage of the process, this is also a good indication that the status should change. For CAPAs, if the QA manager needs to review the CAPA before any action is taken, then it makes sense to create a dedicated ‘Review by QA’ status.
  3. If there should be a time delay between actions, a ‘Waiting’ status should be introduced. If an effectiveness check is due six months after a CAPA has been implemented, it should move to ‘Waiting for effectiveness check’ status (see Figure 1).
Figure 1: An example of a JIRA workflow for CAPAs
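Before anyone touches the JIRA workflow editor, the statuses you arrive at can be sketched as a simple transition map. The status and transition names below are illustrative examples drawn from the hints above, not a prescribed workflow.

```python
# Illustrative CAPA workflow: each status maps to the statuses it may move to.
CAPA_WORKFLOW = {
    "Open": ["Review by QA"],
    "Review by QA": ["Root cause analysis", "Rejected"],
    "Root cause analysis": ["Action"],                      # root cause first...
    "Action": ["Waiting for effectiveness check"],          # ...then plan and act
    "Waiting for effectiveness check": ["Effectiveness check"],
    "Effectiveness check": ["Done"],
    "Rejected": [],
    "Done": [],
}

def allowed(current_status: str, target_status: str) -> bool:
    """Return True if this sketch workflow permits the transition."""
    return target_status in CAPA_WORKFLOW.get(current_status, [])

print(allowed("Root cause analysis", "Action"))   # True
print(allowed("Action", "Done"))                  # False: effectiveness check comes first
```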

Step 4: Map your data fields

Typically, the existing CAPA form is a good starting point for the list of data fields you should define in your CAPA screens. Having a field on a screen can serve a few purposes beyond merely recording relevant information:
1. It can serve as guidance to the person executing the CAPA. We often see phrases within a CAPA SOP like ‘Check if the risk management file could be affected’ or ‘Notify the management representative if this might be related to the QMS’. As these are actions that need to occur during the CAPA process, it might be useful to add fields that will not only remind people of what they need to do but also record their decisions about it. A checkbox field with the title ‘Could risk management be affected?’ will enforce consideration of this question.
2. Data fields may influence the workflow of a CAPA. For example, if your organisation allows the ‘Effectiveness check’ stage to be skipped in some cases, then a field where users indicate this can cause that stage to be bypassed, taking the CAPA directly to the final status.
3. Data fields provide structured information which can be used to filter reports, facilitate analysis of long term trends and statistics and provide meaningful information for dashboards (see Figure 2).

Figure 2: The CAPA source is an example of a data field. A pie chart on the dashboard can show the distribution of sources across CAPAs.

Along with identifying the list of fields you will need, you should associate each field with the corresponding workflow status. For example, the field ‘Was this CAPA effective?’ should be filled in during the ‘Effectiveness check’ status. Because you could easily have many fields in a CAPA issue – a couple of dozen is typical – it’s good to split the fields into separate tabs, a tab for each workflow status. You should keep the layout of tabs and fields consistent across the different screens you use, for example using the same layout for the View and Edit screens.
Another design paradigm for creating a good user experience is to make the transition screen between workflow statuses present all the fields that need to be filled in for the current status. If ‘Root cause’ is a field that has to be filled in during the ‘Root cause analysis’ status, then the transition screen between ‘Root cause analysis’ and ‘Action’ will display the ‘Root cause’ field, along with all the other fields that have to be filled in before you can move forward. This is then coupled with a workflow validator, which will block the transition and display an error message if a user tries to move to the next stage without that field being filled in.
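In JIRA itself this rule is enforced by a workflow validator attached to the transition, but the logic it implements is easy to illustrate. The sketch below, using the Python jira client, refuses to move a CAPA forward while the ‘Root cause’ field is empty; the custom field id, issue key and transition name are placeholders for whatever your configuration uses.

```python
from jira import JIRA  # pip install jira

jira = JIRA(server="https://jira.example.com",
            basic_auth=("qa.manager", "app-password"))  # placeholder credentials

ROOT_CAUSE_FIELD = "customfield_10200"   # placeholder id of the 'Root cause' custom field

issue = jira.issue("CAPA-12")            # placeholder issue key
root_cause = getattr(issue.fields, ROOT_CAUSE_FIELD, None)

if root_cause:
    # Same rule a workflow validator would apply: only move on once the
    # field required for the current status has been filled in.
    jira.transition_issue(issue, "Action")   # transition name is illustrative
    print(issue.key, "moved to Action")
else:
    print(issue.key, "blocked: fill in the Root cause field before moving on")
```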

Step 5: Using electronic signatures

In life sciences, you need to consider the FDA CFR 21 part 11 if you want to use your JIRA Core Server CAPA implementation as your evidence for compliance. Intenso’s Electronic Signature plugin does a good job of providing an easy and compliant way to integrate electronic signatures into the workflow. You just need to decide during which workflow transitions you need an electronic signature then add an electronic signature field to those transition screens.
The other major requirement from FDA CFR 21 part 11 is the need to have an audit trail. JIRA Core has an audit trail facility built in. The trail for each issue can be found in the ‘History’ tab at the bottom of the View screen (see Figure 3).

Figure 3: The History tab shows, for each CAPA, data modifications along with user and time information.
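The same History information can be pulled out programmatically, which is handy when preparing audit evidence. The sketch below uses the Python jira client with a placeholder issue key; requesting the issue with `expand='changelog'` is the standard way to retrieve the change history over REST.

```python
from jira import JIRA  # pip install jira

jira = JIRA(server="https://jira.example.com",
            basic_auth=("qa.manager", "app-password"))  # placeholder credentials

# Retrieve the issue together with its full change history (the audit trail)
issue = jira.issue("CAPA-12", expand="changelog")

for change in issue.changelog.histories:
    for item in change.items:
        # Who changed what, when, from which value to which value
        print(change.created, change.author.displayName,
              f"{item.field}: {item.fromString!r} -> {item.toString!r}")
```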

Step 6: Spice up the workflow with a bit of ‘behind the scenes’ automation

In many cases steps 1-5 are all you need to get going using JIRA Core for your CAPA SOP.
However you can add plugins to streamline the workflow, making users’ lives easier. For example by using the right plugins you can:

  • prevent users from making errors or performing the CAPA in a non-compliant way
  • avoid people having to guess exactly what is expected of them
  • and reduce the number of clicks needed.

You can use workflow mechanisms powered by plugins to achieve some very cool effects in JIRA Core. Here are a few examples of ways they can improve your CAPA implementation (a small sketch of the first appears after the list):

  1. Opening CAPA subtasks automatically, such as a subtask to evaluate the risk management file when transitioning from ‘Root cause investigation’ status to ‘Action’ status. Several predefined subtasks may be created, depending on the fields filled in during the investigation. For example, if it was indicated that ‘Risk management may be affected’, then a CAPA subtask titled ‘Evaluate potential impact on risk management file’ will automatically be created and assigned to the right engineer.
  2. Blocking the progress of the CAPA until all relevant subtasks have been completed.
  3. Moving the CAPA from ‘Waiting for effectiveness check’ to ‘Effectiveness check’ status when the effectiveness check due date has elapsed.
  4. Automatically assigning a CAPA to the quality assurance officer responsible for CAPA validation when it transitions to validation and verification.
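In practice the first example would be configured as a workflow post-function or an automation rule supplied by a plugin, but its effect can be sketched with the Python jira client. The custom field id, project key, issue type names and assignee username below are all placeholders.

```python
from jira import JIRA  # pip install jira

jira = JIRA(server="https://jira.example.com",
            basic_auth=("qa.manager", "app-password"))  # placeholder credentials

RISK_FLAG_FIELD = "customfield_10300"    # placeholder 'Could risk management be affected?' field

capa = jira.issue("CAPA-12")             # placeholder CAPA being transitioned
risk_flag = getattr(capa.fields, RISK_FLAG_FIELD, None)

if risk_flag:  # the investigation indicated risk management may be affected
    subtask = jira.create_issue(
        project={"key": "CAPA"},                          # assumed project key
        parent={"key": capa.key},
        issuetype={"name": "Sub-task"},                   # or a dedicated CAPA subtask type
        summary="Evaluate potential impact on risk management file",
        assignee={"name": "risk.engineer"},               # placeholder username (JIRA Server)
    )
    print("Opened", subtask.key, "under", capa.key)
```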

Step 7: Run your CAPA implementation past the end users

Although you may think your process is very easy and logical, it’s always good practice to allow some of your end users to try it out. You will often discover quirks that will be easy to iron out but which could make a big difference to the user experience. Simply changing the name of a field, the options within a particular field or even the name of a transition could make the whole process more intuitive.
Before launching the new system it’s very important to get the quality assurance team to validate that the implementation accurately reflects your CAPA SOP. As a matter of fact, this stage often reveals not just problems in the implementation but also issues with the original SOP that should be addressed, such as an overcomplicated or incomplete description.
This step will give you a good feeling for how your CAPA meets the following awesomeness criteria:

  1. Easy – a person who is familiar with JIRA (ie uses JIRA for other purposes) needs less than 15 minutes of training to be able to contribute to a CAPA issue.
  2. Self-explanatory – a compliant execution of the CAPA process does not require the person to read the SOP.

If your CAPA doesn’t meet these two criteria, use this step to discover what you could do to improve it.

Step 8: Define public filters and create a CAPA dashboard

Reports and dashboards are the gateway to some of the key advantages to be gained from using JIRA Core.

It’s important to give people access to meaningful reports and dashboards from the very start. This helps to keep users motivated during the adjustment period and can help them realise some very quick wins.

My go-to set of CAPA-related gadgets for dashboards is as follows (example filter queries appear after the list):

  1. A two-dimensional filter statistics gadget that shows all CAPAs split by status (one axis of the table) and assignee (the other axis).
  2. A list of all CAPAs for which an effectiveness check is due.
  3. Pie charts showing the distribution of CAPAs by source, status, assignee and so on.
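The filters feeding those gadgets are plain JQL. The sketch below shows, via the Python jira client, the sort of saved-filter queries involved; the project key and status name are placeholders, and in practice the effectiveness-check query would often use a dedicated due-date custom field rather than the built-in due date.

```python
from jira import JIRA  # pip install jira

jira = JIRA(server="https://jira.example.com",
            basic_auth=("qa.manager", "app-password"))  # placeholder credentials

# Feeds the two-dimensional statistics gadget (status vs assignee)
all_open_capas = 'project = CAPA AND resolution IS EMPTY'

# Feeds the 'effectiveness check due' list; 'due' is the built-in due date field
effectiveness_due = ('project = CAPA AND status = "Waiting for effectiveness check" '
                     'AND due <= now()')

for name, jql in [("Open CAPAs", all_open_capas),
                  ("Effectiveness check due", effectiveness_due)]:
    issues = jira.search_issues(jql, maxResults=50)
    print(f"{name}: {len(issues)} issue(s)")
```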

Step 9: Verification and validation (V&V)

Before signing off the implementation you’ll need to complete the V&V activities you defined in Step 1.

Step 10: Rollout

You will now be ready to roll out your new CAPA SOP implementation.

A few tips to guide you through the planning of the rollout:

  1. Depending on the size of your organisation the rollout may be done in one go or gradually spread across teams and departments. By its very nature the CAPA process requires cross-team collaboration, so it’s not always easy to do a stepped rollout.
  2. Teams that are already working with JIRA will have a much easier time getting accustomed to managing CAPAs in JIRA. They may act as champions for the new way of working and support other users. So, if possible, it makes sense to roll out the new CAPA process first in areas where some of the people already know JIRA. For example you could start the rollout on a product line that involves a big software team that already uses JIRA to manage their development work.
  3. Create a JIRA project (or even a JIRA Service Desk project) where users can log tickets relating to the CAPA process implementation in JIRA. This will help you provide support where it’s needed and gather feedback to drive the improvement of your implementation.

 

I hope you will have found the guidance here useful for planning the transition of your CAPA SOP or other key procedures to JIRA Core. Remember, if you need help at any stage, that’s what we’re here for, so please get in touch.

Two new things we love in Confluence


The Atlassian tools and third-party plugins for Confluence are evolving all the time and at RadBee we’re always on the lookout for changes that might help to support your compliance and quality assurance needs. We’ve recently come across two gems. The first is a plugin that provides better control over reused content. The second is a brand new Audit Log available in Confluence 5.10.

Control your reused content with Include Version macro

Include Page is one of the most popular macros in Confluence. It means you only need to write blocks of content like your organisation overview or list of reference standards once, and they will automatically be replicated in other documents. A feature of this macro is that it always uses the most recent version of the included content. The dynamic nature of Confluence, where content is refreshed each time a document is viewed, means that a page may look different today from how it looked yesterday, simply because an included page has been modified elsewhere.
However, in some cases, content developers and quality managers need more control over content. The brand new ScriptRunner plugin for Confluence from Adaptavist meets this need, with a bundled macro called Include Version. This gives page authors the ability to decouple a particular block of included content from the automatic updates so when the included content is modified the changes won’t apply. And, as the name suggests, it means you can specify a particular version of the content to reuse.
If, for any reason you want to default back to using the latest version of an included page, it’s easy to set that up in the macro, and it will behave just like the Include Page macro.
This plugin offers loads of other capabilities and is well worth using for lots of other reasons, so I was pleasantly surprised that it offers this unexpected benefit.

Tip: To easily find all the pages of your Confluence instance that use the Include Page macro, just enter the following phrase into Confluence’s search box: ‘macroName:Include Page’ and launch the search.
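The same search can be run over the REST API, which is convenient if you want a periodic report of every page that reuses content. The sketch below assumes a Confluence Server instance at a placeholder URL and uses the CQL `macro` field, which matches on a macro’s internal name (typically ‘include’ for Include Page).

```python
import requests

BASE_URL = "https://confluence.example.com"   # placeholder Confluence Server URL
AUTH = ("qa.reader", "app-password")          # placeholder credentials

# CQL: every page that uses the Include Page macro (internal macro name 'include')
response = requests.get(
    f"{BASE_URL}/rest/api/content/search",
    params={"cql": 'macro = "include" and type = page', "limit": 50},
    auth=AUTH,
)
response.raise_for_status()

for page in response.json().get("results", []):
    print(page["title"], BASE_URL + page["_links"]["webui"])
```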

Better configuration control for your Confluence instance with Audit Log

Atlassian have just added an extra option to the configuration control and accountability tools for Confluence 5.10. Audit Log not only allows you to extend your site’s compatibility with FDA CFR 21 Part 11, it also promotes accountability for changes and can help in troubleshooting scenarios.
Audit Log will retain your change records for three years by default, but you can configure it to keep the records for as long as you need. Each change record indicates the time of the change, the user who made it and the details of the actual change. See a complete list of the details recorded on the Audit Log page of the Atlassian website.
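If you would rather pull the audit records out for archiving than read them in the admin screens, newer Confluence Server versions also expose them over REST. The resource path below (/rest/api/audit) is our assumption for a 5.10-era Server instance; check the REST API documentation for your exact version before relying on it.

```python
import requests

BASE_URL = "https://confluence.example.com"   # placeholder Confluence Server URL
AUTH = ("confluence.admin", "app-password")   # audit records require admin rights

# Assumed audit resource for Confluence Server 5.10+; verify against your
# instance's REST API documentation before depending on it.
response = requests.get(f"{BASE_URL}/rest/api/audit",
                        params={"limit": 100}, auth=AUTH)
response.raise_for_status()

for record in response.json().get("results", []):
    # Field names may vary between versions; summary and creationDate are typical
    print(record.get("creationDate"), record.get("summary"))
```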

Have you come across any new features in Confluence recently that you find helpful for quality management or compliance?

Training management with JIRA


Managing a life science company’s training records is notoriously difficult. Wouldn’t it be great if training could become a routine part of the way your team works, and if your training matrix and training record could be generated automatically? Well now all this is possible, with help from JIRA Core and RadBee.
Watch this video to learn more.

 

Compliance, usability and culture in quality management


The biggest challenge in building a quality management system (QMS) is accommodating the needs of both compliance inspectors and your team. However this is a challenge worth meeting, because only by aligning the needs of these two audiences can a QMS move away from being just an obligatory overhead to become a real game changer.
A company recently came to us for help with replacing their paper-based quality management system with an electronic system based on JIRA and Confluence. We agreed that we would start this major overhaul with their CAPA process. As the first part of the implementation, this would set benchmarks and help us map out guidelines for other processes.
When we launched the new system the users were delighted. Instead of CAPA forms they now completed CAPA issues in JIRA, which would double up as records to meet their regulatory compliance needs. The team soon got up to speed with processing test CAPAs through the system, using the fields in the CAPA issues to provide the main information, such as root cause analysis and decisions regarding the effectiveness of CAPAs. Where more detail was needed, such as an explanation of why an effectiveness decision hadn’t yet been made, they used JIRA to add comments.
While the team were getting on well with the new system, the quality manager was worried about having comments registered in JIRA. He was particularly concerned about the unstructured nature of the comments and the level of detail people were including.
What if, he wondered, someone added information in the comments that might make a negative impression on an inspector or auditor who saw it? For example: ‘This issue occurs much more frequently than is recorded here. I have tried to raise this as a nonconformity several times but have been told it’s not important’.
This is probably not something you would want an inspector to see, as it might cause them to ask difficult questions.
Because of these concerns, the quality manager asked us to disable the comment facility. However the team loved it. They argued that it made more sense to have these unofficial conversations within the system, rather than by email, and that some of the information included in the comments would be valuable for future reference.
We agreed that banning comments would take away one of the big selling points of putting CAPAs in JIRA – keeping everything in one place. It would also compromise its efficiency and usability.
We reassured the manager that once an organisation becomes accustomed to using an eQMS system, a culture develops around it. Everyone understands what the role of CAPAs is and knows what sort of comments it is appropriate to add.
However, in order to get to that point, you need to embrace the comment facility and manage the risk associated with it. Bear in mind that a user-friendly QMS platform will be adopted much more quickly than one with limited functionality, and the appropriate culture will soon develop.
Our client’s quality manager took our points on board and asked us to come up with a way to mitigate the risks involved. Here’s what we decided:

  • The client would add a disclaimer to their CAPA procedure and CAPA issue screens, explaining that comments are not part of the CAPA record.
  • A discussion about the role of comments would be included in the training of team members who would be involved in the CAPA process.

As the quality team, you have a great opportunity to leverage the benefits of eQMS, helping to spread the right values and habits within your organisation. To achieve this you will need to make some key decisions on the design of the system and, yes, that sometimes might mean taking risky decisions. However in our experience, the benefits you stand to gain make the risks well worth taking.

Planning for validation


Planning your project and planning your validation are very closely intertwined and, when your project process is mature, these will converge to become one thing.

Our focus here though is on the importance of allowing time in your project plan to make sure all the validation boxes are ticked. Whether you run your project using Waterfall or Agile methodology, the validation outline will be largely the same.

This is the list of validation elements you should have ready by the time the project moves to production:

  1. Plan
     Typically documented in: Validation plan
     What it is: The plan should (1) identify the system that will be delivered, (2) specify the product or quality areas it will impact, (3) outline the team and other resources that will be involved and the timeline, and (4) list the validation elements that will be delivered (essentially this list).

  2. Supplier
     Typically documented in: Supplier qualification
     What it is: Adding software and infrastructure suppliers to your approved suppliers list (ASL).

  3. Process mapping
     Typically documented in: Can be incorporated into your procedure or work instruction for the process
     What it is: A process map is a detailed flow diagram of the process that should occur once the project has been implemented. Although this is not strictly required by regulations, process mapping helps to drive a good implementation.

  4. Requirements
     Typically documented in: User requirement specifications (in small projects the requirement specifications may also be part of the validation plan)
     What it is: A list of system requirements. Each requirement is identified along with its connections to other traceability elements. Often the bulk of requirements will be framed as work instructions that describe your process; each step in the work instruction can then be identified as a requirement.

  5. Risk analysis
     Typically documented in: Risk analysis document (in small projects this may also be part of the validation plan)
     What it is: Identifies the steps that should be put in place to make the system safe.

  6. Functional specifications
     Typically documented in: Functional specifications document, or a combined functional and configuration document
     What it is: Requirements translated into concrete software features and configuration elements. Each element is uniquely identified for traceability.

  7. Infrastructure
     Typically documented in: Validation plan or infrastructure overview document
     What it is: Identifies where the system will be developed, validated, produced and installed.

  8. Testing your system
     Typically documented in: Test report – operational qualification (OQ), performance qualification (PQ) or OQ/PQ
     What it is: Running test scripts, either manual or automatic. Each test will be uniquely identified for traceability.

  9. Installation checklist
     Typically documented in: Installation report – installation qualification (IQ)
     What it is: A checklist describing the installation steps. After execution the checklist should contain the actual results (success or failure) and document any unplanned steps taken.

  10. Traceability matrices
     Typically documented in: Standalone traceability report or within the validation plan
     What it is: Tables that map (1) each requirement to elements of the functional specification, proving that each requirement has been met, and (2) each risk mitigation to a user requirement. Each functional specification element will be tested to prove that the system works as outlined in the functional specification. (A short illustrative sketch follows this table.)

  11. Instructions on how to use the system
     Typically documented in: Operating procedures or work instructions
     What it is: Once the new system is in place, your team will work differently. This needs to be reflected in your operating procedures or work instructions. Written well, these can be a great tool for onboarding people to the system.

  12. Configuration management
     Typically documented in: Operating procedures or work instructions
     What it is: Explains how future changes to the system will be made, eg how platform upgrades or changes to process specifications will be carried out.

  13. Training
     Typically documented in: Training plan
     What it is: A list of who needs to be trained to use the new system and when and how this will be done.

  14. Migration of legacy data
     Typically documented in: Migration plan or within the validation plan (the migration requirements may also be reflected in the requirement specifications and other traceability elements)
     What it is: Importing relevant existing documents or data to the new system. If you identify that this step will be necessary, include it in your validation activities.

  15. Validation report
     Typically documented in: Validation report
     What it is: The final step before the system goes into production. This brings all the previous validation elements together along with an official conclusion that the system is ready for production.
 


Risk analysis in Jira


You can use Jira to manage all elements of the traceability matrix, including the risk analysis itself.

If your risks are all closely linked to requirements, this will help your team keep them in mind during the implementation work. In Jira, you can include a live link from each risk to the relevant mitigation. This will ensure that the traceability between risks and requirements is always current, without the need for manual maintenance.

Assuming you already maintain your requirements in Jira, we’ll show you how to set it up to include the risk elements.

To follow these guidelines, you will need to have the Risk Register plugin installed in Jira.

(Jira examples relate to Jira server, version 7.3.1.)

Set up the risk model according to your conventions

How to record your Risk Analysis in Jira

  1. If you haven’t already, add your list of requirements to Jira.
  2. As you perform the risk analysis for a specific requirement, complete the relevant risk analysis fields:
    • Add the current date to the ‘Risk analysis date’ field.
    • Indicate whether or not risks are identified.
  3. If risks are identified, create a new issue of type ‘Risk’. Describe the risk and qualify its severity, occurrence and detectability.
  4. Link the risk issue to the requirement that triggered it. (Note: Several requirements may be linked to the same risk.)
  5. Define how risks will be mitigated, defining each mitigation as a new requirement (unless the requirement already exists). Create a ‘Mitigated by’ link between each risk and its mitigation(s). (Note: You could represent mitigations as functional specifications rather than requirements – both approaches have their merits. Either way, it’s important to make sure each mitigation is clearly identified and connected with the relevant system tests.)
  6. In the Risk issue, indicate the residual risk that remains once the relevant mitigation has been carried out. (A scripted sketch of steps 3-5 follows the figure below.)
Overview of risks identified and mitigated
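Steps 3-5 above translate into a couple of calls with the Python jira client, sketched below. The project key, summary text and the ‘Mitigated by’ link type are assumptions: the Risk issue type comes from the Risk Register plugin, and any link types used have to exist in your instance.

```python
from jira import JIRA  # pip install jira

jira = JIRA(server="https://jira.example.com",
            basic_auth=("qa.manager", "app-password"))  # placeholder credentials

# Step 3: record a risk identified for a requirement
risk = jira.create_issue(
    project="SPEC",                               # assumed specifications project key
    issuetype={"name": "Risk"},                   # issue type provided by Risk Register
    summary="Incomplete data entered when the CAPA event is registered",
)

# Step 4: link the risk to the requirement that triggered it
# ('Relates' is a default link type; yours may differ)
jira.create_issue_link(type="Relates", inwardIssue=risk.key, outwardIssue="SPEC-101")

# Step 5: link the risk to the requirement that mitigates it
# ('Mitigated by' must be defined as a link type in your instance)
jira.create_issue_link(type="Mitigated by", inwardIssue=risk.key, outwardIssue="SPEC-102")

print("Recorded", risk.key)
```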

Administration and setup

Before you can use Jira for risk analysis, a Jira administrator will need to set it up as follows:

  1. Define the following issue types in Jira and associate them with the Jira project that you use to record your specifications:
    • Requirement: add the following custom fields to this issue type:
      • Risk analysis date: the most recent date when risk analysis was carried out for this requirement
      • Conclusion of risk analysis: whether risks were identified or not
    • Functional specification
    • Risk: If you’re using the Risk Register plugin, this issue type will be created automatically
  2. Configure Risk Register to support your model of FMEA analysis (see the example in Risk analysis for computerised systems):
    • Change the names and default values of the following four fields to support your risk analysis needs:
      • Impact → change to ‘Severity/Occurrence’, and add the following options: ‘High/High’, ‘High/Medium’, ‘Medium/High’, ‘Medium/Medium’, ‘Medium/Low’, ‘Low/High’, ‘Low/Medium’, ‘Low/Low’
      • Probability → change to ‘Detectability’, and add the options ‘High’, ‘Medium’ and ‘Low’
      • Residual impact → Change the name to ‘Residual severity/Occurrence’ – the options will automatically reflect those for ‘Severity/Occurrence’
      • Residual probability → Change the name to ‘Residual risk’ – the options will automatically reflect those for ‘Detectability’
    • Set up the risk model scale according to your conventions and define which combinations of severity, occurrence and detectability map to the high, medium and low risk priorities.

Risk analysis for computerised systems


When done correctly, risk analysis can help to identify things that could go wrong in a process once it has been put into action. It may also trigger creative thinking about how to avoid those problems or at least prevent them from causing damage. Risk analysis makes it possible to build elements into a tool at an early stage of development that will make it more stable and safer to use.

Risk analysis is a regulatory requirement for all implementations that require validation, so has to be formally documented and approved (see When does a software tool require validation?).

When it comes to life sciences, risk analysis focuses mainly on identifying problems that could potentially affect the patients and health professionals using your product, rather than business risks, such as negative impacts on budget or timeline.

When to carry out risk analysis

It’s important to carry out the first risk analysis shortly after starting an implementation project, as soon as the requirements have been drafted. The analysis will need to be carried out again, or at least reviewed, each time the specifications evolve.

Risk analysis will usually prompt changes to planned features or the addition of new features to mitigate risk, so in itself triggers changes in specifications. Whatever the reasons behind a change in specifications, the associated risks should always be reassessed.

Avoid these two pitfalls:

  1. Delaying the first iteration of the risk analysis to a late stage of the implementation programme.
  2. Not reviewing the risk analysis when significant changes are made to the specifications.

These mistakes may result in the late discovery of risks that need mitigation, potentially leading to significant rework and even non-compliance.

Who should carry out the risk analysis?

Risk analysis requires a good understanding of your infrastructure and the technologies being used, as well as insights from users of the system. Ideally the team will be led by someone with previous experience of leading risk analysis who is currently involved with other risk analysis processes. This will help newcomers become acquainted with the procedures involved and ensure the new process is consistent with other risk analyses your company is doing.

The risk analysis process

The risk analysis team will go through your project specifications methodically.

The guidelines below are based on failure modes and effects analysis (FMEA) and good automated manufacturing practice (GAMP) principles.

Preparing for the initial risk analysis meeting

Before the risk analysis meeting, prepare the following elements:

  1. A list of items to be analysed, based on your specifications. This list will frame the discussion and will ensure that all aspects of your implementation are considered. You could either base your list on user requirements or on processes implemented and the steps involved in each.
  2. Assuming you’re using the FMEA approach, outline your conventions for the following categories:
    • Severity – The main factor here is the potential impact on patients, health professionals or users of the computerised system. Another important factor is the risk to data integrity. A risk of losing significant data would often be considered high severity. The severity levels will be highly dependent on the risk level associated with the system as a whole. For example, a computerised system that records vigilance incidents and holds patient information will be associated with a higher level of risk than a training management system. Bear in mind that each potential risk that could affect a system cannot be considered more severe than the risk associated with the system as a whole.
    • Occurrence – Try to assess the likelihood of the potential problems actually happening. This is why it is critical to have people involved who can assess the likelihood from the perspective of system users.
    • Detection (or detectability) – How likely is it that a risky situation will be detected before it actually causes harm? For example, if a user needs to provide several confirmations before they delete data, it is less likely that they will unintentionally delete data.
    • Risk class – These classes are determined based on a combination of severity and occurrence.
    • Risk priority – Rate each potential risk as high, medium or low priority, by combining their risk class and detection. Agree on what is meant by high, low and medium before you start.
    • Acceptable level of risk – The key purpose of doing the risk analysis is to identify practical ways to make the system safer, or reduce the risks involved to an acceptable level. The acceptable level of risk will be dictated by management at the organisational level.

Running the risk analysis meeting

Once the leader of the risk analysis team has set the scene by explaining the background and the purpose of the meeting, the rest of the session will usually be structured as follows:

  1. Risk identification and qualification – Walk through each of the items on your list (requirements or process steps). For each item:
    • Ask the team what could go wrong. Each thing that could go wrong is a risk. List all the risks your team finds.
    • Qualify each of the risks for their severity, occurrence and detection. Your predefined formulas will then generate the risk priority. Note that the same risks may apply to different items. For example, providing incomplete data is a risk that might be applicable at many steps along the process.
  2. Risk mitigation – Review each of the risks you have identified in the previous steps and find ways to mitigate them through changes in the process. This might include adding control steps or changing process features. Also consider adding external mitigations, for example providing additional training or explanatory documentation. However, note that changing the process is typically a more effective approach than external mitigation.

Throughout the process, aim to mitigate risk to at least bring it down to an acceptable level.
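The ‘predefined formulas’ mentioned above are simply agreed lookup tables. Below is a minimal sketch, assuming a three-level scale and an invented mapping (your own conventions will differ): risk class is derived from severity and occurrence, and risk priority from risk class and detectability.

```python
# Invented example conventions; agree your own scales and mappings up front.
RISK_CLASS = {                    # (severity, occurrence) -> risk class
    ("High", "High"): "3", ("High", "Medium"): "3", ("High", "Low"): "2",
    ("Medium", "High"): "3", ("Medium", "Medium"): "2", ("Medium", "Low"): "1",
    ("Low", "High"): "2", ("Low", "Medium"): "1", ("Low", "Low"): "1",
}
RISK_PRIORITY = {                 # (risk class, detectability) -> priority
    ("3", "Low"): "High", ("3", "Medium"): "High", ("3", "High"): "Medium",
    ("2", "Low"): "High", ("2", "Medium"): "Medium", ("2", "High"): "Low",
    ("1", "Low"): "Medium", ("1", "Medium"): "Low", ("1", "High"): "Low",
}

def risk_priority(severity: str, occurrence: str, detectability: str) -> str:
    """Combine severity and occurrence into a risk class, then factor in detectability."""
    risk_class = RISK_CLASS[(severity, occurrence)]
    return RISK_PRIORITY[(risk_class, detectability)]

# Qualification from the CAPA example later in this article:
print(risk_priority("Medium", "High", "High"))   # 'Medium' under this invented mapping
```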

Tools for risk analysis

Discover how to use Jira for your risk analysis

Example

An electronic quality management system (eQMS) is being implemented to support the corrective and preventive actions (CAPA) process.

Here’s a snippet from the risk analysis process:

  1. Initiation – the CAPA event is registered in the eQMS.
  2. The following risk was identified:
    • Wrong information included in the eQMS or information missing:
      • Severity: Medium
      • Occurrence: High
      • Detectability: High
  3. This risk is mitigated by:
    • Defining some data fields in that phase as mandatory, enforcing more complete and accurate information.
    • Designing subsequent process stages in a way that ensures data is reviewed by independent subject matter experts.

Process mapping


Whenever a new computer system is introduced or an old one updated, the goal is to better support one or more processes. For example, a document management system facilitates the process of authoring and approving documents while a service desk application streamlines the process of receiving and handling support requests.

Before embarking on any project that involves implementing a new computer system or updating an existing one, it’s important to agree on the eventual process it will support. This is why a process mapping exercise is critical. It gives you the opportunity to gather input from all stakeholders and align everyone around a common goal.

The end result will be a written process description. This will be a great starting point for developing your requirement specifications and the standard operating procedure related to that particular process.

The example below shows a process for reviewing and approving documents in Confluence, based on the Six Sigma tool SIPOC (supplier, input, process, output, customer):

Process mapping example: review and approval of documents in Confluence

Requirement specifications


When teams set out to determine the requirement specifications for a computerised system, confusion often arises. However there are some simple steps you can take that will help you build up a clear picture of the requirements along with getting buy-in from everyone involved.

It’s all about the process

In order to determine the requirements accurately, you will first need a solid understanding of what the eventual computerised system needs to do. Whether you follow SIPOC (suppliers, inputs, process, outputs, customers) or another process mapping method, make sure you spend enough time exploring and questioning your beliefs about the target process. You can do this by conducting small group interviews and hosting discovery workshops, as well as reviewing relevant documentation.

You could then try developing a prototype. This can highlight things you might have inadvertently left out when you initially set out to describe what the system would need to do.

For example, when creating a new process setup in Jira, use a development system to devise a rough implementation of the process. You can then share that with the users and ask for feedback. This will help to identify fundamental errors like missing steps and duplication as well as revealing misconceptions and unexpected ideas.

If your platform doesn’t allow for quick prototyping, use storyboards instead.

A good description of the eventual process will form the basis of your user requirements and operating instructions. It will also help to make clear how regulatory requirements will impact the process, for example the need for electronic records to be created and training provided.

Taking regulatory requirements into account

It’s vital to consider the relevant regulations when working out the requirements of your implementation. Your best bet will be to add regulatory requirements to your list and manage them exactly as you would user requirements.

For the implementations we do, the two major sources of regulatory requirements are:

It’s important to follow legal requirements to the letter, so you may want to consult an expert to ensure that you fully understand how the regulations governing your industry will affect your implementation.

Requirement specifications in Jira

Jira is the perfect place to specify your requirements. Start by defining a dedicated custom issue type called ‘Requirement’. Using this to describe the requirements of the process will enable you to trace their connections with functional specifications and make sure none of the requirements are ignored in the development or testing process.

From paper-based to practical: How we digitised a paper-based QMS & ensured ISO-27001 compliance in just 5 weeks


An efficient quality management system (QMS) is the lifeblood of any company in the health industry. Failure to meet strict regulatory and compliance requirements could mean lost business or even a complete shutdown of operations.

The challenge

Biotech start-up SOPHiA GENETICS has had a QMS in place since its launch six years ago. VP of Quality Jasmine Beukema and her team have worked hard to ensure that SOPHiA always meets the medical device industry’s strict standards. But what began as a six-person company that could easily operate with a paper-based system had since grown to 150 people. From excessive paperwork to unnecessary overhead, they could no longer afford the inefficiencies – especially during critical times, such as audits.

With an ISO-27001 audit only five weeks away, Jasmine knew that something had to change if SOPHiA was going to get recertified.

The process

SOPHiA was already using Confluence, which was a good start, as Jasmine knew the software would support the electronic QMS they desperately needed. SOPHiA brought us onboard to quickly, accurately, and effectively digitise and modernize their system.

After finalising our initial plans with SOPHiA, we set up a staging environment where we could install a prototype implementation and easily collaborate with Jasmine and her team.

Our main focus was to extend Confluence to support SOPHiA’s controlled documents—the official collection of documents kept for regulatory, operational, and quality processes. This would make finding procedures, linking procedures, and uploading supporting documents easy for SOPHiA.

Jasmine and her team monitored ongoing improvements and provided constant feedback on the staging site. Once we obtained approvals, we were then able to transfer these improvements to their production environment. Along the way, we taught SOPHiA optimal use of Confluence, including lots of tricks and shortcuts to become more adept and efficient with the software.

We also carried out the system validation, a formal, documented testing cycle to ensure that everything complied with requirements.

Because of the critical deadline, we went the extra mile to ensure that SOPHiA’s controlled documents were ready within five weeks, just in time for their audit.

The results

We provided what SOPHiA desperately needed: an audit-ready, fully digitised system in under five weeks.

A few days after SOPHiA’s paper-based collection of documents was transitioned to Confluence, the company was audited for their ISO-27001 certification.

They passed with flying colors.

According to Jasmine, “This audit was much less stressful and less invasive on the company because we didn’t need to disturb people while they were working to request specific documents.”

Moreover, she sees many long-term benefits, including improved efficiency across the entire company. This includes the streamlining of several crucial processes and the elimination of the needless overhead that was wasting human resources. She estimates that the digitisation of their QMS will cut preparation time for future audits by at least 30%.

Jasmine and her team are left not only with a more efficient system, but also the confidence that they’re meeting compliance and regulatory requirements, and that their new quality management system will withstand future company changes and growth.

For more details on how we helped SOPHiA digitise their QMS to prove to auditors that they’re handling data in the most secure way possible and obtain critical recertification in just five weeks, read the full case study.

 

How to validate computerised systems used in GxP and medical device environments: Computer system validation in an Agile age


8 February 2018, Stuttgart, Germany (9:00-17:00)

Venue: Milaneo Office Center, Heilbronner Straße 74, Stuttgart, Germany

In an age where computerised systems make the difference between winning and losing, we need to find ways to continuously and relentlessly innovate our systems. Today, compliance is no longer enough: we need a validation framework which helps the business move quicker and release often.

In this one-day conference we will explore the newest trends in computer system validation (CSV). Experts will share perspectives on standards and guidelines for setting up a CSV framework, as well as on available technologies and tools which can reduce the burden of CSV.

The talks will focus on CSV in GXP (GMP, GLP, GDP, GVP, GCP) or medical devices (MedDev) environments, when compliance with regulations like 21 CFR Part 211, Part 820, Part 11 or EMA EudraLex Vol. 4 – Annex 11 is mandatory.

Programme

  • 09:00-09:30 Registration and coffee
  • 09:30-09:45 Introduction
  • 09:45-10:45 Regulations and Standards – GXP & SDLC vs. GAMP 5
    • Mr. Markus Roemer, Ambassador ISPE DACH
  • 10:45-11:45 Using JIRA and Confluence as eQMS
    • Mrs. Rina Nir, CEO of RadBee, an Atlassian solution partner
  • 11:45-12:00 Coffee break
  • 12:00-12:45 Using CMMI and Scrum: SW development for GMP products and supplier audits
    • Mr. Stellan Ott, CEO at Wolfram Ott & Partner GmbH
  • 12:45-13:30 Lunch
  • 13:30-14:30 GXP cloud solutions and software deployment
    • Mr. Keith Williams, Member of ISPE GAMP European Committee
  • 14:30-15:30 Using Confluence as eValidation Tool – example for a GAMP category 5 application for GVP
    • Mr. Markus Roemer, Ambassador ISPE DACH
  • 15:30-15:45 Coffee break
  • 15:45-16:30 Making JIRA and Confluence GXP-ready
    • Mrs. Rina Nir, CEO of RadBee, an Atlassian solution partner
  • 16:30-17:00 Experts panel and closure

Join the conference

How to validate software development tools used for GXP and MedDev?

Traceability matrices


Without the right tools in place, traceability matrices can easily bring a project to the brink of collapse. Even today, some project managers delay moving to production for a significant length of time while they manually put together a traceability matrix for the validation report.

Fortunately, with the addition of appropriate plugins, Jira provides an excellent solution. Using it to manage your specifications and tests will eliminate the need for tedious manual work by integrating traceability throughout every stage of the project. As user requirements and other elements emerge and evolve, the traceability will be updated, and a fully up-to-date traceability matrix will be available at any time.

What are the elements of a traceability matrix?

  1. You analyse requirements to identify risks, and these risks need to be mitigated. Those mitigations then become requirements. So a requirement will be connected with a related risk, and that risk will be related to another requirement that mitigates it. When these links are all clearly documented, we can describe this as bi-directional traceability.
  2. Functional specifications are prescriptive and specific. In the case of a configurable computer system, configuration specifications will often be dictated by functional specifications. Each functional specification has to be triggered by, or traceable to, a requirement.
  3. Tests are how you demonstrate that the system meets the specifications. You first need to plan the tests, and the plan will only be complete when all functional specifications can be traced down to tests.
  4. The system can be declared validated only when there has been a successful test run for each test.
Computer systems validation traceability matrix

Choosing the right technology to manage traceability

Anyone who has ever tried using Excel to manage traceability can testify that it doesn’t scale well and is very tedious to use, even for the smallest project.

A good tool to manage traceability will:

  1. be seamlessly integrated with your specification management – this eliminates duplication of effort
  2. offer strong reporting capability, showing the different levels of traceability in a clear format
  3. be flexible, allowing you to define the report layout and content
  4. make it easy to spot traceability gaps (a minimal sketch of what gap-spotting means in practice follows this list)
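
To make the last point concrete, here is a minimal, tool-agnostic sketch of gap spotting: given a set of traceability links exported from whatever tool you use, it reports which requirements have no functional specification and which specifications have no test. The identifiers and the link list are purely illustrative.

```python
# Minimal sketch of traceability gap detection. The link list below is
# illustrative only; in practice it would be exported from your tracking tool.
from collections import defaultdict

links = [
    ("REQ-1", "FS-1"),   # requirement -> functional specification
    ("REQ-2", "FS-2"),
    ("FS-1", "TEST-1"),  # functional specification -> test
    # FS-2 has no test, and REQ-3 below has no functional specification
]
requirements = {"REQ-1", "REQ-2", "REQ-3"}
specifications = {"FS-1", "FS-2"}

covered = defaultdict(set)
for source, target in links:
    covered[source].add(target)

uncovered_requirements = sorted(r for r in requirements if not covered[r])
untested_specifications = sorted(s for s in specifications if not covered[s])

print("Requirements without a functional specification:", uncovered_requirements)
print("Functional specifications without a test:", untested_specifications)
```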

Managing traceability in Jira

Jira comes, out of the box, with the capability to link issues. So, if you have a requirement and a functional specification set up as Jira issues, you just need to link them to establish traceability.

It’s a good idea to use meaningful link names, like ‘traceability link’, to differentiate traceability links from other links that may exist between issues.

Requirement to functional spec in Jira
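
For illustration only, here is a minimal sketch of establishing such a link through the Jira REST API. The base URL, credentials, issue keys and the ‘Traceability’ link-type name are placeholders; a link type with that name would first need to be defined in Jira’s link-type administration.

```python
# Minimal sketch: link a requirement to a functional specification using a
# dedicated 'Traceability' link type. URL, credentials, link-type name and
# issue keys are placeholders for your own environment.
import requests

JIRA_URL = "https://jira.example.com"
AUTH = ("qa.manager", "api-token")

traceability_link = {
    "type": {"name": "Traceability"},     # custom link type defined by an admin
    "inwardIssue": {"key": "QMS-101"},    # the requirement
    "outwardIssue": {"key": "QMS-202"},   # the functional specification
}

response = requests.post(
    f"{JIRA_URL}/rest/api/2/issueLink",
    json=traceability_link,
    auth=AUTH,
)
response.raise_for_status()
print("Traceability link created")
```

Once links carry a dedicated name like this, reports and gap checks can filter on that link type alone, which keeps traceability separate from ordinary ‘relates to’ links between issues.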

There are many add-ons available that provide powerful features to extend Jira’s core issue-linking capability. The additional capabilities available range from auto-calculating coverage status to visually displaying the hierarchy of links and reporting.

Here are some concrete examples:

  1. Links Hierarchy for Jira & Agile provides a visual tree of multilevel links for each issue, so you can see the complete path from requirement to test for each and every issue.
  2. Xray for Jira and Test Management for Jira are both test management suites that provide clear visibility of test coverage and offer several convenient traceability views. Test Management for Jira creates traceability reports, which can be easily exported.
  3. Xporter for Jira and PDF View for Jira are both report generation tools, which you can use to export traceability reports in almost any layout you want.

 

 

Using Test Management for Jira, each requirement displays its test and test run coverage
XRay for Jira reports traceability from requirements all the way to defects
The Links Hierarchy plugin creates a tree view of multilevel links

How we saved a medical device company time, compliance headaches and money


For international health industry companies, ensuring regulatory compliance is difficult. Add into the mix an outdated paper-based Quality Management System (QMS), and the task of meeting regulatory requirements across multiple locations in an efficient, cost-effective and timely manner becomes impossible – threatening to derail a busy, successful international organization through non-compliance.

The challenge

An in vitro diagnostics (IVD) medical device and clinical lab organization with offices in the U.S. and Europe desperately needed a new QMS to better manage a vast quantity of paperwork. Beyond the risk of non-compliance, it was a daily challenge to ensure that procedures were standardised and the latest documents were being referenced.

For example, thanks to differences in time zones and schedules, it wasn’t unusual to lose days’ worth of productivity over small things, such as a signature or an approval. Furthermore, the paperwork they relied on most heavily, Document Approval Forms, consisted of two separate documents, so staff members had to manage both simultaneously and track down printed copies to be sure they matched. Other everyday necessities, such as internal complaint tracking, corrective action tracking, resolution, and personnel training, were also needlessly complicated by the antiquated paper-based QMS.

This led to perhaps the biggest issue of all: training new personnel to use this inefficient system, which was riddled with complicated processes for handling simple tasks.

Although this company had explored other options to update their QMS, none were successful. Understandably, management was a bit reluctant to try again for fear of another failed attempt. But internal complaints about using the paper-based system were mounting; the time had come to make compliance-related processes easier, using a customised solution that was simple to use and implement company-wide.

The process

After examining numerous options, the company’s Vice President of Regulatory Affairs and Quality settled on Jira as the base software for its digitised QMS, not only because it’s “the biggest name in bug tracking in the software industry,” but also because it’s user-friendly and is continually being updated and improved. He then retained us to customize the software so that it was easy to implement a new, streamlined QMS company-wide. Topping the list of the VP’s priorities was making the system user-friendly, so the learning curve for new staff would be minimal.

Working from a flowchart of how the company wanted the electronic system to behave, we implemented the system in a staging environment – starting with a customised change control form module that would standardise our client’s controlled documents into one easy process. Once approved, we uploaded the company’s hundreds of CCFs and numerous additional documents in just two days.

The final step to completing the digitised QMS was to develop a custom training management module, which made all types of training accessible and intuitive, from HR practices to department-specific processes. Beyond the benefit of saving the company significant time — thanks to the module’s ease of use — the training management module also allows the company to clearly demonstrate to auditors that all staff are properly trained on necessary procedures.

The results

The Senior Vice President of Regulatory Affairs and Quality estimates 500% more regulatory compliance efficiency and a resource cost saving of $150,000 to $200,000 annually. Today 75 employees have access to the electronic QMS and rely on it daily. In less than a year, over 700 CCFs have been processed through the new system – and internal complaints about using the QMS are a thing of the past.

“We deployed our new quality management system from start to finish over a few months, for a fraction of what other systems cost,” the Senior Vice President of Regulatory Affairs and Quality raved.

For more details, read the full case study.

Stuttgart Summit: Highlights & takeaways on how to validate software development tools for GXP and MedDev


Last week a select group of 15 health sector professionals, representing businesses from startups to large pharmaceutical companies, gathered in Stuttgart, Germany for expert insights on implementing and maintaining effective quality systems. With presentations from four industry leaders, attendees left the conference with actionable information – regardless of whether they were initiating a new system or improving an existing one.

Given the focus on QMS sustainability and overall efficiency, the presenters illuminated their points with real-world examples of systems that are compliant but not excessive, are proportional to the clinical safety risk, and that use software and infrastructure to effectively lessen the burden of compliance.

Presenters included:

  1. Stellan Ott, CEO of ott+partner: Stellan presented a compelling look at his company’s transformation from a cumbersome and wasteful paper-based quality system to a functional digital system, using the Capability Maturity Model Integration (CMMI) model as the guideline for the quality processes and Microsoft Team Foundation Server as the electronic platform for their project work. Along with a significant reduction in waste and increase in efficiency, the company culture also underwent a dramatic shift – for the better. As a supplier to pharmaceutical companies, ott+partner is audited frequently, and their new processes have helped them consistently pass audits with flying colors.
  2. Rina Nir, CEO of RadBee: An expert in implementing quality processes in Jira and Confluence, Nir explained the many benefits of using these platforms to reduce friction with quality processes and increase staff engagement and efficiency. She also provided three case studies using these Atlassian tools to support quality processes, showcasing how to use Confluence for controlled documents and JIRA for Corrective and Preventive Action (CAPA) management and to support Software Development Life Cycle (SDLC) and an eValidation approach.
  3. Markus Roemer, CCS Consultant, demonstrated how he used Confluence to support SDLC and traceability. Despite a difficult start and a reluctant team, the system he devised and implemented was eventually overwhelmingly adopted and has stood the test of time, successfully passing numerous audits over the last several years. Roemer also provided an overview of the regulations and standards that impact the industry, highlighting especially prevalent misconceptions and how some companies evolve a nonsensical and excessive set of processes.
  4. Keith Williams, CEO of C3, gave guidance on the various infrastructure options available to businesses as the cloud becomes mainstream in GXP environments. From “on-premise” to “hosted,” “private cloud,” and so on, he discussed the costs, benefits and risks that each option presents. Because there’s no one rule that fits all, Williams emphasised the need to avoid excessive spending, and advised companies instead to seek out the fiscal and efficiency sweet spot where the compliance and quality strategy aligns with the patient and business risk.

Key takeaways from the conference included:

  • The importance of finding the optimal point between doing too little and too much when it comes to implementing quality processes.
  • How Jira and Confluence support effective QMS management, and why these tools are so engaging to users.
  • Why moving an ineffective paper system to an effective eQMS using SDLC tools and eValidation is very achievable – as long as timeline expectations are managed.
  • How a sound process to follow before implementing a new QMS involves setting your own risk approach, challenging it, and then getting input from the quality, business and compliance owners. From there, it’s essential to prototype and test to be sure it accomplishes all key goals before company-wide implementation.
  • Why, to ensure sustainability, it’s essential to first agree on what you need to do for quality and regulatory compliance in your business, and then transparently track the processes. By continuously challenging those processes and measuring performance, your system will dynamically evolve and improve.

Attendees delighted in seeing examples and receiving knowledge from different viewpoints, and were particularly enthusiastic about the real use cases. They also noted how enjoyable it was to be a part of this “friendly group” and appreciated the detailed insight into what must be done for compliance’s sake – and what’s excessive. The conference also revealed an important need to further discuss a “sensible approach to computer system validation” and agile, modern development trends within GXP and medical devices.

We are planning future events now and will keep you posted. Please feel free to let us know what topics you’d like to see covered as it relates to devising and maintaining an effective eQMS.

Agile and design controls: From story to specifications


Doing Agile in a GXP or medical devices environment requires you to serve two masters: you need to practice the Agile disciplines on one side, and embrace regulatory requirements on the other.

In order to strike the balance, you must understand the difference between “stories” (Agile term) and “specifications” (Design Controls term). From my experience working with many teams that are “Agile first,” I have seen firsthand how easy it is for people to believe that Agile stories are the same as specifications.

Not only are they not the same, but also the difference is critical to how you work and your “Definition of Done.”

A story describes a new requirement or a modification to an existing one. There is a certain language formula to a story, including emphasising business value and all the other Agile-esque nuances. The key is that it does not necessarily describe the product, but rather the incremental change that is required.

In Software Development Life Cycle (SDLC) terms, a specification (i.e. a user requirement) expresses a regulatory requirement that the product (or the system) needs to meet in a subsequent version.

For example:

An electrocardiogram (ECG) device provides an innovative report that features a special analysis of the ECG signals.

  1. Version 1.0 is developed:
    1. Development gets a story: “Doctor needs a report so that [details of the report are provided].”
    2. User requirement: “Doctor needs a report with the following specs: [details of the reports are part of that specification].” (In this case, you could use the exact same language as the story.)
  2. Enter Version 2.0 development:
    1. Development gets a story: “Doctor needs the report to also include a calculation of X in a separate column.”
    2. User requirement: “Doctor needs a report with the following specs,” and details of the report are part of the specifications. These details (of that extra column) are now updated to also include the new data mentioned in the story of version 2.0.

As the product evolves, stories describe the small changes that are introduced throughout the development cycle. A specification of the product in a given release version provides the description of the feature at the particular time of release. This is not to say that a story cannot also be a specification – it can be, but it certainly is not always the case. As a product matures, and each version is an increment, a story becomes less likely to be a specification.

Updated versions of a product may include the following types of specifications:

  • Those that remain the same as they were in the previous version.
  • Those that change from the previous version.
  • Entirely new specification elements, which are added if new features are implemented.

Once you understand this difference, you can still work Agile. However, now your “Definition of Done” will account for this. A story is done only when specifications have been updated to reflect how the product is after implementing the story.

While you could leave specification updates to any time before the release, making them part of your story work (“Definition of Done”) is advantageous because it gives you a more realistic idea of your progress. If you don’t update specifications on an ongoing basis, you risk unexpectedly delaying the release because you have to add them all at the end. Up-to-date specifications also become a helpful, more detailed reference for developers and testers during the release, one that more closely reflects the development version they’re using.

When designing in a regulated environment, it’s crucial, then, that you get your story straight – and differentiated from your specifications. Not only will it clarify your mission, but it also ensures a happy ending for your version.

The art of creating complete & accurate records for FDA inspections from Jira


If you need to maintain information for an FDA inspection or submission, Jira is a great tool to use. Be aware, however, that the Code of Federal Regulations Title 21, Part 11 requires you to be able to export that data from the system into a readable, precise document:

11.10. (b) The ability to generate accurate and complete copies of records in both human readable and electronic form suitable for inspection, review, and copying by the agency. Persons should contact the agency if there are any questions regarding the ability of the agency to perform such review and copying of the electronic records.

When setting up Jira to support regulated processes, you must ensure the export includes all elements of the Jira issue (see the sketch after this list for one way to pull these elements programmatically), including:

  • Regular and custom fields in a human-friendly layout
  • The history of the issue
  • Information about all electronic signatures provided for the ticket
  • Any necessary supportive attachments
  • Sub-task information, with all of the same elements (history, electronic signatures, etc.)
  • The export date, time, and name of the person who exported it
  • Company-specific styling, including logo, disclaimers, etc.
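
As a rough, hedged sketch – and not the mechanism the PDF View plugin described below uses – the snippet pulls an issue together with its change history and attachment metadata through Jira’s standard REST API. It is one way to cross-check that an export template has not dropped any of the elements above; the base URL, credentials and issue key are placeholders.

```python
# Minimal sketch: pull an issue with its change history and attachment
# metadata via the Jira REST API, as a cross-check on export completeness.
# This is not how the PDF View plugin works internally; URL, credentials
# and the issue key are placeholders.
import requests

JIRA_URL = "https://jira.example.com"
AUTH = ("qa.manager", "api-token")
issue_key = "QMS-101"

response = requests.get(
    f"{JIRA_URL}/rest/api/2/issue/{issue_key}",
    params={"expand": "changelog"},
    auth=AUTH,
)
response.raise_for_status()
issue = response.json()

print("Summary:", issue["fields"]["summary"])
print("Attachments:", [a["filename"] for a in issue["fields"].get("attachment", [])])
print("History entries:", len(issue["changelog"]["histories"]))
```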

My go-to tool when setting up an FDA-compatible export is Midori Global Consulting’s PDF View plugin for Jira. This powerful plugin allows you to export any FDA-required issue into a well-designed, professional document. There are plenty of informative examples included with the tool, which take you a long way towards implementing the export you need. All custom field types and rich-text situations are covered, and any necessary attachment is integrated in its original format as an embedded file. You can also invoke dynamic scripts in the export process, giving you access not only to sophisticated calculations but also to Jira’s entire third-party API universe. For example, I use InLabs’ Electronic Signature to easily add customisable electronic signatures. The resulting PDF is a self-contained and complete package of information that, using the tool’s own API, can easily be exported as part of the issue workflow or in any automation cycle.

The only downside of this plugin is that it requires technical know-how. The export templates rely on three technologies – XSL-FO syntax, Velocity templates, and Jira’s own API – so be advised you will need access to development skills to implement it.

The export contains the complete history of the Jira issue (record)

 

Signature manifestations are part of the exported Jira issue (record)

Because your business relies on creating accurate, complete records, it’s imperative you use the best tools possible to generate your exports. There’s an art to delivering the right information that satisfies regulations.

Heading toward hands-free validation


Over the course of working with dozens of innovative, exceptional companies, I’m always struck by their commonalities of purpose and process. While they all are creating products poised to help us all live happier, healthier and more productive lives, ironically enough many tend to put themselves through sadly torturous, inefficient circumstances before their products can see the light of day.

And more often than not, the validation cycle is the place where all hell breaks loose.

Case in point:

One of the biggest and most innovative pharma companies in the world hired us to set up a Jira Service Desk for them.

The contract was signed in late June.

By mid September, the Jira Service Desk installation on their development environment and configuration were all completed.

This left validation… which took a jaw-dropping six months before the system finally made it into production.

This means:

  • It took just 12 weeks to gather requirements, iterate through several versions of the configuration, finish vetting technical issues and freeze the specification. The technical specifications and manual test scripts were also finished during that period.
  • It took more than double that amount of time – 30 weeks – for validation, qualification and installation to production.

These numbers are even more staggering when you realise that almost all of the delay was due to paperwork: manual adjustments, formal review and approval processes. The actual technical defects and infrastructure issues we faced were very limited, and all were resolved swiftly.

The lesson from this particular situation is clear – for companies reliant on lengthy, paper-driven validation processes, innovation is slowed to a snail’s pace. By the time it gets to make an impact, it is, well, old news.

Clearly, the industry is in transition. New organisations in particular are refusing to align themselves with archaic practices. It has already become mainstream to create all the design elements within tools, which automatically generate traceability matrices. In the near future, these tools will have to up their game and produce all the validation records (i.e. requirements specifications, test scripts, etc.) simultaneously as part of the Continuous Integration cycle.

Automation will not stop there, but instead will cover the complete spectrum of validation and qualification activities. Manual test scripts will be replaced with complete automation of the test cycle. Installation will be automatic and include qualifications tests.

Recently I met a startup with the bold vision of delivering the first version of their product using a completely hands-free validation, installation and qualification cycle. Their current technical challenge is that the automated cycle runs on 64 servers and still takes too long: a full test cycle takes 24 hours.

Should I have told them that in some places validation takes as much as 30 weeks?

As it stands, the regulatory challenge they will be facing might be much larger than their technological one. Will inspectors know how to assess their hands-free approach?

While the answer to that is unclear in today’s transitional world, what is clear is that this is where we are heading. It’s valid to want to automate inefficient systems; more than that, it’s actually imperative. The companies that master the regulatory and technical challenges of hands-free validation today will steer their way clear to competitive advantage tomorrow.
