Sunday, 24 January 2021

Questions Prior to Method Validation (To Design a Protocol)

 CONSIDERATIONS PRIOR TO METHOD VALIDATION

Test method validation is a requirement for entities engaged in the testing of pharmaceutical products for the purposes of drug exploration, development, and manufacture for human use. It is also of great value for any type of routine testing that requires consistency and accuracy.

Procedure validation is a cornerstone in the process of establishing an analytical procedure. The aim of procedure validation is to demonstrate that the procedure, when run under standard conditions, will satisfy the requirement of being fit for use. To maximize the likelihood of a successful validation, it is imperative that all aspects of the procedure be well understood prior to the validation. Surprising discoveries (whether "good" or "bad") during validation should be carefully evaluated to determine whether the procedure was adequately developed. Moreover, pre-validation work can reveal suitable approaches to reduce the total size of the validation experiment without increasing the risk of drawing the wrong conclusion. General principles and plans for sample preparation, experimental design, data collection, statistical evaluation, and choice of acceptance criteria should be documented in a validation experimental protocol signed before initiation of the formal validation.

Questions considered prior to validation may include the following:

·         What are the allowable ranges for operational parameters, such as temperature and time, that impact the performance of the analytical procedure?

o    Robustness of these ranges can be determined using a statistical design of experiments (DOE).

·         What are the ruggedness factors that impact precision?

o    Factors such as analyst, day, reagent lot, reagent supplier, and instrument that impact the precision of a test procedure are called ruggedness factors. When ruggedness factors impact precision, reportable values within the same ruggedness grouping (e.g., analyst) are correlated. Depending on the strength of the correlation, a statistical analysis that appropriately accounts for this dependence may be necessary. Ruggedness factors can be identified empirically during pre-validation or based on a risk assessment.

·         Are statistical assumptions regarding data analysis reasonably satisfied?

o    These assumptions may include factors such as normality, homogeneity of variance, and independence. It is useful during pre-validation to employ statistical tests or visual representations to help answer these questions (a brief example follows this list).

·         What is the required range for the procedure?

o    The range of an analytical procedure is the interval between the upper and lower levels of an analyte that has been demonstrated to be determined with a suitable level of precision, accuracy, and linearity using the procedure as written.

·     Do accepted reference values or results from an established procedure exist for validation of accuracy?

o    If not, as stated in International Council for Harmonisation (ICH) Q2, accuracy may be inferred once precision, linearity, and specificity have been established.

·   How many individual determinations will compose the reportable value, and how will they be aggregated?

o    To answer this question, it is necessary to understand the contributors to the procedure variance and the ultimate purpose of the procedure. Estimation of variance components during pre-validation provides useful information for making this decision.

·         What are appropriate validation acceptance criteria?

o    The validation succeeds when there is statistical evidence that the assay is no worse than certain pre-specified levels for each relevant validation parameter.

o    What defines the assay as fit for use, and how does this relate to acceptance criteria?

·         How large a validation experiment is necessary?

o    Validation experiments should be properly powered to ensure that there are sufficient data to conclude that the accuracy and precision can meet pre-specified acceptance criteria. Computer simulation is a useful tool for performing power calculations (see the simulation sketch after this list).

o    Efficiencies (both cost and statistical) can be gained if assessment of linearity, accuracy, and precision can be combined.

    On the basis of the answers to these and similar questions, one can design a suitable validation experimental protocol.
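The bullet on statistical assumptions above mentions checking them with statistical tests or visual representations; the following is a minimal sketch of what such a check could look like in Python, using SciPy. The analyst groupings and assay values are made-up illustrative numbers, and Shapiro-Wilk and Levene's test are just two common choices among many possible checks.

```python
# Minimal sketch: checking normality and homogeneity of variance on
# pre-validation replicate data. All values are illustrative assumptions.
import numpy as np
from scipy import stats

# Hypothetical assay results (% of label claim) from three analysts
groups = {
    "analyst_a": [99.1, 100.2, 99.6, 100.4, 99.9, 100.1],
    "analyst_b": [98.7, 99.5, 99.2, 99.0, 99.8, 99.4],
    "analyst_c": [100.3, 100.8, 100.1, 100.6, 99.9, 100.4],
}

# Normality is checked on residuals (each value minus its group mean),
# so genuine analyst-to-analyst differences are not mistaken for
# non-normality of the pooled data.
residuals = np.concatenate(
    [np.asarray(v) - np.mean(v) for v in groups.values()]
)
w_stat, p_norm = stats.shapiro(residuals)
print(f"Shapiro-Wilk p-value (normality of residuals): {p_norm:.3f}")

# Levene's test for homogeneity of variance across analysts
l_stat, p_var = stats.levene(*groups.values())
print(f"Levene p-value (equal variances across analysts): {p_var:.3f}")

# p-values below ~0.05 would flag the assumption for closer inspection
# (plots, a transformation, or a model that accounts for the data structure).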
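For the question of how large a validation experiment is necessary, here is a minimal sketch of a simulation-based power calculation. Every number in it is an illustrative assumption rather than a guideline value: the 3-analyst × 2-day × 3-replicate layout, the bias and variance components (which in practice would come from pre-validation work), and the acceptance criteria of 98.0–102.0% mean recovery and ≤ 2.0% RSD.

```python
# Minimal sketch: estimating the power of a proposed validation design by
# simulation. All numbers below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(seed=1)

# Assumed design: 3 analysts x 2 days each x 3 replicate preparations at the 100% level
n_analysts, n_days, n_reps = 3, 2, 3

# Assumed "true" method performance (from development / pre-validation work)
true_bias = 0.5        # % of label claim
sd_analyst = 0.4       # between-analyst SD (%)
sd_day = 0.3           # between-day SD (%)
sd_rep = 0.6           # repeatability SD (%)

# Hypothetical acceptance criteria for the validation
recovery_limits = (98.0, 102.0)   # mean recovery, % of label claim
max_rsd = 2.0                     # overall %RSD (simple intermediate-precision measure)

n_sim = 10_000
passes = 0
for _ in range(n_sim):
    analyst_eff = rng.normal(0, sd_analyst, n_analysts)
    day_eff = rng.normal(0, sd_day, (n_analysts, n_days))
    reps = rng.normal(0, sd_rep, (n_analysts, n_days, n_reps))
    # Each simulated result = true value + bias + analyst, day and replicate effects
    results = 100 + true_bias + analyst_eff[:, None, None] + day_eff[:, :, None] + reps
    mean_recovery = results.mean()
    rsd = results.std(ddof=1) / mean_recovery * 100
    if recovery_limits[0] <= mean_recovery <= recovery_limits[1] and rsd <= max_rsd:
        passes += 1

print(f"Estimated power of this design: {passes / n_sim:.1%}")
```

Re-running the simulation with different numbers of analysts, days, and replicates shows how much the design can be trimmed before the chance of failing the validation, even though the procedure is truly acceptable, becomes uncomfortably high.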

OOS Investigation Case Study-8 (Organic Impurity)

In the pharmaceutical industry, OOS investigation and root cause identification is a very important topic. Even if you are not able to identify the exact root cause, the effort made should be clearly visible in your investigation in order to convince the regulatory auditors.


Here I am sharing another case study to help understand it in a better way:

OOS observed in the Organic Impurity test.
Description of Event:
An OOS result is reported in the Organic Impurity test.
Result: 0.95% (known impurity, named X)
-Limit: NMT 0.70%.

Preliminary investigation:
Checked the pressure graph, system suitability parameters, calculations, etc., and no laboratory error was identified from the preliminary investigation.
Re-measurement (Hypothesis testing):
Hypothesis testing is performed to rule out instrument error, vial filling error, etc.
No error is identified, and all of the above possibilities are ruled out.

Reviewed the trend of the previous 10 released batches and the stability trend data of the validation batches; no failure was observed for impurity X. Even at the accelerated condition (40°C/75% RH), the maximum impurity level is 0.3%, and at batch release it has never been more than 0.1%.


Now, what is the next step? Based on the trend data it looks like this is not a true failure.
But we cannot perform the re-analysis simply by saying that this is an erratic result.
Then what is our next step as an investigator?
Step- 1
Inject the same subject sample on a PDA detector along with an impurity X preparation, and record the spectra and peak purity of the subject impurity (this helps to establish whether it is impurity X or some other extraneous peak eluting at the same RT).

If you have access to a mass spectrometer (LC-MS), develop an MS-compatible method and inject the solution on the LC-MS to determine the mass; you can then conclude whether it is impurity X, or Y (extraneous), or X + Y. If it is Y or X + Y, then knowing the mass of Y you can identify the suspected contaminating product.

If it is an extraneous peak, then we have to rule out whether it comes from manufacturing or from the laboratory.

First, we have to rule out product contamination by injecting the previous products from all manufacturing steps on LC-MS.

Inject all product standards that were used in manufacturing before this product (covering all equipment) under the same chromatographic conditions on LC-MS (if LC-MS is not available, this activity can be done on a PDA detector).

Inference: If the root cause is identified from the above exercise, then perform the re-analysis and release the batch; if not, then.....
Step-2
Review the forced degradation study to check under which condition impurity X increases, evaluate whether the same condition could arise during analysis in the laboratory, and rule it out with a negative experiment (if required).

Then it is confirmed that this failure is not a product failure. Further, we have to rule out that this peak comes from any product contamination during manufacturing; to support this statement, follow step 3.
Step-3
Re-examine the laboratory investigation and rule out all probabilities, e.g. filter interference (if the sample required high pressure during filtration). Rule out all possible causes using systematic root cause analysis tools such as 6M, 5 Why/2H, brainstorming, fault tree analysis, and graphical analysis (histogram, Pareto, run chart), etc.
If required, keep a sample on stability study with reduced testing (3, 12 and 24 months); these data will help to convince the auditor that the initial failure was not a product quality issue.

After all these efforts, if we still do not have an exact root cause but we find a most probable cause in the laboratory, then we can perform the re-analysis. If the re-analysis complies with the specification and is within trend compared with previous batches, it will give the auditor confidence that there is no issue with the product quality (the batch release/reject decision is a conscious call of the Quality Head).

Wednesday, 23 December 2020

CDP (Comparative Dissolution Profiles)

In the pharmaceutical industry, the Comparative Dissolution Profile (CDP) is a very important study.

For the CDP study, the innovator sample is required along with our sample.

The comparison can be expressed by two factors: f1 (the difference factor) and f2 (the similarity factor). For two dissolution profiles to be considered similar and bioequivalent, f1 should be between 0 and 15, whereas f2 should be between 50 and 100.

Dissolution profiles data should be generated in a comparative manner as follows:

  • At least 12 dosage units (e.g. tablets, capsules) of each batch must be tested individually, and mean and individual results reported. 
  • The percentage of nominal content released is measured at a minimum of three (3) suitably spaced time points (excluding the zero time point) to provide a profile for each batch (e.g. at 5, 15, 30 and 45 minutes, or as appropriate to achieve virtually complete dissolution).
  • The batches are tested using the same apparatus and, if possible, on the same day.
  • The stirrer used is normally a paddle at 50 rpm for tablets and a basket at 100 rpm for capsules. However, other systems or speeds may be used if adequately justified and validated.
  • Test conditions are those used in routine quality control or, if dissolution is not part of routine quality control, any reasonable, validated method.
  • The f2 value must be between 50 and 100.
  • If more than 85 per cent of the active substance is dissolved within 15 minutes in all tested batches, the dissolution profiles are considered to be similar without the need to calculate the similarity factor.
  • In recent years, FDA has placed more emphasis on a dissolution profile comparison in the area of post-approval changes and biowaivers. Under appropriate test conditions, a dissolution profile can characterize the product more precisely than a single point dissolution test. A dissolution profile comparison between pre-change and post-change products for SUPAC related changes, or with different strengths, helps assure similarity in product performance and signals bioinequivalence.

    Among several methods investigated for dissolution profile comparison, f2 is the simplest. Moore and Flanner proposed a model-independent mathematical approach to compare dissolution profiles using two factors, f1 and f2 (1):

f1 = ( Σt |Rt − Tt| / Σt Rt ) × 100

f2 = 50 × log10( 100 / √(1 + (1/n) Σt (Rt − Tt)²) )

    where the sums run over the n selected time points t = 1 … n, and Rt and Tt are the cumulative percentage dissolved at each time point for the reference and test product, respectively. The factor f1 is proportional to the average difference between the two profiles, whereas the factor f2 is inversely proportional to the average squared difference between the two profiles, with emphasis on the larger differences among all the time points. The factor f2 measures the closeness between the two profiles. Because of the nature of the measurement, f1 was described as the difference factor, and f2 as the similarity factor (2). In dissolution profile comparisons, especially to assure similarity in product performance, the regulatory interest is in knowing how similar the two curves are, and in having a measure that is more sensitive to large differences at any particular time point. For this reason, the f2 comparison has been the focus in Agency guidances.
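The calculation is easy to script. Below is a minimal sketch in Python of the f1/f2 computation described above; the time points and mean % dissolved values are made-up illustrative numbers, not real batch data, and the `f1_f2` helper is simply a convenience for this example.

```python
# Minimal sketch of the f1/f2 calculation from Moore & Flanner;
# the dissolution values below are made-up illustrative data.
import math

def f1_f2(reference, test):
    """Difference factor f1 and similarity factor f2 for two dissolution profiles.
    `reference` and `test` are lists of mean % dissolved at the same time points."""
    if len(reference) != len(test) or not reference:
        raise ValueError("Profiles must have the same, non-zero number of time points")
    n = len(reference)
    f1 = sum(abs(r - t) for r, t in zip(reference, test)) / sum(reference) * 100
    msd = sum((r - t) ** 2 for r, t in zip(reference, test)) / n  # mean squared difference
    f2 = 50 * math.log10(100 / math.sqrt(1 + msd))
    return f1, f2

# Hypothetical mean % dissolved at 5, 15, 30, 45 min (innovator vs. test batch)
reference = [31, 62, 85, 94]
test = [28, 58, 82, 93]

# Per the guidance summarised above, if both products dissolve more than 85%
# within 15 minutes, the profiles are considered similar without calculating f2.
if reference[1] > 85 and test[1] > 85:
    print("Both >85% in 15 min: profiles considered similar, f2 not required")
else:
    f1, f2 = f1_f2(reference, test)
    print(f"f1 = {f1:.1f} (similar if 0-15), f2 = {f2:.1f} (similar if 50-100)")
```

For the example data the script reports f1 ≈ 4 and f2 ≈ 75, i.e. the two profiles would be considered similar.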

Monday, 16 November 2020

Significant Changes in Stability sample analysis

As per ICH, a “significant change” is a change that occurs in the drug product during the stability study at the accelerated condition (ACC).

 In general, “significant change” for a drug product is defined as:

1. A 5% change in assay from its initial value; or failure to meet the acceptance criteria for potency when using biological or immunological procedures; 

2. Any degradation product’s exceeding its acceptance criterion; 

3. Failure to meet the acceptance criteria for dissolution for 12 dosage units. 

4. Failure to meet the acceptance criterion for pH; 

5. Failure to meet the acceptance criteria for appearance, physical attributes, and functionality test (e.g., color, phase separation, re-suspendibility, caking, hardness, dose delivery per actuation); however, some changes in physical attributes (e.g., softening of suppositories, melting of creams) may be expected under accelerated conditions;     

An ANDA applicant should submit 6 months of accelerated stability data and 6 months of long-term stability data at the time of submission. However, if the 6 months of accelerated data show a significant change or failure of any attribute, the applicant should also submit 6 months of intermediate data at the time of submission.

If accelerated data show a significant change or failure of any attribute in one  or more batches, an applicant should submit intermediate data for all three batches. In addition, the submission should contain a failure analysis.

Case Study:

For Assay: As per the above guidance, if there is a 5% change in assay from the initial value, then it is a significant change.

e.g. if the initial assay is 96.2% and the 3-month ACC (40°C/75% RH) assay is 101.3% (the difference in assay from the initial assay is 5.1%), the said result is investigated through OOT and no laboratory error is identified.
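As a small illustration of the arithmetic in this case study, here is a minimal sketch of the 5% check; the function name and the inclusive "≥ 5.0%" reading of the criterion are my own illustrative choices.

```python
# Minimal sketch of the assay "significant change" check discussed above;
# the 5% threshold (absolute change from the initial value, in % of label
# claim) follows the ICH wording quoted earlier, the numbers are from the
# case study. Whether the boundary is inclusive is an interpretation.
def significant_assay_change(initial: float, timepoint: float, threshold: float = 5.0) -> bool:
    """Return True if the assay has changed by `threshold` % or more from initial."""
    return abs(timepoint - initial) >= threshold

initial_assay = 96.2   # % of label claim at initial
acc_3m_assay = 101.3   # % of label claim at 3 months 40°C/75% RH

change = abs(acc_3m_assay - initial_assay)  # 5.1
print(f"Change from initial: {change:.1f}% -> significant: "
      f"{significant_assay_change(initial_assay, acc_3m_assay)}")
```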

Based on the above guidance, a significant change in assay is confirmed. Is it right to start the intermediate-condition study for all 3 batches?

In my view, there is no need to start the intermediate-condition stability study here; we should justify this in consultation with Regulatory Affairs.

  


Sunday, 30 August 2020

HPLC Guard Column: Use & Benefits in Gradient method


In the pharmaceutical industry, Related Substances analysis is very critical, particularly when the HPLC method is a gradient method. Getting a smooth baseline with no additional peaks is always a challenge in gradient method analysis.

In such cases a guard column is useful to obtain a smooth baseline and to exclude any additional peak caused by impurities in the solvents used for mobile phase and diluent preparation.

"If you already have a validated method, then method equivalency data are required before routine use of a guard column."


What is a guard column?

A guard column is a protective column or cartridge installed between the injector and the analytical column. It prevents impurities and suspended solids from reaching the analytical column. Typically it has a length of about 2 cm and an internal diameter of 4.6 mm. Guard columns are packed with pellicular particles of around 40 μm size to offer a negligible pressure drop.

Proficient operation of an HPLC instrument depends on the mobile phase and sample being free from chemical impurities and solid suspensions. Appropriate precautions should be adopted during preparation, handling, and use of the mobile phase, and injecting a clean sample (centrifuged or filtered through a syringe filter) is always beneficial.

The HPLC column is a critical component of the HPLC system and requires careful handling and protection. It is expensive to keep replacing columns frequently, so the objective should be to maximize the useful life of the column so that you get the desired accuracy and consistency of results every time.

The chromatographic behavior of the HPLC column declines with use due to the gradual accumulation of impurities and suspensions. Particles larger than 2 μm present in the mobile phase or sample deposit on the inlet frit of the column, disturbing the uniformity of flow. Smaller particles pass through the frit and begin to block the flow paths in the stationary phase, resulting in increased backpressure.

Nature of contaminants:

  • Highly retained compounds such as fatty acids in reverse phase separations
  • Irreversibly retained compounds like residual proteins which were not removed completely at time of sample extraction. 
  • Particulate impurities can result from non-filtration of samples, or from particulates released by wear of system components such as seals in the pump or injector.
  • Crystalline deposits resulting from drying of residual buffers inside the column. Washing columns with HPLC-grade water after use of buffer solutions prevents such salt deposit formation.

Desirable features of guard columns:

  • Guard column should have preferably the same packing as the analytical column to eliminate separation complications
  • The internal diameter (ID) of the guard column should be comparable to that of the analytical column to minimize back-pressure. A shorter guard column length is preferable, but it should be long enough to prevent strongly retained compounds from reaching the main column
  • The frit facing the injector should be removable so that it can be cleaned by removing about 2 mm of packing material and refilling with fresh material
  • Disposable cartridge type guard columns are convenient and economical to use compared to refillable guard columns.

Guard columns need to be changed on a regular basis, but an intermediate change becomes necessary when changes in chromatographic behaviour are observed, such as an increase in backpressure, peak broadening, and changes in the retention time of peaks. The frequency of change can be decided on the basis of the chemical composition of the sample, the presence of highly retained or irreversibly retained components, the injection volume, or the number of injections.

 


OOS Investigation Case Study-7 (Assay, Instrument Error)

OOS observed in Assay test. 
(Single preparation test and duplicate injections from same vial) 
During the investigation, when you find a probable root cause in the preliminary or hypothesis testing phase, before planning the re-analysis use an investigation tool (5 Why, 6M, etc.) to rule out all other probabilities, review it thoroughly, and plan a negative experiment if required to make the investigation more adequate.


Description of Event:
OOS result observed in Assay test.
Result: 97.4% (from the same vial, Injection-1: 99.8%, Injection-2: 94.9%)
-Limit: 95.0 - 105.0%.

The mean result is within the specification limit, but one injection result from the same vial is 94.9%, which does not comply with the specification limit; hence an OOS was initiated.


Preliminary investigation:
During the preliminary investigation, all possibilities for a lower assay result were checked: instrument error (no pressure fluctuation, no air bubble in the mobile phase/rinse line), calculation error (wrong weight, wrong potency, etc.), standard preparation error (the recovery factor of the standard and control standard is 99.8%), and all other possible causes of a lower result, but no error was identified in the preliminary investigation.

During review of the pressure graph it was noticed that, although there is no pressure fluctuation, at zero time (during injection) the subject injection pressure is lower than the pressure of all other injections (blank, standard, control standard, and injection-1 of the sample, which gave the 99.8% result).

Zoomed pressure graph: (on a normal-scale graph this pressure difference may not be visible).

Based on the above observation, there is a possibility that during the second injection, due to the lower pressure, the complete planned volume (20 µl) was not drawn by the injector.

So, to rule out instrument error, hypothesis testing should be planned, as we are still not sure which result is true, i.e. 99.8% or 94.9%.
Re-measurement:

Hypothesis testing is performed to rule out instrument error, vial filling error, dilution error, etc. The hypothesis testing is planned on a different HPLC system, with everything else kept the same.
The same-vial result is found within the specification limit (mean 99.6%; Injection-1: 99.7%, Injection-2: 99.5%).
The refilled-vial (100.1%) and re-diluted (99.8%) results are also found well within the specification limit.

Based on the outcome of the hypothesis testing, repeat testing/re-analysis can be planned from the same aliquot/sample and the initial OOS result can be invalidated.

Now the question is:
Why is one injection result lower? Is this due to the low pressure at zero time? If yes, then why was the pressure lower at zero time for only one injection? Is this a momentary instrument malfunction, or is there another reason? We have to hand over the instrument to the service engineer to identify the root cause of the low pressure.

Based on the service engineer's report, scientific rationale, and justification, we can conclude the OOS with proper CAPA.

Impact Assessment:

As an impact assessment, we have to evaluate at least the last 05 analyses performed on the same instrument, and at least 03 analyses after rectification of the error.


Saturday, 18 July 2020

Human Error






In the pharmaceutical industry, human error and its reduction is a very big challenge.
During regulatory audits, most investigators ask for human error trend data for OOS/OOT/Deviation/Incident and the way forward to reduce human error.
“Human Error is commonly defined as a failure of a planned action to achieve a desired outcome”.
Human error categories:
1- Skill-based errors (failures of action, or unintentional actions), which are categorised into:
     A- Slips of action
     B- Lapses of memory
2- Mistakes (failures in planning), which are categorised into:
     A- Rule-based mistakes
     B- Knowledge-based mistakes
  
Skill-based errors:
Slips of action tend to occur during highly routine activities, when attention is diverted from a task, either by thoughts or by external factors. Generally, when these errors occur, the individual has the right knowledge, skills, and experience to do the task properly, and the task has probably been performed correctly many times before. Even the most skilled and experienced people are susceptible to this type of error. As tasks become more routine and less novel, they can be performed with less conscious attention – the more familiar a task, the easier it is for the mind to wander. This means that highly experienced people may be more likely to encounter this type of error than those with less experience. It also means that re-training and disciplinary action are not appropriate responses to this type of error.
A memory lapse occurs after the formation of the plan and before execution, while the plan is stored in the brain. This type of error refers to instances of forgetting to do something, losing place in a sequence, or even forgetting the overall plan. 
A slip of action is an unintentional action. This type of error occurs at the point of task execution, and includes skipping or reordering a step in a procedure, performing the right action on the wrong object, or performing the wrong action on the right object.
Slips and lapses can be minimised and mitigated through workplace design, use of checklists, independent checking of completed work, discouraging interruptions, reducing external distractions, and active supervision.

Mistakes:
Mistakes are failures of planning: a plan is expected to achieve the desired outcome but, due to inexperience or poor information, the plan is not appropriate. People with less knowledge and experience may be more likely to make mistakes. Mistakes are not committed ‘on purpose’; as such, disciplinary action is an inappropriate response to these types of error.
 Mistakes can be minimised and mitigated through robust competency assurance processes, good quality training, proactive supervision, and a team climate in which co-workers are comfortable observing and challenging each other. 
 Mistakes can be rule-based or knowledge-based.
Rule-based mistakes refer to situations where the use or disregard of a particular rule or set of rules results in an undesired outcome. Some rules that are appropriate for use in one situation will be inappropriate in another.  
Knowledge-based mistakes result from ‘trial and error’. In these cases, there is insufficient knowledge about how to perform the activity to achieve an accurate result, so the person proceeds by trial and error.

Violation:
Failure to apply a good rule is also known as a violation. Violations are classified as human error when the intentional action does not achieve the desired outcome, or results in unanticipated adverse consequences. Violations tend to be well-intentioned, targeting desired outcomes such as task completion and simplification. 

Note: Violations are classified as human error only when they fail to achieve the desired outcome. Where a violation does achieve the desired outcome, and does not cause any other undesired outcomes, this is not human error. These types of violations may include violation of a bad rule, such as a procedure that, if followed correctly, would give unexpected data. In such cases, a review of the rules and procedures is advisable.