What is an NP Chart?
An NP chart is a type of control chart used to monitor the number of defective (nonconforming) items in samples of constant size. It is named after its two parameters: n (the number of items in each sample) and p (the proportion of defective items), so the plotted statistic, np, is the count of defectives per sample.
The chart plots the number of defectives in each subgroup over time, allowing managers to identify changes in the production process or other factors that may be affecting quality control. The NP chart can detect shifts in the proportion of defectives, provided the subgroups are large enough to show several defectives on average.
When to use an NP Chart?
An NP chart is used to monitor the number of nonconforming (defective) items produced by a process. It requires a constant sample size, and the data collected must be binary: each item is either conforming or nonconforming.
The chart is particularly useful in processes where the proportion of nonconforming items is low but the consequences of nonconformity are severe, such as in manufacturing or healthcare. It helps to identify trends or shifts in the process that could be causing an increase in nonconforming items, and so helps to prevent further defects.
Guidelines for correct usage of NP Chart
 This chart should be used only for binomially distributed data, i.e., when the outcome of a test or observation can only be one of two categories. Example: good/bad, pass/fail, OK/defective, etc.
 If multiple defects can be counted on each unit inspected, then a C chart or U chart should be used instead of an NP or P chart.
 The data should be arranged in ascending order of time of sample collection.
 The data should be collected at approximately equally spaced time intervals.
 If the frequency is too high, it increases the data collection effort and cost.
 If the frequency is too low, the time window for detecting changes or finding a root cause (RCA) could be too wide.
 Select an optimum frequency based on knowledge and the criticality of the process.

Each subgroup should be a collection of samples collected within a short period of time.

Typically, a subgroup consists of consecutive parts from a sampling point, manufacturing machine, or line.

The idea of a subgroup is that the time window is short enough to ensure there is no (or insignificant) variation in conditions such as personnel, settings, batch, environmental conditions, etc.

Collecting the right subgroups helps in distinguishing common causes from special causes.

 The subgroups must be large enough to show, on average, 5 or more defectives in the NP chart. For example, if the defective rate is 0.06, then the subgroup size should be roundup(5/0.06) = 84. If a subgroup smaller than this optimum is used, the ability to detect special causes is diminished.
 The control limits should be calculated based on 25 or more subgroups. Preliminary analysis can be done using 15 or more subgroups.
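The subgroup-size guideline above (expected defectives n × p ≥ 5) can be sketched in a few lines of Python; the function name is illustrative, not part of any tool:

```python
import math

def min_subgroup_size(p_bar, min_defectives=5):
    """Smallest subgroup size n such that n * p_bar >= min_defectives,
    i.e. the subgroup shows at least 5 defectives on average."""
    return math.ceil(min_defectives / p_bar)

# Matches the example above: defective rate 0.06 -> roundup(5/0.06) = 84
print(min_subgroup_size(0.06))  # 84
```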
Alternatives: When not to use NP chart
 If you can count the number of defects on each item, use a C chart or U chart instead.

If your subgroup sizes are not equal, you can still use an NP chart; however, the center line will not be straight. For unequal subgroup sizes, using a P chart will generate a straight center line.
NP Chart Example
A bulb manufacturing company wants to monitor the defective rate of bulbs from a specific line. The quality engineer decides to collect samples every 2 hours and record the number of defective bulbs. Sample data for the NP chart is as follows:
 After gathering the data, the engineer uses the standard formulas to find p-bar, np-bar, the Upper Control Limit (UCL), and the Lower Control Limit (LCL).
 Now, after calculating p-bar, np-bar, UCL, and LCL, she analyzes the data with the help of https://qtools.zometric.com/
 After using the above-mentioned tool, she obtains the resulting chart as follows:
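Since the example's raw data is not reproduced here, the following Python sketch uses hypothetical defective counts to show how p-bar, np-bar, UCL, and LCL are typically computed for an NP chart using the standard 3-sigma binomial limits:

```python
import math

# Hypothetical data: defective bulbs found in 10 subgroups of 100 bulbs each
# (illustrative only, not the data from the example above)
n = 100
defectives = [6, 4, 7, 5, 8, 3, 6, 5, 9, 4]

p_bar = sum(defectives) / (n * len(defectives))  # overall defective proportion
np_bar = n * p_bar                               # center line of the NP chart
sigma = math.sqrt(np_bar * (1 - p_bar))          # binomial standard deviation
ucl = np_bar + 3 * sigma                         # Upper Control Limit
lcl = max(0.0, np_bar - 3 * sigma)               # LCL cannot go below zero

print(f"p-bar={p_bar:.3f}, np-bar={np_bar:.1f}, UCL={ucl:.2f}, LCL={lcl:.2f}")
```

Any subgroup whose defective count falls outside the [LCL, UCL] band would be flagged as a possible special cause.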
How to generate an NP Chart?
The guide is as follows:
 Log in to your QTools account: https://qtools.zometric.com/
 On the home page, you will see NP Chart under Control Charts.
 Click on NP Chart to reach the dashboard.
 Next, enter the data manually, or copy (Ctrl+C) the data from an Excel sheet and paste (Ctrl+V) it here.
 Next, select the desired Check Rules.
 Finally, click on Calculate at the bottom of the page to get the desired results.
On the NP Chart dashboard, the window is divided into two parts.
On the left is the Data Pane. Data can be entered manually, or you can copy (Ctrl+C) the data from an Excel sheet and paste (Ctrl+V) it here.
On the right part, there are many options present as follows:
 Process proportion: If process proportion is provided, this value is considered to be the centerline. If not, Zometric QTools calculates the centerline from the data provided.
 Check Rule 1: 1 point > K Stdev from center line: The default for rule 1 is K=3. Rule 1 tests for points that are outliers. Given the assumption that the data is normally distributed and the process is stable (i.e., no external causes of variation), the probability of any point falling beyond the 3-standard-deviation limits is extremely low. Rule 1 indicates the possibility of a sudden change and calls for further RCA into any causes that could be "sudden" in nature.
 Check Rule 2: K points in a row on same side of center line: The default for rule 2 is K=9. If there are K points in a row on the same side of the center line, it suggests that there may be a bias or trend in the data causing the values to cluster together. This could be due to a variety of factors, such as measurement error, sampling bias, or a shift in the mean because of an incorrect setting.
 Check Rule 3: K points in a row, all increasing or all decreasing: The default for rule 3 is K=6. If K points in a row are continually increasing or decreasing, it indicates a gradual trend and calls for investigating possible root causes that are gradual in nature, e.g., loosening of a nut or setting, accumulation of dirt, tool wear and tear, leakage, lubricant level, etc.
 Check Rule 4: K points in a row, alternating up and down: The default for rule 4 is K=14. If K points in a row alternate up and down, the data fails the test of "randomness". More likely than not, the data is being fudged, or data from two separate sources is being mixed up alternately.
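As a rough sketch of how the first two check rules can be evaluated programmatically (the function names and toy data are illustrative, not QTools internals):

```python
def rule1_violations(points, center, sigma, k=3):
    """Rule 1: indices of points more than k std devs from the center line."""
    return [i for i, x in enumerate(points) if abs(x - center) > k * sigma]

def rule2_violations(points, center, k=9):
    """Rule 2: start indices of runs of k points on one side of the center line."""
    hits = []
    for i in range(len(points) - k + 1):
        window = points[i:i + k]
        if all(x > center for x in window) or all(x < center for x in window):
            hits.append(i)
    return hits

# Toy defective counts with one obvious outlier (index 4)
pts = [5, 6, 4, 5, 14, 5, 6, 4]
print(rule1_violations(pts, center=5.7, sigma=2.32))  # [4]
print(rule2_violations(pts, center=5.7))              # []
```

Rules 3 and 4 can be checked with similar sliding-window scans over consecutive differences.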