Method: 2P Weibull distribution and Good Linear Unbiased Estimators (EN 12603)#

Statistical evaluation of failure stresses.

The procedure is as follows:

  • Calculate the stress at failure for each specimen and sort them in increasing order of magnitude: \( \sigma_i \), where \( i \) is the rank number.

  • Calculate the probability of failure \( P_{f,i} \) for each measured value. Use the following estimator: \( P_{f,i} = \frac{i-0.3}{n+0.4} \) (Median rank probability estimator) where \( n \) is the number of tested samples.

  • Determine the Weibull parameters \( \beta \) (shape parameter) and \( \theta \) (scale parameter) using the Good Linear Unbiased Estimators according to section 6.2.

  • Estimation of the confidence interval for the shape (\(\beta\)) parameter is given in 8.1.

  • Estimation of the confidence intervals for G(x) is given in 8.2 and the results of the computations are given in Table A.2.

  • Estimation of the confidence intervals of the scale (\(\theta\)) parameter is given in 8.3.

  • The desired confidence intervals for x when G(x) is given are determined numerically by the method in 8.4.1.

  • Put the Weibull distribution into linearized form. Plot the measured data on the Weibull probability grid.

  • Determine the x% fractile value of the bending tensile strength \( f_y \) using the regression line and the confidence interval.

  • Plot the cumulative distribution function \( F(x) \) and the density function \( f(x) \).
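The first two steps of the procedure can be sketched directly. The stress values below are hypothetical, purely for illustration; the estimator is the median rank formula \( P_{f,i} = \frac{i-0.3}{n+0.4} \) quoted above:

```python
import numpy as np

# Hypothetical failure stresses in N/mm2 (illustrative only)
stresses = np.array([48.2, 61.5, 55.1, 43.7, 58.9])

# Step 1: sort in increasing order of magnitude
sigma = np.sort(stresses)

# Step 2: median rank probability estimator P_f,i = (i - 0.3) / (n + 0.4)
n = len(sigma)
i = np.arange(1, n + 1)
P_f = (i - 0.3) / (n + 0.4)

for s_i, p in zip(sigma, P_f):
    print(f"sigma = {s_i:5.1f} N/mm2 -> P_f = {p:.4f}")
```

Note that the estimated failure probabilities depend only on the rank and the sample size, not on the stress magnitudes themselves.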

Note: You can use the 🚀 `Live Code` button in the top right to activate the interactive features and run the Python code directly in the page!

Once "Live Code" is enabled, it is advised to "Run All" cells first to load all the necessary packages and functions.

Afterwards, any changes can be made in the input form; clicking the "Evaluate" button records them.

Finally, the last two cells can be run individually by clicking the "Run" button to produce the Weibull plots.
Hide code cell source
import numpy as np
import pandas as pd
from scipy.stats import linregress, t, chi2, weibull_min
import matplotlib.pyplot as plt
from matplotlib.ticker import ScalarFormatter
import ipywidgets as widgets
from IPython.display import display, clear_output, HTML

%config InlineBackend.figure_format = 'svg'

Definition of useful functions and coefficients#

  • \(\kappa_{n}\) coefficients according to Table 3.

  • Calculation of \(\hat{\beta}\).

  • Convert failure stress to equivalent failure stress for a constant load given a reference time period.

  • Weibull PDF and CDF functions

  • Calculation of the Anderson-Darling goodness-of-fit metric \( p_{AD} \)

Hide code cell source
kappa_n_dict = {
    2: 0.6931,
    3: 0.9808,
    4: 1.1507,
    5: 1.2674,
    6: 1.3545,
    7: 1.1828,
    8: 1.2547,
    9: 1.3141,
    10: 1.3644,
    11: 1.4079,
    12: 1.4461,
    13: 1.3332,
    14: 1.3686,
    15: 1.4004,
    16: 1.4293,
    17: 1.4556,
    18: 1.4799,
    19: 1.3960,
    20: 1.4192,
    21: 1.4408,
    22: 1.4609,
    23: 1.4797,
    24: 1.4975,
    25: 1.5142,
    26: 1.4479,
    27: 1.4642,
    28: 1.4796,
    29: 1.4943,
    30: 1.5083,
    31: 1.5216,
    32: 1.4665,
    33: 1.4795,
    34: 1.4920,
    35: 1.5040,
    36: 1.5156,
    37: 1.5266,
    38: 1.4795,
    39: 1.4904,
    40: 1.5009,
    41: 1.5110,
    42: 1.5208,
    43: 1.5303,
    44: 1.4891,
    45: 1.4984,
    46: 1.5075,
    47: 1.5163,
    48: 1.5248,
    49: 1.5331,
    50: 1.5411,
    51: 1.5046,
    52: 1.5126,
    53: 1.5204,
    54: 1.5279,
    55: 1.5352,
    56: 1.5424,
    57: 1.5096,
    58: 1.5167,
    59: 1.5236,
    60: 1.5304
}
def get_kappa_n(n):
    if n > 60:
        return 1.5692
    if n in kappa_n_dict:
        return kappa_n_dict[n]
    raise ValueError(f"No kappa_n coefficient available for n = {n}")
Hide code cell source
def calculate_beta_hat(n, kappa_n, stress_data, s):
    # Good Linear Unbiased Estimator of the shape parameter (see 6.2)
    # Calculate the sum terms
    sum1 = np.sum(np.log(stress_data[s:n]))
    sum2 = np.sum(np.log(stress_data[:s]))

    # Calculate beta
    beta_hat = (n * kappa_n) / ((s / (n - s)) * sum1 - sum2)

    return beta_hat

def equivalent_failure_stress(sigma_f, t_f, t_eq=60, n=16):
    """
    Convert failure stress to equivalent failure stress for a constant load.
    
    Parameters:
    sigma_f (float): Original failure stress.
    t_f (float): Actual failure time.
    t_eq (int): Reference time period (in seconds). Default is 60s.
    n (int): Exponent value. Default is 16.
    
    Returns:
    float: Equivalent failure stress for a given reference period.
    """
    try:
        if t_eq not in [1, 3, 5, 60]:
            raise ValueError("Reference time period must be one of: 1, 3, 5, or 60 seconds.")
        sigma_f_eq = sigma_f * (t_f / (t_eq*(n+1))) ** (1 / n)
        return sigma_f_eq
    
    except ValueError as e:
        print(e)
        return None

# Weibull PDF
def weibull_pdf(x, theta, beta):
    return (beta / theta) * (x / theta)**(beta - 1) * np.exp(-(x / theta)**beta)

# Weibull CDF
def weibull_cdf(x, theta, beta):
    return 1 - np.exp(-(x / theta)**beta)

def goodness_of_fit_test_weibull(data, beta, theta):
    # Fit the Weibull distribution using the provided parameters
    fitted_dist = weibull_min(c=beta, scale=theta)
    
    # Sort the data
    data_sorted = np.sort(data)
    n = len(data_sorted)

    # Calculate the cumulative distribution function values of the fitted distribution
    cdf_vals = fitted_dist.cdf(data_sorted)

    # Compute the AD^2 component
    AD2 = -n - np.sum((2 * np.arange(1, n+1) - 1) / n * 
                      (np.log(cdf_vals) + np.log(1 - cdf_vals[::-1])))

    # Calculate AD*
    AD_star = (1 + 0.2 / np.sqrt(n)) * AD2

    # Compute the p-value using the equation provided
    p_AD = 1 / (1 + np.exp(-0.1 + 1.24 * np.log(AD_star) + 4.48 * AD_star))
    
    return p_AD
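The helpers above admit two quick sanity checks with round numbers (the values here are hypothetical, chosen only so the results are known in advance): the Weibull CDF evaluated at x = θ equals 1 - e⁻¹ ≈ 0.6321 for any shape β, and choosing a failure time of t_eq·(n+1) makes the time ratio in the equivalent-stress formula equal to 1, leaving the stress unchanged:

```python
import numpy as np

# Standalone copies of the helpers above, so this sketch is self-contained
def weibull_cdf(x, theta, beta):
    return 1 - np.exp(-(x / theta) ** beta)

def equivalent_failure_stress(sigma_f, t_f, t_eq=60, n=16):
    return sigma_f * (t_f / (t_eq * (n + 1))) ** (1 / n)

# At x = theta the Weibull CDF equals 1 - e^-1, whatever the shape beta
print(weibull_cdf(100.0, theta=100.0, beta=5.0))  # ~0.6321

# With t_f = t_eq * (n + 1) the time ratio is 1, so the stress is unchanged
print(equivalent_failure_stress(50.0, t_f=60 * 17, t_eq=60, n=16))  # 50.0
```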

Estimation of the confidence interval for the shape (\(\beta\)) parameter is given in 8.1#

Hide code cell source
def calculate_beta_hat_theta_hat(n,kappa_n,sorted_stress, s):
    # Point estimators
    beta_hat = calculate_beta_hat(n, kappa_n, sorted_stress, s)
    theta_hat = np.exp((1 / n) * np.sum(np.log(sorted_stress)) + 0.5772 / beta_hat)
    print(f"beta_hat is: {beta_hat:.2f} (dimensionless)")
    print(f"theta_hat is: {theta_hat:.2f} N/mm2")
    return beta_hat, theta_hat

def confidence_interval_beta(f1,beta_hat, alpha=0.05):

    # Estimation of the confidence interval for the shape parameter is given in 8.1
    # For the 95 % confidence intervals, 1 - α/2 = 0.975 and α/2 = 0.025
    df = f1  # Degrees of freedom based on Table 4
    print(f"Degrees of freedom are: {f1:.2f}")
    chi2_lower_quantile = chi2.ppf(alpha / 2, df)
    chi2_upper_quantile = chi2.ppf(1 - alpha / 2, df)
    print(f"Lower quantile: {chi2_lower_quantile:.2f}")
    print(f"Upper quantile: {chi2_upper_quantile:.2f}")

    # Calculate the upper and lower limits for beta
    beta_lower = (beta_hat * chi2_lower_quantile) / f1
    beta_upper = (beta_hat * chi2_upper_quantile) / f1
    print(f"Beta lower: {beta_lower:.2f}")
    print(f"Beta upper: {beta_upper:.2f}")
    return beta_lower, beta_upper
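To make the chi-square scaling above concrete, here is a standalone sketch with hypothetical inputs (n and β̂ are invented for illustration; the uncensored-sample degrees of freedom f1 = n(3.085 - 3.84/n) is the expression used later in this notebook):

```python
from scipy.stats import chi2

# Hypothetical values, for illustration only
n = 20
beta_hat = 6.5
f1 = n * (3.085 - 3.84 / n)  # degrees of freedom for an uncensored sample

alpha = 0.05
lo_q = chi2.ppf(alpha / 2, f1)       # lower chi-square quantile
hi_q = chi2.ppf(1 - alpha / 2, f1)   # upper chi-square quantile

# Scale the point estimate by the quantiles divided by the degrees of freedom
beta_lower = beta_hat * lo_q / f1
beta_upper = beta_hat * hi_q / f1
print(f"95% CI for beta: ({beta_lower:.2f}, {beta_upper:.2f})")
```

Since the mean of a chi-square distribution equals its degrees of freedom, the two quantile ratios bracket 1, so the interval always brackets β̂.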

Estimation of the confidence intervals for G(x) is given in 8.2 and the results of the computations are given in Table A.2.#

Hide code cell source
def auxiliary_factors(beta_hat,theta_hat, n, r_over_n, stress_or_theta):
    # Auxiliary factor y
    y = beta_hat*np.log(theta_hat/stress_or_theta)
    # Auxiliary factor v
    if r_over_n == 1:
        A = 1.162/n
        B = 0.6482/n + 0.805/n**2 + 1.13/n**3
        C = -0.2309/n + 0.15/n**2 + 1.78/n**3 
        v = A + B*y**2 - 2*C*y
    else:
        raise ValueError("The dataset is censored!")
    # Auxiliary factor gamma
    # Initialize f2 and H_f2 arrays with the same shape as v
    f2 = np.zeros_like(v)
    H_f2 = np.zeros_like(v)
    # Conditions for v <= 2
    condition1 = v <= 2
    f2[condition1] = (8 * v[condition1] + 12) / (v[condition1]**2 + 6 * v[condition1])
    H_f2[condition1] = (15 * f2[condition1]**2 + 5 * f2[condition1] + 6) / (15 * f2[condition1]**3 + 6 * f2[condition1])
    # Conditions for 2 < v <= 5
    condition2 = (v > 2) & (v <= 5)
    f2[condition2] = 3.509 - 1.3055 * v[condition2] + 0.2480 * v[condition2]**2 - 0.0175 * v[condition2]**3
    H_f2[condition2] = 0.08832 + 0.3218 * v[condition2] - 0.0167 * v[condition2]**2
    # Check if any value is outside the acceptable range
    if np.any((v < 0) | (v > 5)):
        raise ValueError("All elements of v must be between 0 and 5")
    gamma_ = np.exp(-y + H_f2)
    return gamma_, f2
# Confidence interval calculation for the P_f or G(x)
def G_ob(beta, theta, stress_or_theta,n, r_over_n, alpha=0.05):
    gamma_, f2 = auxiliary_factors(beta, theta, n, r_over_n, stress_or_theta) 
    df = f2  # Degrees of freedom
    chi2_upper_quantile = chi2.ppf(1 - alpha / 2, df)
    P_f_upper_limit = 1 - np.exp(-gamma_*(chi2_upper_quantile/f2))
    return P_f_upper_limit
def G_un(beta, theta, stress_or_theta,n, r_over_n, alpha=0.05):
    gamma_, f2 = auxiliary_factors(beta, theta, n, r_over_n, stress_or_theta) 
    df = f2  # Degrees of freedom
    chi2_lower_quantile = chi2.ppf(alpha / 2, df)
    P_f_lower_limit = 1 - np.exp(-gamma_*(chi2_lower_quantile/f2))
    return P_f_lower_limit

def confidence_intervals_Gx(beta_hat, theta_hat, n, r_over_n, sorted_stress):

    P_f_upper_limit_list = []
    P_f_lower_limit_list = []

    for stress in sorted_stress:
        P_f_upper_limit_list.append(float(G_ob(beta_hat, theta_hat, stress, n, r_over_n, alpha=0.05)))
        P_f_lower_limit_list.append(float(G_un(beta_hat, theta_hat, stress, n, r_over_n, alpha=0.05)))

    print(f"G(x)_ob upper limit: {P_f_upper_limit_list}")
    print(f"G(x)_un lower limit: {P_f_lower_limit_list}")

    return P_f_lower_limit_list, P_f_upper_limit_list

Estimation of the confidence intervals of the scale (\(\theta\)) parameter is given in 8.3.#

Hide code cell source
def confidence_interval_theta(beta_hat, theta_hat, n, r_over_n):
    theta_ob = theta_hat
    theta_un = theta_hat
    max_iterations = 1000
    tolerance=0.001
    i = 0
    for _ in range(max_iterations):

        new_theta_ob = theta_ob / (np.log(1 / (1 - G_un(beta_hat, theta_hat, theta_ob, n, r_over_n, alpha=0.05)))**(1/beta_hat))
        new_theta_un = theta_un / (np.log(1 / (1 - G_ob(beta_hat, theta_hat, theta_un, n, r_over_n, alpha=0.05)))**(1/beta_hat))
        # Check convergence
        if (abs(new_theta_ob - theta_ob) < tolerance  and
                abs(new_theta_un - theta_un) < tolerance ):
            break
        theta_ob = new_theta_ob
        theta_un = new_theta_un
        print(f"Iteration number {i} | (\u03B8)_ob: {theta_ob:.3f} | (\u03B8)_un: {theta_un:.3f}")
        i += 1

    theta_lower = theta_un
    theta_upper = theta_ob
    return theta_lower, theta_upper

The desired confidence intervals for x when G(x) is given are determined numerically by the method in 8.4.1.#

Hide code cell source
def confidence_interval_stress(beta_hat, theta_hat, n, r_over_n, beta_lower, beta_upper, G_x=0.05, alpha=0.05):
    # Given selected G(x) values
    G_x_values = [0.99, 0.95, 0.80, 0.6321, 0.10, 0.05, 0.02, 0.01]

    # Calculate x values for each G(x)
    x_values = [float(theta_hat * (-np.log(1 - G))**(1/beta_hat)) for G in G_x_values]

    # Print results
    for G, x in zip(G_x_values, x_values):
        print(f"For G(x) = {G:.4f}, \u03C3 = {x:.2f} [N/mm2]")

    P_f_upper_limit_selected_list = []
    P_f_lower_limit_selected_list = []

    for stress in x_values:
        P_f_upper_limit_selected_list.append(float(G_ob(beta_hat, theta_hat, stress, n, r_over_n, alpha=alpha)))
        P_f_lower_limit_selected_list.append(float(G_un(beta_hat, theta_hat, stress, n, r_over_n, alpha=alpha)))
    print("\n")
    print(f"P_f upper limit: {P_f_upper_limit_selected_list}")
    print(f"P_f lower limit: {P_f_lower_limit_selected_list}")

    G_x2 = G_x  # Replace with the desired failure probability 
    x2 = float(theta_hat * (np.log(1/(1 - G_x2)))**(1/beta_hat)) 
    print(f"The desired G(x) = {G_x2*100} % probability of failure and the corresponding stress value is x2 = {x2:.2f} [N/mm2]")
    # Find the smallest tabulated G(x) that is at least the target probability G_x2
    greater_than_G1 = [G for G in G_x_values if G >= G_x2]

    if greater_than_G1:
        nearest_G = min(greater_than_G1)
        index = G_x_values.index(nearest_G)
        x1_hat = x_values[index]
        print(f"The nearest x value greater than {G_x2*100}% is {x1_hat:.2f} [N/mm2] corresponding to G_hat = {nearest_G*100}%")
    else:
        raise ValueError(f"No tabulated G(x) value is at least the target of {G_x2*100}%!")


    G_un_x1 = P_f_lower_limit_selected_list[index]
    G_ob_x1 = P_f_upper_limit_selected_list[index]
    # Calculate x2,ob,z and x2,un,z using the given equations
    x2_ob_z = x1_hat * (np.log(1 - G_x2) / np.log(1 - G_un_x1))**(1/beta_upper)
    x2_un_z = x1_hat * (np.log(1 - G_x2) / np.log(1 - G_ob_x1))**(1/beta_lower)

    print(f"x2: {x2:.2f} [N/mm2]")
    print(f"x2,ob,z: {x2_ob_z:.2f} [N/mm2]")
    print(f"x2,un,z: {x2_un_z:.2f} [N/mm2]")
    
    return x2, x2_ob_z, x2_un_z
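The fractile inversion used above, \( x = \theta(-\ln(1-G))^{1/\beta} \), can be checked with round numbers. The parameters below are hypothetical; at G = 1 - e⁻¹ the formula must return θ itself, for any shape:

```python
import numpy as np

# Invert the Weibull CDF: stress at a chosen failure probability G(x)
def stress_at_fractile(theta, beta, G):
    return theta * (-np.log(1 - G)) ** (1 / beta)

# Hypothetical parameters, for illustration only
theta, beta = 100.0, 5.0

# At G = 1 - e^-1 the fractile is the scale parameter itself
print(stress_at_fractile(theta, beta, 1 - np.exp(-1)))  # 100.0

# The 5 % fractile lies below the scale parameter
print(stress_at_fractile(theta, beta, 0.05))
```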

Weibull plot: Graphical representation of the estimated distribution function according to 7.2#

Hide code cell source
def weibull_plot(dataset, iteration, Weibull_distribution_parameters, Regression_line_parameters, stress_x_percentile, stress_x_percentile_CI_upper, stress_x_percentile_CI_lower, beta_hat, theta_hat, beta_upper, beta_lower, theta_upper, theta_lower, sorted_stress,P_f_lower_limit_list,P_f_upper_limit_list, n, target_G_x=0.05, target_alpha=0.05):
    global min_stress_data, max_stress_data  # Declare these as global to modify the outer variables
    # Linearize the Weibull distribution
    ln_stress = np.log(sorted_stress)

    # G(x) values
    G1 = 0.6321
    G2 = 0.01

    # Calculate transformed y-values
    y1 = np.log(np.log(1/(1 - G1)))
    y2 = np.log(np.log(1/(1 - G2)))

    # Calculate x-values in log space
    x1 = np.log(theta_hat)
    x2 = np.log(theta_hat * (0.01005)**(1/beta_hat))

    # Calculate the slope (m)
    slope = (y2 - y1) / (x2 - x1)

    # Equation of the line in the form y = mx + c
    # Use y1 = m * x1 + c to find c
    intercept = y1 - slope * x1
    G_x_line = slope*ln_stress + intercept

    # Calculate x-values in log space
    x1_upper = np.log(theta_upper)
    x2_upper = np.log(theta_upper * (0.01005)**(1/beta_upper))

    # Calculate the slope (m)
    slope_upper = (y2 - y1) / (x2_upper - x1_upper)

    # Equation of the line in the form y = mx + c
    # Use y1 = m * x1 + c to find c
    intercept_upper = y1 - slope_upper * x1_upper
    G_x_line_upper = slope_upper*ln_stress + intercept_upper

    # Calculate x-values in log space
    x1_lower = np.log(theta_lower)
    x2_lower = np.log(theta_lower * (0.01005)**(1/beta_lower))

    # Calculate the slope (m)
    slope_lower = (y2 - y1) / (x2_lower - x1_lower)

    # Equation of the line in the form y = mx + c
    # Use y1 = m * x1 + c to find c
    intercept_lower = y1 - slope_lower * x1_lower
    G_x_line_lower = slope_lower*ln_stress + intercept_lower

    # Median rank estimator for the cumulative probability. 
    G_x = np.array([ (i -0.3)/(n + 0.4) for i in range(1,n+1)])

    ln_ln_G_x = np.log(np.log(1 / (1 - G_x)))
    ln_ln_G_x_ob = np.log(np.log(1 / (1 - np.array(P_f_upper_limit_list))))
    ln_ln_G_x_un = np.log(np.log(1 / (1 - np.array(P_f_lower_limit_list))))


    # Compute mean of observed values
    mean_ln_Pf = np.mean(ln_ln_G_x)

    # Calculate SS_res and SS_tot
    SS_res = np.sum((ln_ln_G_x - G_x_line) ** 2)
    SS_tot = np.sum((ln_ln_G_x - mean_ln_Pf) ** 2)

    # Calculate R^2
    R_squared = 1 - (SS_res / SS_tot)
    print(f"The R^2 value is: {R_squared}")

    print(f"slope is {slope} and intercept is {intercept}")

    scale = theta_hat
    shape = beta_hat
    
    Weibull_distribution_parameters[dataset] = (scale,shape)
    Regression_line_parameters[dataset] = (slope, intercept, R_squared)
    # Report the target fractile and its confidence interval
    print(f"{target_G_x*100}% Fractile Stress: {stress_x_percentile:.2f} MPa")
    print(f"{int((1-target_alpha)*100)}% Confidence Interval of the {target_G_x*100}% Fractile: ({stress_x_percentile_CI_lower:.2f}, {stress_x_percentile_CI_upper:.2f}) MPa")

    # List of marker symbols to cycle through
    # Uncomment more symbols for larger number of datasets
    markers = [
        'o',  # Circle
        'v',  # Triangle down
        '^',  # Triangle up
        's',  # Square
        'D',  # Diamond
        'p',  # Pentagon
        '*',  # Star
        'h',  # Hexagon 1
        'H',  # Hexagon 2
        #'.',  # Point
        #',',  # Pixel
        #'<',  # Triangle left
        #'>',  # Triangle right
        #'1',  # Tri-down
        #'2',  # Tri-up
        #'3',  # Tri-left
        #'4',  # Tri-right
        #'+',  # Plus
        #'x',  # X
        #'d',  # Thin diamond
        #'|',  # Vertical line
        #'_',  # Horizontal line
    ]
    # List of line styles to cycle through
    line_styles = [
        '-',    # Solid line
        '--',   # Dashed line
        '-.',   # Dash-dot line
        ':',    # Dotted line
        # Uncomment more styles for larger number of datasets
        # '',     # No line
    ]

    # List of colors to cycle through
    colors = [
        'b',    # Blue
        'g',    # Green
        'r',    # Red
        'c',    # Cyan
        # 'm',    # Magenta
        'y',    # Yellow
        'k',    # Black
        # Uncomment more colors for larger number of datasets
        # 'w',   # White
        # '#1f77b4', # Light blue
        # '#ff7f0e', # Orange
        # '#2ca02c', # Light green
        # '#d62728', # Light red
        # '#9467bd', # Purple
    ]
    # Select marker symbol from the list
    marker = markers[iteration % len(markers)]
    # Select line style and color from the lists
    line_style = line_styles[iteration % len(line_styles)]
    color = colors[iteration % len(colors)]
    # Creating a range of x values
    
    if min(sorted_stress) < min_stress_data:
        min_stress_data = min(sorted_stress)
    if max(sorted_stress) > max_stress_data:
        max_stress_data = max(sorted_stress)

    # Equation of the line in the form y = mx + c
    # Use y1 = m * x1 + c to find c
    x = np.linspace(0,1000,1000)
    y = weibull_cdf(x, scale, shape)
    if display_confidence_intervals_ON:
        y_lower = weibull_cdf(x, theta_lower, beta_lower)
        y_upper = weibull_cdf(x, theta_upper, beta_upper)
        plt.fill_between(x, y_upper, y_lower, color='grey', alpha=0.3, label=f'{int((1-target_alpha)*100)}% Confidence interval')
    plt.scatter(sorted_stress, G_x, color= color, s=6, label=f'{dataset}; \u03B2={shape:.2f} & \u03B8={scale:.2f}',marker = marker)
    plt.plot(x, y, color=color)

    
    

Interactive Data Input#

This widget allows users to input datasets, set analysis parameters, and optionally convert failure stress values.

What can be done:#

Choose the number of datasets (default: 3).
Enter dataset names and values (comma-separated).
Set target values:

  • stress fractile (default: 5%)

  • confidence interval: Enter the alpha value. (default: 95%)

  • x-limits (default: lower limit = min_stress_level - 20, upper limit = max_stress_level + 20)

Turn confidence intervals ON/OFF.

Convert failure stress (optional):

  • Toggle conversion of failure stress to equivalent failure stress for a selected reference period ON/OFF, by clicking Yes/No respectively.

  • Enter time-to-failure values.

  • Choose a reference time period (1s, 3s, 5s, 60s).

How It Works:#

1️⃣ Click “Confirm” to generate input fields for the failure stress datasets.
2️⃣ (Optional) Click “Yes” and enter the time-to-failure values corresponding to each failure stress dataset.
3️⃣ Enter values and click “Evaluate” to process the data.

Note: The script checks for errors in the input or mismatch in the dimensions between the time-to-failure datasets and the failure stress datasets.
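The two checks the note mentions, parsing the comma-separated text and comparing dataset lengths, can be sketched outside the widget machinery (the input strings below are hypothetical, with a deliberate length mismatch):

```python
import numpy as np

def parse_csv_floats(text):
    """Parse a comma-separated string of numbers, skipping blank entries."""
    return np.array([float(x.strip()) for x in text.split(",") if x.strip()])

# Hypothetical inputs, mimicking the two text fields of the widget
stress_text = "45.2, 51.8, 48.0"
time_text = "820, 790"  # one value short, on purpose

stresses = parse_csv_floats(stress_text)
times = parse_csv_floats(time_text)

# Dimension check between failure stresses and times to failure
if len(stresses) != len(times):
    print(f"Mismatch: {len(stresses)} stress values vs {len(times)} time-to-failure values")
```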
Hide code cell source
# Create an HTML widget for instructions
instructions = widgets.HTML(
    value="""
    <p style="font-size: 16px">Enter the data separated by commas, with decimal point (e.g. "1.44, 2.33, 4.22, 3.01,...")</p>
    <b style="font-size: 16px">Data protection declaration: The data entered will not be saved or transmitted over the network.</b>
    """
)

blank_space = widgets.HTML(
    value="""<br>"""
)


# Widget to ask user how many datasets they want
num_datasets_input = widgets.IntText(
    value=3,  # Default value
    description="Number of Datasets:",
    style={'description_width': 'auto'},
    layout=widgets.Layout(width='250px')
)

# Button to confirm number of datasets
confirm_button = widgets.Button(description="Confirm")

# Container to hold dataset input fields dynamically
dataset_container = widgets.VBox()

# Container for target values
target_inputs = widgets.VBox()

# Toggle button for confidence interval display (Boolean)
confidence_toggle = widgets.ToggleButton(
    value=False,  # Default: Show Confidence Intervals
    description="Show Confidence",
    tooltip="Click to toggle confidence interval display"
)

# Button to evaluate inputs
evaluate_button = widgets.Button(description="Evaluate", disabled=True)

# Output container for display
output = widgets.Output()

# Variables to hold dynamic widgets
dataset_name_inputs = []  # List to hold dataset name input widgets
dataset_value_inputs = []  # List to hold dataset text input widgets
time_to_failure_inputs = [] # List to hold time to failure input widgets

# Conversion Widget
convert_toggle = widgets.ToggleButtons(
    options=["No", "Yes"],
    description="Convert to equivalent constant failure stress for a reference time period?",
    style={'description_width': 'auto'},
)

# Dropdown for conversion factor
conversion_factor_dropdown = widgets.Dropdown(
    options=[1, 3, 5, 60],
    description="Select reference time period[s]:",
    style={'description_width': 'auto'},
    value=60 # Set the default value to 60 seconds
)

additional_conversion_inputs = widgets.VBox()
# Function to show/hide dropdown based on toggle selection
def handle_conversion(change):
    global time_to_failure_inputs
    if change['new'] == "Yes":
        conversion_factor_dropdown.layout.display = "block"
        num_datasets = num_datasets_input.value
        time_to_failure_inputs = []

        conversion_layout = []
        for i in range(num_datasets):
            time_period_input = widgets.Textarea(
            description=f"Time to failure {i+1}:",
            placeholder="Enter values separated by commas...",
            layout=widgets.Layout(width="500px", height="50px"),
            style={'description_width': 'auto'}
        )
            time_to_failure_inputs.append(time_period_input)
            conversion_layout.append(time_period_input)

        # Update the additional inputs container with the new input fields
        additional_conversion_inputs.children = conversion_layout
    else:
        conversion_factor_dropdown.layout.display = "none"
        additional_conversion_inputs.children = []

convert_toggle.observe(handle_conversion, names="value")

# Hide conversion factor dropdown initially
conversion_factor_dropdown.layout.display = "none"

# Function to generate input fields dynamically
def generate_inputs(event):
    global dataset_name_inputs, dataset_value_inputs
    
    num_datasets = num_datasets_input.value
    dataset_name_inputs = []
    dataset_value_inputs = []

    # Lists to arrange input areas
    dataset_layout = []
    
    for i in range(num_datasets):
        
        # Dataset Name Input
        dataset_name = widgets.Text(
            description=f"Name {i+1}:",
            placeholder=f"Dataset {i+1}",
            layout=widgets.Layout(width="250px"),
            style={'description_width': 'auto'}
        )
        dataset_name_inputs.append(dataset_name)
        
        # Create dataset input field (values)
        dataset_text = widgets.Textarea(
            description=f"Values {i+1}:",
            placeholder="Enter values separated by commas...",
            layout=widgets.Layout(width="500px", height="50px"),
            style={'description_width': 'auto'}
        )
        dataset_value_inputs.append(dataset_text)
        

        # Add the widgets to the layout
        dataset_layout.append(widgets.HBox([dataset_name, dataset_text]))

    # Create input fields for Target Stress Fractile and Confidence Interval
    target_g_text = widgets.FloatText(
        description="Target stress fractile:",
        value=0.05, layout=widgets.Layout(width="300px"),
        style={'description_width': 'auto'}
    )
    
    target_alpha_text = widgets.FloatText(
        description="Target confidence interval:",
        value=0.05, layout=widgets.Layout(width="300px"),
        style={'description_width': 'auto'}
    )

    # Input fields for x limits
    lower_x_limit_text = widgets.FloatText(
        description="Lower x limit:",
        layout=widgets.Layout(width="300px"),
        style={'description_width': 'auto'}
    )
    
    upper_x_limit_text = widgets.FloatText(
        description="Upper x limit:",
        layout=widgets.Layout(width="300px"),
        style={'description_width': 'auto'}
    )

    # Update the target inputs container
    target_inputs.children = [target_g_text, target_alpha_text, lower_x_limit_text, upper_x_limit_text]

    # Update the dataset container with new input fields
    dataset_container.children = dataset_layout

    # Enable the evaluate button
    evaluate_button.disabled = False

# Attach event to confirm button
confirm_button.on_click(generate_inputs)

# Function to evaluate inputs
def evaluate(event):
    with output:
        clear_output()
        global stress_values,target_G_x,target_alpha,display_confidence_intervals_ON, lower_x_limit, upper_x_limit, convert_to_equivalent_stress, reference_period, time_to_failure, equivalent_stress_values
        stress_values = {}
        time_to_failure = {}
        equivalent_stress_values = {}

        try:
            # Get user's choice from the toggle button
            show_confidence_intervals = confidence_toggle.value  
            # Extract dataset values
            for i in range(len(dataset_value_inputs)):
                dataset_name = dataset_name_inputs[i].value.strip() or f"Dataset {i+1}"
                try:
                    data_values = [float(x.strip()) for x in dataset_value_inputs[i].value.split(",") if x.strip()]
                except ValueError as parse_error:
                    print(f"Error parsing dataset values for '{dataset_name}': {parse_error}\n")
                    continue
                
                stress_values[dataset_name] = np.array(data_values)

            # Extract target values
            target_G_x = float(target_inputs.children[0].value)
            target_alpha = float(target_inputs.children[1].value)
            lower_x_limit = float(target_inputs.children[2].value)
            upper_x_limit = float(target_inputs.children[3].value)
            # Display results
            for dataset_name in stress_values.keys():
                print(f"{dataset_name}: {stress_values[dataset_name]}")
            
            print(f"\nTarget stress fractile: {target_G_x * 100}%")
            print(f"Target confidence interval: {int((1 - target_alpha) * 100)}%")

             # Show confidence interval only if the toggle is ON
            if show_confidence_intervals:
                display_confidence_intervals_ON = True # Turn the confidence intervals ON in the Logarithmic CDF Weibull Plot
                print(f"Target confidence interval display: ON")
            else:
                display_confidence_intervals_ON = False # Turn the confidence intervals OFF in the Logarithmic CDF Weibull Plot
                print(f"Target confidence interval display: OFF")
                
            if lower_x_limit != 0 or upper_x_limit != 0:
                print(f"Lower x limit: {lower_x_limit}, Upper x limit: {upper_x_limit}")
            else:
                print(f"Default values for Lower x limit and Upper x limit.")

            # Handle conversion
            convert_to_equivalent_stress = convert_toggle.value
            reference_period = conversion_factor_dropdown.value if convert_to_equivalent_stress == "Yes" else None

            if convert_to_equivalent_stress == "Yes":
                print(f"\nConversion of failure stress: Yes, Reference time period [s]: {reference_period}")
                # Extract dataset values
                for i in range(len(time_to_failure_inputs)):
                    dataset_name = dataset_name_inputs[i].value.strip() or f"Dataset {i+1}"
                    try:
                        time_to_failure_values = [float(x.strip()) for x in time_to_failure_inputs[i].value.split(",") if x.strip()]
                    except ValueError as ttf_error:
                        print(f"Error parsing time-to-failure values for '{dataset_name}': {ttf_error}")
                        continue
                    # Handle data length mismatch
                    try:
                        if len(stress_values[dataset_name]) != len(time_to_failure_values):
                            raise ValueError(f"Error: Mismatch in data lengths for {dataset_name}: "
                                             f"{len(stress_values[dataset_name])} data values vs {len(time_to_failure_values)} time-to-failure values.")
                    except ValueError as length_error:
                        print(f"\n{length_error}")
                        continue
    
                    time_to_failure[dataset_name] = np.array(time_to_failure_values)
                    print(f"{dataset_name}; n = {len(time_to_failure[dataset_name])}; Time to failure values[s]: {time_to_failure_values}")

                    stress_data_eq = equivalent_failure_stress(stress_values[dataset_name],time_to_failure[dataset_name],reference_period)
                    equivalent_stress_values[dataset_name] = np.around(stress_data_eq,2)
                    print(f"The equivalent failure stress for {reference_period} seconds is {np.around(stress_data_eq,2)}")

            else:
                print("Conversion of failure stress: No")
            
        except ValueError:
            print("Error: Please enter valid comma-separated numbers or check your input.")

# Attach event to evaluate button
evaluate_button.on_click(evaluate)

# Arrange and display widgets
display(instructions,widgets.VBox([num_datasets_input, confirm_button, dataset_container, target_inputs,confidence_toggle, convert_toggle,conversion_factor_dropdown,additional_conversion_inputs,evaluate_button, output]))

Statistical evaluation: Uncensored sample#

Hide code cell source
if 'stress_values' not in globals():
    print("Please enter all the necessary input data in the above input fields!")
else:
    plt.figure(figsize=(10, 6))
    iteration = 0
    min_stress_data = 100000
    max_stress_data = -10000
    # Initialize an empty dictionary to store the scale & shape value of a Weibull distribution corresponding to each dataset.
    Weibull_distribution_parameters = {}
    Regression_line_parameters = {}
    for dataset, stress_data in stress_values.items():
        if stress_data.size > 0 and np.any(stress_data != 0):
            print(f"##############################################################")
            print(f"#  Statistical evaluation for {dataset} according to EN 12603  #")
            print(f"##############################################################")

            if convert_to_equivalent_stress == "Yes" and len(time_to_failure) > 0:
                stress_data = equivalent_stress_values[dataset]
                print(f"Failure stress is converted to equivalent failure stress for a reference period of {reference_period} seconds.")
            # Sort the stress values in increasing order
            sorted_stress = np.sort(stress_data)
            # Track the global min/max across datasets for the plot axis limits
            min_stress_data = min(min_stress_data, sorted_stress[0])
            max_stress_data = max(max_stress_data, sorted_stress[-1])

            # Calculate probability of failure (P_f,i)
            n = len(sorted_stress)
            kappa_n = get_kappa_n(n)
            print(f"Kappa_n for n = {n} is approximately {kappa_n:.4f}")
            s = int(np.floor(0.84*n))
            r = n
            r_over_n = r/n

            if r_over_n == 1:
                f1 = n*(3.085 - 3.84/n)  # f1 for uncensored samples (r/n = 1)
            # else:
            #     # For censored samples (r/n < 1), f1 must be determined separately,
            #     # e.g. by linear interpolation; not implemented here.

            print("Estimation of the confidence interval for the shape \u03B2 parameter is given in 8.1")
            print(f"___________________________________________________________________________________")
            print("\n")
            beta_hat, theta_hat = calculate_beta_hat_theta_hat(n,kappa_n,sorted_stress, s)
            beta_lower, beta_upper = confidence_interval_beta(f1,beta_hat, alpha=target_alpha)
            print("\n")

            print("Estimation of the confidence intervals for G(x) is given in 8.2 and the results of the computations are given in Table A.2.")
            print(f"__________________________________________________________________________________________________________________________")
            print("\n")
            P_f_lower_limit_list, P_f_upper_limit_list = confidence_intervals_Gx(beta_hat, theta_hat, n, r_over_n, sorted_stress)
            print("\n")

            print("Estimation of the confidence intervals of the scale \u03B8 parameter is given in 8.3.")
            print(f"____________________________________________________________________________________")
            print("\n")
            theta_lower, theta_upper = confidence_interval_theta(beta_hat, theta_hat, n, r_over_n)
            print("\n")

            print("The confidence intervals for x when G(x) is given are determined numerically by the method in 8.4.1.")
            print(f"_______________________________________________________________________________________________________")
            print("\n")
            x2,x2_ob_z, x2_un_z = confidence_interval_stress(beta_hat, theta_hat, n, r_over_n, beta_lower, beta_upper, G_x=target_G_x, alpha=target_alpha)
            print("\n")

            stress_x_percentile, stress_x_percentile_CI_upper, stress_x_percentile_CI_lower = round(x2,2),round(x2_ob_z,2), round(x2_un_z,2)
            # Weibull plot
            weibull_plot(dataset,
                         iteration,
                         Weibull_distribution_parameters,
                         Regression_line_parameters,  
                         stress_x_percentile, 
                         stress_x_percentile_CI_upper,
                         stress_x_percentile_CI_lower,
                         beta_hat, 
                         theta_hat,
                         beta_upper,
                         beta_lower,
                         theta_upper,
                         theta_lower, 
                         sorted_stress,
                         P_f_lower_limit_list,
                         P_f_upper_limit_list, 
                         n,
                         target_G_x=target_G_x, 
                         target_alpha=target_alpha)
            iteration += 1
            print("\n")

            # Determine the 0.8% fractile
            G_x_08 = 0.008
            x_08,x_08_ob_z, x_08_un_z = confidence_interval_stress(beta_hat, theta_hat, n, r_over_n, beta_lower, beta_upper, G_x=G_x_08, alpha=target_alpha)
            stress_08_percentile, stress_08_percentile_CI_upper, stress_08_percentile_CI_lower = round(x_08,2), round(x_08_ob_z,2), round(x_08_un_z,2)

            # Determine the 5% fractile
            G_x_5 = 0.05
            x_5,x_5_ob_z, x_5_un_z = confidence_interval_stress(beta_hat, theta_hat, n, r_over_n, beta_lower, beta_upper, G_x=G_x_5, alpha=target_alpha)
            stress_5_percentile, stress_5_percentile_CI_upper, stress_5_percentile_CI_lower = round(x_5,2), round(x_5_ob_z,2), round(x_5_un_z,2)
            # Determine the 50% fractile
            G_x_50 = 0.5
            x_50,x_50_ob_z, x_50_un_z = confidence_interval_stress(beta_hat, theta_hat, n, r_over_n, beta_lower, beta_upper, G_x=G_x_50, alpha=target_alpha)
            stress_50_percentile, stress_50_percentile_CI_upper, stress_50_percentile_CI_lower = round(x_50,2), round(x_50_ob_z,2), round(x_50_un_z,2)

            # Calculate the mean of the data
            mean = np.mean(stress_data)

            # Calculate the standard deviation of the data for a sample
            std_dev = np.std(stress_data, ddof=1)  # Use ddof=1 for sample standard deviation or ddof=0  for population standard deviation

            # Calculate the coefficient of variation
            coefficient_of_variation = (std_dev / mean) * 100
            p_value = goodness_of_fit_test_weibull(sorted_stress, beta_hat, theta_hat)

            # Define the data for the fractile results table
            table_data1 = [
                ("Fractile [%]", "Stress [MPa]", "95% CI lower [MPa]", "95% CI upper [MPa]"),
                ("0.8%", stress_08_percentile, stress_08_percentile_CI_lower, stress_08_percentile_CI_upper),  # Row 1
                ("5%", stress_5_percentile, stress_5_percentile_CI_lower, stress_5_percentile_CI_upper),       # Row 2
                ("50%", stress_50_percentile, stress_50_percentile_CI_lower, stress_50_percentile_CI_upper),   # Row 3
                (f"Selected {target_G_x*100}%", stress_x_percentile, stress_x_percentile_CI_lower, stress_x_percentile_CI_upper)  # Row 4
            ]

            # Generate HTML for the table
            html_content1 = f"""
            <table border='1' style='border-collapse: collapse;'>
                <caption style='caption-side: top; font-weight: bold;'>{dataset} (n={n})</caption>
            """
            for row in table_data1:
                html_content1 += "<tr>"
                for item in row:
                    html_content1 += f"<td style='padding: 5px;'>{item}</td>"
                html_content1 += "</tr>"
            html_content1 += "</table>"

            # Define the data for the summary statistics table
            table_data2 = [
                ("Min Stress [MPa]", "Max Stress [MPa]", "Mean Stress [MPa]", "Coeff. of variation [%]", "Goodness of fit, p<sub>AD</sub> [%]"),
                (round(min(sorted_stress),2), round(max(sorted_stress),2), round(np.mean(sorted_stress),2), round(coefficient_of_variation,2), round(p_value*100,2)),
            ]

            # Generate HTML for the table
            html_content2 = f"""
            <table border='1' style='border-collapse: collapse;'>
                <caption style='caption-side: top; font-weight: bold;'>{dataset} (n={n})</caption>
            """
            for row in table_data2:
                html_content2 += "<tr>"
                for item in row:
                    html_content2 += f"<td style='padding: 5px;'>{item}</td>"
                html_content2 += "</tr>"
            html_content2 += "</table>"


            # Add regression equation and R-squared below table 2
            slope = Regression_line_parameters[dataset][0]
            intercept = Regression_line_parameters[dataset][1]
            R_squared = Regression_line_parameters[dataset][2]
            html_content3 = f"""
            <p>Regression line for "<strong>{dataset}</strong>" (<strong>n={n}</strong>) is: <strong>y = {slope:.2f}x {'+' if intercept > 0 else '-'} {abs(intercept):.2f}; R<sup>2</sup>= {R_squared:.3f}</strong></p>
            """

            # Display the table using HTML
            display(HTML(html_content1),HTML(html_content2),HTML(html_content3))


    plt.xlabel('Failure stress, $\u03C3_f$ [MPa]')
    plt.ylabel('Probability of failure, $P_f$')
    plt.title('Logarithmic CDF Weibull Plot (EN12603)')

    probs = [0.001, 0.005, 0.01, 0.02, 0.05, 0.1, 0.25, 0.50, 0.75, 0.90, 0.96, 0.99]
    #num_ticks = 15 # Define how many intervals you want
    #xticks = np.linspace(0, max_stress_data, num_ticks).astype(int)
    xticks = [ 5, 10, 15, 20, 30, 45, 60, 100, 150, 200]

    # Draw vertical lines
    vertical_lines = xticks
    for x in vertical_lines:
        plt.axvline(x=x, color='grey', alpha = 0.5, linestyle='--', linewidth=0.5)
    #plt.axvline(x=45, color='r', alpha = 0.5, linestyle='--', linewidth=0.5)
    # Set custom y-ticks and log scale
    SMALL = 1.0e-20    # attempt to protect against invalid arguments to log
    def forwardY( p ):
        return np.log(np.fmax(SMALL, -np.log(np.fmax(SMALL,1-p))))

    def inverseY( q ):
        return 1 - np.exp(-np.exp(q))

    def forwardX(x):
        return np.log(np.fmax(SMALL,x))

    def inverseX(y):
        return np.exp(y)


    plt.ylim(bottom=0.001, top=0.99)
    plt.xscale('function', functions=(forwardX,inverseX))
    plt.yscale('function', functions=(forwardY,inverseY))
    plt.yticks(probs)

    # Set x-ticks and their visible labels
    plt.xticks(vertical_lines, [str(v) for v in vertical_lines])
    if lower_x_limit != 0 or upper_x_limit != 0:
        plt.xlim(left=lower_x_limit, right=upper_x_limit)
    else:
        plt.xlim(left=min_stress_data-20, right=max_stress_data+20)

    # Get handles and labels, then filter to make them unique
    handles, labels = plt.gca().get_legend_handles_labels()
    unique_labels_set = set()
    unique_handles_labels = []

    for handle, label in zip(handles, labels):
        if label not in unique_labels_set:
            unique_handles_labels.append((handle, label))
            unique_labels_set.add(label)

    # Construct separate lists for handles and labels, and position the legend
    if unique_handles_labels:
        unique_handles, unique_labels = zip(*unique_handles_labels)
        plt.legend(unique_handles, unique_labels, loc='upper left')
    #plt.grid(True)
    plt.grid(True, which='both', linestyle='--', linewidth=0.5)
    plt.show()
##############################################################
#  Statistical evaluation for Dataset 1 according to EN12603 #
##############################################################
Kappa_n for n = 32 is approximately 1.4665
Estimation of the confidence interval for the shape β parameter is given in 8.1
___________________________________________________________________________________


beta_hat is: 5.22
theta_hat is: 183.90 N/mm2
Degrees of freedom are: 94.88
Lower quantile: 69.821963
Upper quantile: 123.72
Beta lower: 3.84
Beta upper: 6.81


Estimation of the confidence intervals for G(x) is given in 8.2 and the results of the computations are given in Table A.2.
__________________________________________________________________________________________________________________________


G(x)_ob upper limit: [0.049685444520963795, 0.08276843951323543, 0.10541161885510708, 0.1570459626253179, 0.1570459626253179, 0.19039537804696027, 0.24200401770139346, 0.2560384938712513, 0.2632219671554391, 0.40725522438969086, 0.5368701132574196, 0.5986698115843654, 0.6370781733305309, 0.6761999693951293, 0.6858941146929407, 0.7520609631717019, 0.8042861988840907, 0.8209662252567776, 0.8370320494499955, 0.8448172988027853, 0.8524286154162333, 0.8598597129763695, 0.867104617847575, 0.8810137517504355, 0.8810137517504355, 0.9121869360234095, 0.9231472436389307, 0.9337051095485126, 0.9374706459091817, 0.946269292423232, 0.9853198904998124, 0.9998446384261677]
G(x)_un lower limit [0.003744356752770628, 0.009026737471015678, 0.013760282701091753, 0.02780009355897639, 0.02780009355897639, 0.03921210858068791, 0.0604436323571953, 0.06696228386136338, 0.07042145251162746, 0.15701808791445426, 0.2615787413448878, 0.3194048672097972, 0.35768244294610196, 0.39840954439264886, 0.40876615758295276, 0.4822782969518761, 0.5441019785903976, 0.564704978540629, 0.5850286635713495, 0.5950685522917065, 0.605018543504348, 0.6148725760066656, 0.6246249066697338, 0.643803112840087, 0.643803112840087, 0.6896296622231475, 0.7070331054999188, 0.7246983029867655, 0.7312597946497552, 0.7472498295167389, 0.8416463176775233, 0.9531539678812389]


Estimation of the confidence intervals of the scale θ parameter is given in 8.3.
____________________________________________________________________________________


Iteration number 0 | (θ)_ob: 198.171 | (θ)_un: 171.768
Iteration number 1 | (θ)_ob: 197.682 | (θ)_un: 170.654
Iteration number 2 | (θ)_ob: 197.676 | (θ)_un: 170.516
Iteration number 3 | (θ)_ob: 197.676 | (θ)_un: 170.498
Iteration number 4 | (θ)_ob: 197.676 | (θ)_un: 170.496


The confidence intervals for x when G(x) is given are determined numerically by the method in 8.4.1.
_______________________________________________________________________________________________________


For G(x) = 0.9900, σ = 246.35 [N/mm2]
For G(x) = 0.9500, σ = 226.88 [N/mm2]
For G(x) = 0.8000, σ = 201.44 [N/mm2]
For G(x) = 0.6321, σ = 183.89 [N/mm2]
For G(x) = 0.1000, σ = 119.53 [N/mm2]
For G(x) = 0.0500, σ = 104.14 [N/mm2]
For G(x) = 0.0200, σ = 87.12 [N/mm2]
For G(x) = 0.0100, σ = 76.22 [N/mm2]


P_f upper limit: [0.9993496490475925, 0.9885360094714094, 0.8974893396047517, 0.7602182271860963, 0.19782603542436272, 0.1170111418190658, 0.057395853442802736, 0.0331107128440592]
P_f lower limit [0.9342559809306203, 0.8538221036995953, 0.6674494024762354, 0.49169580976504457, 0.042002533996962854, 0.016525197136764147, 0.0047961482382131715, 0.0018735636950746315]
The desired G(x) = 5.0 % probability of failure and the corresponding stress value is x2 = 104.14 [N/mm2]
The nearest x value greater than 5.0% is 104.14 [N/mm2] corresponding to G_hat = 5.0%
x2: 104.14 [N/mm2]
x2,ob,z: 122.83 [N/mm2]
x2,un,z: 82.69 [N/mm2]


The R^2 value is: 0.8968531689831755
slope is 5.22316276934552 and intercept is -27.235537068283655
5.0% Fractile Stress: 104.14 MPa
95% Confidence Interval of 5% Fractile: (82.69, 122.83) MPa


The desired G(x) = 0.8 % probability of failure and the corresponding stress value is x2 = 73.02 [N/mm2]
The nearest x value greater than 0.8% is 76.22 [N/mm2] corresponding to G_hat = 1.0%
x2: 73.02 [N/mm2]
x2,ob,z: 94.37 [N/mm2]
x2,un,z: 52.50 [N/mm2]
The desired G(x) = 50.0 % probability of failure and the corresponding stress value is x2 = 171.43 [N/mm2]
The nearest x value greater than 50.0% is 183.89 [N/mm2] corresponding to G_hat = 63.21%
x2: 171.43 [N/mm2]
x2,ob,z: 184.54 [N/mm2]
x2,un,z: 152.37 [N/mm2]
Dataset 1 (n=32)

| Fractile [%] | Stress [MPa] | 95% CI lower [MPa] | 95% CI upper [MPa] |
| --- | --- | --- | --- |
| 0.8% | 73.02 | 52.5 | 94.37 |
| 5% | 104.14 | 82.69 | 122.83 |
| 50% | 171.43 | 152.37 | 184.54 |
| Selected 5.0% | 104.14 | 82.69 | 122.83 |

| Min Stress [MPa] | Max Stress [MPa] | Mean Stress [MPa] | Coeff. of variation [%] | Goodness of fit, p_AD [%] |
| --- | --- | --- | --- | --- |
| 84.1 | 253.6 | 170.49 | 24.87 | 0.28 |

Regression line for "Dataset 1" (n=32) is: y = 5.22x - 27.24; R² = 0.897

##############################################################
#  Statistical evaluation for Dataset 2 according to EN12603 #
##############################################################
Kappa_n for n = 24 is approximately 1.4975
Estimation of the confidence interval for the shape β parameter is given in 8.1
___________________________________________________________________________________


beta_hat is: 18.68
theta_hat is: 49.26 N/mm2
Degrees of freedom are: 70.20
Lower quantile: 48.924389
Upper quantile: 95.26
Beta lower: 13.02
Beta upper: 25.35


Estimation of the confidence intervals for G(x) is given in 8.2 and the results of the computations are given in Table A.2.
__________________________________________________________________________________________________________________________


G(x)_ob upper limit: [0.10217423363713107, 0.15420618043131362, 0.25973333473641447, 0.2685017110701733, 0.2866907372365529, 0.3147915744844344, 0.3447745398771451, 0.4113365004389966, 0.4113365004389966, 0.4594799541035174, 0.6081877929001551, 0.6504780785035563, 0.6504780785035563, 0.662655805377826, 0.8198990567138384, 0.8922555557841555, 0.9002761237815852, 0.9216861247581938, 0.928454993895401, 0.946680306267368, 0.946680306267368, 0.946680306267368, 0.9796832030314432, 0.9990682838132943]
G(x)_un lower limit [0.00846586494744539, 0.018581143978833703, 0.05160115373404983, 0.05512970874187573, 0.06284450444057421, 0.07582188993557881, 0.09110134010286408, 0.13036544488148005, 0.13036544488148005, 0.16335911253624935, 0.28865309776575443, 0.33026959980237913, 0.33026959980237913, 0.3427069784203668, 0.5214178562227281, 0.6189336954727177, 0.6308675504741571, 0.6645396889470916, 0.6758858433860568, 0.7088507119248075, 0.7088507119248075, 0.7088507119248075, 0.7868041229540791, 0.9000949570241068]


Estimation of the confidence intervals of the scale θ parameter is given in 8.3.
____________________________________________________________________________________


Iteration number 0 | (θ)_ob: 50.474 | (θ)_un: 48.198
Iteration number 1 | (θ)_ob: 50.441 | (θ)_un: 48.080
Iteration number 2 | (θ)_ob: 50.440 | (θ)_un: 48.062
Iteration number 3 | (θ)_ob: 50.440 | (θ)_un: 48.059


The confidence intervals for x when G(x) is given are determined numerically by the method in 8.4.1.
_______________________________________________________________________________________________________


For G(x) = 0.9900, σ = 53.46 [N/mm2]
For G(x) = 0.9500, σ = 52.24 [N/mm2]
For G(x) = 0.8000, σ = 50.53 [N/mm2]
For G(x) = 0.6321, σ = 49.26 [N/mm2]
For G(x) = 0.1000, σ = 43.67 [N/mm2]
For G(x) = 0.0500, σ = 42.02 [N/mm2]
For G(x) = 0.0200, σ = 39.98 [N/mm2]
For G(x) = 0.0100, σ = 38.51 [N/mm2]


P_f upper limit: [0.9996193898847218, 0.9913266338780917, 0.9091577680840396, 0.7778498450196948, 0.2167592577551667, 0.13100861182784096, 0.06586912602280537, 0.03862943348832393]
P_f lower limit [0.9166331691894436, 0.8318676987257538, 0.6444802661247502, 0.47012560662813807, 0.03607442268858163, 0.013584128718236, 0.003716284727533048, 0.0013871809969934201]
The desired G(x) = 5.0 % probability of failure and the corresponding stress value is x2 = 42.02 [N/mm2]
The nearest x value greater than 5.0% is 42.02 [N/mm2] corresponding to G_hat = 5.0%
x2: 42.02 [N/mm2]
x2,ob,z: 44.27 [N/mm2]
x2,un,z: 38.90 [N/mm2]


The R^2 value is: 0.9571055622562351
slope is 18.683971836293015 and intercept is -72.81477504979435
5.0% Fractile Stress: 42.02 MPa
95% Confidence Interval of 5% Fractile: (38.90, 44.27) MPa


The desired G(x) = 0.8 % probability of failure and the corresponding stress value is x2 = 38.05 [N/mm2]
The nearest x value greater than 0.8% is 38.51 [N/mm2] corresponding to G_hat = 1.0%
x2: 38.05 [N/mm2]
x2,ob,z: 41.27 [N/mm2]
x2,un,z: 34.08 [N/mm2]
The desired G(x) = 50.0 % probability of failure and the corresponding stress value is x2 = 48.31 [N/mm2]
The nearest x value greater than 50.0% is 49.26 [N/mm2] corresponding to G_hat = 63.21%
x2: 48.31 [N/mm2]
x2,ob,z: 49.43 [N/mm2]
x2,un,z: 46.42 [N/mm2]
Dataset 2 (n=24)

| Fractile [%] | Stress [MPa] | 95% CI lower [MPa] | 95% CI upper [MPa] |
| --- | --- | --- | --- |
| 0.8% | 38.05 | 34.08 | 41.27 |
| 5% | 42.02 | 38.9 | 44.27 |
| 50% | 48.31 | 46.42 | 49.43 |
| Selected 5.0% | 42.02 | 38.9 | 44.27 |

| Min Stress [MPa] | Max Stress [MPa] | Mean Stress [MPa] | Coeff. of variation [%] | Goodness of fit, p_AD [%] |
| --- | --- | --- | --- | --- |
| 41.26 | 53.17 | 47.87 | 6.69 | 15.51 |

Regression line for "Dataset 2" (n=24) is: y = 18.68x - 72.81; R² = 0.957

##############################################################
#  Statistical evaluation for Dataset 3 according to EN12603 #
##############################################################
Kappa_n for n = 14 is approximately 1.3686
Estimation of the confidence interval for the shape β parameter is given in 8.1
___________________________________________________________________________________


beta_hat is: 6.91
theta_hat is: 73.87 N/mm2
Degrees of freedom are: 39.35
Lower quantile: 23.926554
Upper quantile: 58.55
Beta lower: 4.20
Beta upper: 10.28


Estimation of the confidence intervals for G(x) is given in 8.2 and the results of the computations are given in Table A.2.
__________________________________________________________________________________________________________________________


G(x)_ob upper limit: [0.26032523175500333, 0.34804565953236277, 0.3535707051508711, 0.37965474245785835, 0.386849631710372, 0.4981135140786267, 0.6397409289304684, 0.6741875746890257, 0.8772765573180765, 0.8798691526266171, 0.9014700868148712, 0.9305284078738774, 0.9723217634251946, 0.9975108906573197]
G(x)_un lower limit [0.024026300775910503, 0.04917191885199901, 0.05115116826793842, 0.06118407334044407, 0.06415647143282899, 0.12228417104459366, 0.231700209364901, 0.26430563717624367, 0.5026691393935019, 0.5063550516604726, 0.5382095931299676, 0.5855006733224053, 0.6735154098683628, 0.7907942237914901]


Estimation of the confidence intervals of the scale θ parameter is given in 8.3.
____________________________________________________________________________________


Iteration number 0 | (θ)_ob: 80.604 | (θ)_un: 68.456
Iteration number 1 | (θ)_ob: 80.617 | (θ)_un: 67.667
Iteration number 2 | (θ)_ob: 80.619 | (θ)_un: 67.501
Iteration number 3 | (θ)_ob: 80.619 | (θ)_un: 67.465
Iteration number 4 | (θ)_ob: 80.619 | (θ)_un: 67.457
Iteration number 5 | (θ)_ob: 80.619 | (θ)_un: 67.455


The confidence intervals for x when G(x) is given are determined numerically by the method in 8.4.1.
_______________________________________________________________________________________________________


For G(x) = 0.9900, σ = 92.14 [N/mm2]
For G(x) = 0.9500, σ = 86.58 [N/mm2]
For G(x) = 0.8000, σ = 79.13 [N/mm2]
For G(x) = 0.6321, σ = 73.87 [N/mm2]
For G(x) = 0.1000, σ = 53.34 [N/mm2]
For G(x) = 0.0500, σ = 48.06 [N/mm2]
For G(x) = 0.0200, σ = 42.00 [N/mm2]
For G(x) = 0.0100, σ = 37.96 [N/mm2]


P_f upper limit: [0.9999066900515889, 0.9957718997707896, 0.9327813019782473, 0.815776856331291, 0.26434254015695924, 0.16757347827823021, 0.0890187058572729, 0.054208972599777416]
P_f lower limit [0.8620438482263995, 0.7726123771084055, 0.5894688499691345, 0.4213544139018254, 0.02493746053138801, 0.008433148156712922, 0.0019965347857691107, 0.0006662735120964713]
The desired G(x) = 5.0 % probability of failure and the corresponding stress value is x2 = 48.06 [N/mm2]
The nearest x value greater than 5.0% is 48.06 [N/mm2] corresponding to G_hat = 5.0%
x2: 48.06 [N/mm2]
x2,ob,z: 57.26 [N/mm2]
x2,un,z: 35.49 [N/mm2]


The R^2 value is: 0.9042639280235067
slope is 6.910233850866337 and intercept is -29.729755544705778
5.0% Fractile Stress: 48.06 MPa
95% Confidence Interval of 5% Fractile: (35.49, 57.26) MPa


The desired G(x) = 0.8 % probability of failure and the corresponding stress value is x2 = 36.75 [N/mm2]
The nearest x value greater than 0.8% is 37.96 [N/mm2] corresponding to G_hat = 1.0%
x2: 36.75 [N/mm2]
x2,ob,z: 48.36 [N/mm2]
x2,un,z: 23.94 [N/mm2]
The desired G(x) = 50.0 % probability of failure and the corresponding stress value is x2 = 70.05 [N/mm2]
The nearest x value greater than 50.0% is 73.87 [N/mm2] corresponding to G_hat = 63.21%
x2: 70.05 [N/mm2]
x2,ob,z: 75.59 [N/mm2]
x2,un,z: 59.74 [N/mm2]
Dataset 3 (n=14)

| Fractile [%] | Stress [MPa] | 95% CI lower [MPa] | 95% CI upper [MPa] |
| --- | --- | --- | --- |
| 0.8% | 36.75 | 23.94 | 48.36 |
| 5% | 48.06 | 35.49 | 57.26 |
| 50% | 70.05 | 59.74 | 75.59 |
| Selected 5.0% | 48.06 | 35.49 | 57.26 |

| Min Stress [MPa] | Max Stress [MPa] | Mean Stress [MPa] | Coeff. of variation [%] | Goodness of fit, p_AD [%] |
| --- | --- | --- | --- | --- |
| 53.14 | 87.54 | 68.77 | 16.08 | 26.65 |

Regression line for "Dataset 3" (n=14) is: y = 6.91x - 29.73; R² = 0.904

(Figure: Logarithmic CDF Weibull Plot (EN12603) with all datasets.)
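The regression lines printed above encode the Weibull parameters of the linearized form \( \ln(-\ln(1-P_f)) = \beta \ln\sigma - \beta \ln\theta \): the slope is the shape estimate and the scale follows as \( \hat{\theta} = \exp(-\text{intercept}/\text{slope}) \). A minimal cross-check, assuming the slope and intercept reported for Dataset 1 above:

```python
import numpy as np

# Slope and intercept of the regression line reported for Dataset 1
slope, intercept = 5.22316276934552, -27.235537068283655

beta_hat = slope                        # shape parameter (dimensionless)
theta_hat = np.exp(-intercept / slope)  # scale parameter in MPa

# x% fractile from the Weibull quantile function x = theta * (-ln(1 - G))**(1/beta)
x_5 = theta_hat * (-np.log(1.0 - 0.05)) ** (1.0 / beta_hat)
```

Here `theta_hat` evaluates to about 183.9 MPa and `x_5` to about 104.1 MPa, consistent with the θ̂ and 5% fractile printed for Dataset 1.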

Plot the cumulative distribution function \( F(x) \) and the density function \( f(x) \).#

Hide code cell source
if 'stress_values' not in globals():
    print("Please enter all the necessary input data in the above input fields!")
else:
    for dataset, stress_data in stress_values.items():
            if stress_data.size > 0 and np.any(stress_data != 0):
                if convert_to_equivalent_stress == "Yes" and len(time_to_failure) > 0:
                    stress_data = equivalent_stress_values[dataset]
                # Creating a range of x values
                x = np.linspace(1,max(stress_data)+30,1000)
                # Calculate PDF and CDF
                pdf_values = weibull_pdf(x, Weibull_distribution_parameters[dataset][0], Weibull_distribution_parameters[dataset][1])
                cdf_values = weibull_cdf(x, Weibull_distribution_parameters[dataset][0], Weibull_distribution_parameters[dataset][1])
                # Calculate the x% fractile
                target_percentile = 1 - target_G_x
                x_percentile = Weibull_distribution_parameters[dataset][0] * (-np.log(target_percentile))**(1/Weibull_distribution_parameters[dataset][1])
                # Plotting
                plt.figure(figsize=(12, 6))
                # Plot PDF
                plt.subplot(1, 2, 1)
                plt.plot(x, pdf_values, label=f"PDF (\u03B8={Weibull_distribution_parameters[dataset][0]:.2f}, \u03B2={Weibull_distribution_parameters[dataset][1]:.2f})")
                plt.title(f'Weibull Probability Density Function for {dataset}')
                plt.xlabel('\u03C3 [MPa]')
                plt.ylabel('Density')
                plt.grid(True)
                plt.legend(loc='upper left')
                # Plot CDF
                plt.subplot(1, 2, 2)
                plt.plot(x, cdf_values, label=f"CDF (\u03B8={Weibull_distribution_parameters[dataset][0]:.2f}, \u03B2={Weibull_distribution_parameters[dataset][1]:.2f})")
                plt.hlines(target_G_x, 0, x_percentile, color='black', linestyle='-')
                plt.vlines(x_percentile, 0, target_G_x, color='black', linestyle='-')
                # Add a circular marker at the target fractile
                plt.scatter([x_percentile], [target_G_x], color='black', zorder=5)
                # Annotate the target fractile
                plt.text(x_percentile + 4, target_G_x + 0.01, f'({x_percentile:.2f} MPa; {target_G_x*100}%)', fontsize=10)
                plt.title(f'Weibull Cumulative Distribution Function {dataset}')
                plt.xlabel('\u03C3 [MPa]')
                plt.ylabel('Cumulative Probability')
                
                plt.grid(True)
                plt.legend(loc='upper left')
                plt.tight_layout()
                plt.show()
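The helpers `weibull_pdf` and `weibull_cdf` called above are defined earlier in the notebook and not shown here; a minimal two-parameter Weibull implementation with the same (x, θ, β) argument order might look like this:

```python
import numpy as np

def weibull_pdf(x, theta, beta):
    """Density f(x) = (beta/theta) * (x/theta)**(beta - 1) * exp(-(x/theta)**beta)."""
    x = np.asarray(x, dtype=float)
    return (beta / theta) * (x / theta) ** (beta - 1.0) * np.exp(-((x / theta) ** beta))

def weibull_cdf(x, theta, beta):
    """Cumulative distribution F(x) = 1 - exp(-(x/theta)**beta)."""
    x = np.asarray(x, dtype=float)
    return 1.0 - np.exp(-((x / theta) ** beta))

# At x = theta the CDF equals 1 - 1/e ~ 0.632 for any shape parameter,
# which is why the tables above list G(x) = 0.6321 at sigma = theta_hat
check = weibull_cdf(183.90, 183.90, 5.22)
```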
    
(Figures: Weibull probability density and cumulative distribution functions for each dataset.)