GWRATES complete examples

Short BBH (Binary Black Hole) example with three detectors

  • This part of the notebook is a short example that simulates binary black hole mergers and calculates their rate (\(yr^{-1}\)).

  • All generated data is saved in the ler_data folder.

  • All interpolation data is saved in the interpolator_pickle folder.

[11]:
# call the GWRATES class
from ler.rates import GWRATES
  • class initialization

  • if you want the models and their parameters to be printed:

    ler = GWRATES()

  • set ‘npool’ according to your machine’s available CPU cores. Default is 4.

  • to check the number of available cores:

    import multiprocessing as mp

    print(mp.cpu_count())

[12]:
ler = GWRATES(verbose=False, create_new_interpolator=False)
  • simulation of the GW CBC population

  • this will generate a json file with the simulated population parameters

  • by default, 100,000 events will be sampled in batches of 50,000.

  • results will be saved as a json file in the ler_data directory.

  • resume=True will resume the simulation from the last saved batch.

  • if you need to save the file at the end of each batch sampling, set save_batch=True.
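
  • for example, the batch-related options above can be combined as in this sketch (an illustrative call, not run here):

    param = ler.gw_cbc_statistics(size=100000, resume=True, save_batch=True)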

[13]:
param = ler.gw_cbc_statistics(size=100000, resume=False)
Simulated GW params will be stored in ./ler_data/gw_param.json
chosen batch size = 50000 with total size = 100000
There will be 2 batche(s)
Batch no. 1
sampling gw source params...
calculating snrs...
Batch no. 2
sampling gw source params...
calculating snrs...
saving all gw_params in ./ler_data/gw_param.json...
  • generate detectable events

  • note: no input param is provided here, so it will use the json file generated above

  • final rate is the rate of detectable events

[14]:
rate, param_detectable = ler.gw_rate()
getting gw_params from json file ./ler_data/gw_param.json...
total gw rate (yr^-1) (with step function): 434.79616560249093
number of simulated gw detectable events: 420
number of all simulated gw events: 100000
storing detectable gw params in ./ler_data/gw_param.json
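  • For intuition: the rate above is effectively the total merger rate within z_max rescaled by the Monte Carlo detection fraction. A minimal sketch of this assumed scaling (not the exact internal implementation):

    n_detectable = 420                              # from the output above
    n_total = 100000
    detection_fraction = n_detectable / n_total     # ~0.0042
    # implied total merger rate within z_max under this assumed normalization (yr^-1)
    print(434.79616560249093 / detection_fraction)  # ~1.04e5
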
  • look for available parameters

  • Note: This is for spin-less systems.

[15]:
param_detectable.keys()
[15]:
dict_keys(['zs', 'geocent_time', 'ra', 'dec', 'phase', 'psi', 'theta_jn', 'luminosity_distance', 'mass_1_source', 'mass_2_source', 'mass_1', 'mass_2', 'L1', 'H1', 'V1', 'optimal_snr_net'])
  • all GWRATES initialization parameters, the simulated parameters' json file names, and the rate results are stored as a json file in the ler_data directory by default.

  • Now, let's see all the GWRATES class initialization parameters. These can be obtained either from the json file './ler_data/gwrates_params.json' or from the ler object.

[16]:
# what are the saved files?
ler.json_file_names
[16]:
{'gwrates_param': 'gwrates_params.json',
 'gw_param': 'gw_param.json',
 'gw_param_detectable': 'gw_param_detectable.json'}
[17]:
# let me introduce two of the most-used functions in ler
from ler.utils import load_json, append_json
ler_params = load_json('ler_data/gwrates_params.json')
# ler_params = ler.gw_param_sampler_dict  # if you want to get it as an object
ler_params
[17]:
{'npool': '4',
 'z_min': '0.0',
 'z_max': '10.0',
 'size': '100000',
 'batch_size': '50000',
 'cosmology': 'LambdaCDM(H0=70.0 km / (Mpc s), Om0=0.3, Ode0=0.7, Tcmb0=0.0 K, Neff=3.04, m_nu=None, Ob0=None)',
 'snr_finder': '<bound method GWSNR.snr of <gwsnr.gwsnr.GWSNR object at 0x29cb6e680>>',
 'json_file_names': "{'gwrates_param': 'gwrates_params.json', 'gw_param': 'gw_param.json', 'gw_param_detectable': 'gw_param_detectable.json'}",
 'interpolator_directory': './interpolator_pickle',
 'gw_param_sampler_dict': {'z_min': '0.0',
  'z_max': '10.0',
  'event_type': 'BBH',
  'source_priors': "{'merger_rate_density': 'merger_rate_density_bbh_popI_II_oguri2018', 'source_frame_masses': 'binary_masses_BBH_popI_II_powerlaw_gaussian', 'zs': 'sample_source_redshift', 'geocent_time': 'sampler_uniform', 'ra': 'sampler_uniform', 'dec': 'sampler_cosine', 'phase': 'sampler_uniform', 'psi': 'sampler_uniform', 'theta_jn': 'sampler_sine'}",
  'source_priors_params': "{'merger_rate_density': {'R0': 2.39e-08, 'b2': 1.6, 'b3': 2.0, 'b4': 30}, 'source_frame_masses': {'mminbh': 4.98, 'mmaxbh': 112.5, 'alpha': 3.78, 'mu_g': 32.27, 'sigma_g': 3.88, 'lambda_peak': 0.03, 'delta_m': 4.8, 'beta': 0.81}, 'zs': None, 'geocent_time': {'min_': 1238166018, 'max_': 1269702018}, 'ra': {'min_': 0.0, 'max_': 6.283185307179586}, 'dec': None, 'phase': {'min_': 0.0, 'max_': 6.283185307179586}, 'psi': {'min_': 0.0, 'max_': 3.141592653589793}, 'theta_jn': None}",
  'cosmology': 'LambdaCDM(H0=70.0 km / (Mpc s), Om0=0.3, Ode0=0.7, Tcmb0=0.0 K, Neff=3.04, m_nu=None, Ob0=None)',
  'spin_zero': 'True',
  'spin_precession': 'False',
  'interpolator_directory': './interpolator_pickle',
  'create_new_interpolator': 'False'},
 'snr_calculator_dict': {'npool': '4',
  'mtot_min': '2.0',
  'mtot_max': '184.98599853446768',
  'ratio_min': '0.1',
  'ratio_max': '1.0',
  'mtot_resolution': '500',
  'ratio_resolution': '50',
  'sampling_frequency': '2048.0',
  'waveform_approximant': 'IMRPhenomD',
  'minimum_frequency': '20.0',
  'snr_type': 'interpolation',
  'psds': "[PowerSpectralDensity(psd_file='None', asd_file='/Users/phurailatpamhemantakumar/anaconda3/envs/ler/lib/python3.10/site-packages/bilby/gw/detector/noise_curves/aLIGO_O4_high_asd.txt'), PowerSpectralDensity(psd_file='None', asd_file='/Users/phurailatpamhemantakumar/anaconda3/envs/ler/lib/python3.10/site-packages/bilby/gw/detector/noise_curves/aLIGO_O4_high_asd.txt'), PowerSpectralDensity(psd_file='None', asd_file='/Users/phurailatpamhemantakumar/anaconda3/envs/ler/lib/python3.10/site-packages/bilby/gw/detector/noise_curves/AdV_asd.txt')]",
  'ifos': 'None',
  'interpolator_dir': './interpolator_pickle',
  'create_new_interpolator': 'False',
  'gwsnr_verbose': 'True',
  'multiprocessing_verbose': 'True',
  'mtot_cut': 'True'},
 'detectable_gw_rate_per_year': 434.79616560249093,
 'detectability_condition': 'step_function'}
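  • individual entries can be read off this dict directly; for example, the stored rate result (using the key shown in the output above):

    print(ler_params['detectable_gw_rate_per_year'])  # 434.79616560249093
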
  • plotting the redshift distribution of event parameters

[18]:
import matplotlib.pyplot as plt
from ler.utils import plots as lerplt


# input param_dict can be either a dictionary or a json file name that contains the parameters
plt.figure(figsize=(6, 4))
lerplt.param_plot(
    param_name='zs',
    param_dict='ler_data/gw_param_detectable.json',
    plot_label='zs (detectable)',
)
lerplt.param_plot(
    param_name='zs',
    param_dict='ler_data/gw_param.json',
    plot_label='zs (all)',
)
plt.xlim(0.001,10)
plt.grid(alpha=0.4)
plt.show()
getting gw_params from json file ler_data/gw_param_detectable.json...
getting gw_params from json file ler_data/gw_param.json...
../../_images/examples_rates_GWRATES_complete_exmaples_15_1.png

Custom functions

  • ler allows internal model functions to be replaced with custom functions.

  • It also allows changing the default parameters of the existing model functions.

First, let's look at the input parameters available for ler.GWRATES. The input parameters can be divided into three categories:

  1. GWRATES set up params

  2. CBCSourceParameterDistribution set up params (as kwargs)

  3. GWSNR set up params (as kwargs)

[19]:
from ler.rates import GWRATES
# default initialization
ler = GWRATES(npool=4, verbose=True)
z_to_luminosity_distance interpolator will be loaded from ./interpolator_pickle/z_to_luminosity_distance/z_to_luminosity_distance_0.pickle
differential_comoving_volume interpolator will be loaded from ./interpolator_pickle/differential_comoving_volume/differential_comoving_volume_0.pickle
merger_rate_density_bbh_popI_II_oguri2018 interpolator will be loaded from ./interpolator_pickle/merger_rate_density_bbh_popI_II_oguri2018/merger_rate_density_bbh_popI_II_oguri2018_0.pickle
psds not given. Choosing bilby's default psds
npool:  4
snr type:  interpolation
waveform approximant:  IMRPhenomD
sampling frequency:  2048.0
minimum frequency (fmin):  20.0
mtot=mass1+mass2
min(mtot):  2.0
max(mtot) (with the given fmin=20.0): 184.98599853446768
detectors:  None
min(ratio):  0.1
max(ratio):  1.0
mtot resolution:  500
ratio resolution:  50
interpolator directory:  ./interpolator_pickle
Interpolator will be loaded for L1 detector from ./interpolator_pickle/L1/partialSNR_dict_0.pickle
Interpolator will be loaded for H1 detector from ./interpolator_pickle/H1/partialSNR_dict_0.pickle
Interpolator will be loaded for V1 detector from ./interpolator_pickle/V1/partialSNR_dict_0.pickle

 GWRATES set up params:
npool =  4
z_min =  0.0
z_max =  10.0
event_type =  BBH
size =  100000
batch_size =  50000
cosmology =  LambdaCDM(H0=70.0 km / (Mpc s), Om0=0.3, Ode0=0.7, Tcmb0=0.0 K, Neff=3.04, m_nu=None, Ob0=None)
snr_finder =  <bound method GWSNR.snr of <gwsnr.gwsnr.GWSNR object at 0x29cae4a60>>
json_file_names =  {'gwrates_param': 'gwrates_params.json', 'gw_param': 'gw_param.json', 'gw_param_detectable': 'gw_param_detectable.json'}
interpolator_directory =  ./interpolator_pickle

 GWRATES also takes CBCSourceParameterDistribution params as kwargs, as follows:
source_priors= {'merger_rate_density': 'merger_rate_density_bbh_popI_II_oguri2018', 'source_frame_masses': 'binary_masses_BBH_popI_II_powerlaw_gaussian', 'zs': 'sample_source_redshift', 'geocent_time': 'sampler_uniform', 'ra': 'sampler_uniform', 'dec': 'sampler_cosine', 'phase': 'sampler_uniform', 'psi': 'sampler_uniform', 'theta_jn': 'sampler_sine'}
source_priors_params= {'merger_rate_density': {'R0': 2.39e-08, 'b2': 1.6, 'b3': 2.0, 'b4': 30}, 'source_frame_masses': {'mminbh': 4.98, 'mmaxbh': 112.5, 'alpha': 3.78, 'mu_g': 32.27, 'sigma_g': 3.88, 'lambda_peak': 0.03, 'delta_m': 4.8, 'beta': 0.81}, 'zs': None, 'geocent_time': {'min_': 1238166018, 'max_': 1269702018}, 'ra': {'min_': 0.0, 'max_': 6.283185307179586}, 'dec': None, 'phase': {'min_': 0.0, 'max_': 6.283185307179586}, 'psi': {'min_': 0.0, 'max_': 3.141592653589793}, 'theta_jn': None}
spin_zero= True
spin_precession= False
create_new_interpolator= False

 GWRATES also takes GWSNR params as kwargs, as follows:
mtot_min =  2.0
mtot_max =  184.98599853446768
ratio_min =  0.1
ratio_max =  1.0
mtot_resolution =  500
ratio_resolution =  50
sampling_frequency =  2048.0
waveform_approximant =  IMRPhenomD
minimum_frequency =  20.0
snr_type =  interpolation
psds =  [PowerSpectralDensity(psd_file='None', asd_file='/Users/phurailatpamhemantakumar/anaconda3/envs/ler/lib/python3.10/site-packages/bilby/gw/detector/noise_curves/aLIGO_O4_high_asd.txt'), PowerSpectralDensity(psd_file='None', asd_file='/Users/phurailatpamhemantakumar/anaconda3/envs/ler/lib/python3.10/site-packages/bilby/gw/detector/noise_curves/aLIGO_O4_high_asd.txt'), PowerSpectralDensity(psd_file='None', asd_file='/Users/phurailatpamhemantakumar/anaconda3/envs/ler/lib/python3.10/site-packages/bilby/gw/detector/noise_curves/AdV_asd.txt')]
ifos =  None
interpolator_dir =  ./interpolator_pickle
create_new_interpolator =  False
gwsnr_verbose =  True
multiprocessing_verbose =  True
mtot_cut =  True

 For reference, the chosen source parameters are listed below:
merger_rate_density =  merger_rate_density_bbh_popI_II_oguri2018
merger_rate_density_params =  {'R0': 2.39e-08, 'b2': 1.6, 'b3': 2.0, 'b4': 30}
source_frame_masses =  binary_masses_BBH_popI_II_powerlaw_gaussian
source_frame_masses_params =  {'mminbh': 4.98, 'mmaxbh': 112.5, 'alpha': 3.78, 'mu_g': 32.27, 'sigma_g': 3.88, 'lambda_peak': 0.03, 'delta_m': 4.8, 'beta': 0.81}
geocent_time =  sampler_uniform
geocent_time_params =  {'min_': 1238166018, 'max_': 1269702018}
ra =  sampler_uniform
ra_params =  {'min_': 0.0, 'max_': 6.283185307179586}
dec =  sampler_cosine
dec_params =  None
phase =  sampler_uniform
phase_params =  {'min_': 0.0, 'max_': 6.283185307179586}
psi =  sampler_uniform
psi_params =  {'min_': 0.0, 'max_': 3.141592653589793}
theta_jn =  sampler_sine
theta_jn_params =  None

As an example, I will change:

  • merger_rate_density_params' default value of the local merger rate density (\(R_0\)) to 2.9e-8 \(Mpc^{-3} yr^{-1}\), while still using the default merger_rate_density function, 'merger_rate_density_bbh_popI_II_oguri2018'.

  • source_frame_masses to a custom function. This is similar to the internal default function, i.e. the PowerLaw+Peak model; I am using gwcosmo's powerlaw_gaussian prior for this example.

  • gwsnr parameters: by default, gwsnr uses the 'IMRPhenomD' waveform model with no spin, and it finds the 'snr' via interpolation, which is very fast. For the example below, I am instead using 'IMRPhenomXPHM' with precessing spins, computed with the inner product method rather than interpolation; it will be slower.

Note: All custom functions should have 'size' as their only input argument.

[20]:
from gwcosmo import priors as p

# define your custom function of mass_1_source and mass_2_source calculation
# it should have 'size' as the only argument
def powerlaw_peak(size):
    """
    Function to sample mass1 and mass2 from a powerlaw with a gaussian peak

    Parameters
    ----------
    size : `int`
        Number of samples to draw

    Returns
    -------
    mass_1_source : `numpy.ndarray`
        Array of mass1 samples
    mass_2_source : `numpy.ndarray`
        Array of mass2 samples
    """


    # below is the gwcosmo default values
    mminbh=4.98  # Minimum mass of the black hole (Msun)
    mmaxbh=86.22  # Maximum mass of the black hole (Msun)
    alpha=2.63  # Spectral index for the powerlaw of the primary mass distribution
    mu_g=33.07  # Mean of the Gaussian component in the primary mass distribution
    sigma_g=5.69  # Width of the Gaussian component in the primary mass distribution
    lambda_peak=0.10  # Fraction of the model in the Gaussian component
    delta_m=4.82  # Range of mass tapering on the lower end of the mass distribution
    beta=1.26  # Spectral index for the powerlaw of the mass ratio distribution

    model = p.BBH_powerlaw_gaussian(
        mminbh=mminbh,
        mmaxbh=mmaxbh,
        alpha=alpha,
        mu_g=mu_g,
        sigma_g=sigma_g,
        lambda_peak=lambda_peak,
        delta_m=delta_m,
        beta=beta,
    )
    # sample mass1 and mass2
    mass_1_source, mass_2_source = model.sample(Nsample=size)

    return (mass_1_source, mass_2_source)
  • Initialize the class with the custom function

  • changing GWRATES input params

[21]:
ler = GWRATES(npool=4, verbose=False,
    source_priors=dict(
        merger_rate_density='merger_rate_density_bbh_popI_II_oguri2018',
        source_frame_masses=powerlaw_peak,
    ),
   source_priors_params=dict(
        merger_rate_density=dict(
            R0=2.9e-08,
            b2=1.6,
            b3=2.0,
            b4=30
        ),
        source_frame_masses=dict(
            mminbh=4.98,
            mmaxbh=86.22,
            alpha=2.63,
            mu_g=33.07,
            sigma_g=5.69,
            lambda_peak=0.10,
            delta_m=4.82,
            beta=1.26
        ),
   ),
   waveform_approximant = 'IMRPhenomXPHM',
   snr_type='inner_product',
   spin_zero=False,
   spin_precession=True,
)
  • simulation of the GW CBC (compact-binary-coalescence) population

  • you can set the batch size at the GWRATES initialization too.

  • or you can set it using the batch_size instance variable (see the sketch below).

  • if the sample size is large, a larger batch size is recommended, but it will consume more RAM, and beyond a point your machine's individual cores won't be able to handle it.
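
  • both ways of setting the batch size, as a minimal sketch:

    ler = GWRATES(verbose=False, batch_size=100000)  # at initialization
    ler.batch_size = 50000                           # or via the instance attribute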

[22]:
ler.batch_size = 50000
ler.gw_cbc_statistics(size=100000, resume=False, output_jsonfile = 'new_gw_params.json');
Simulated GW params will be stored in ./ler_data/new_gw_params.json
chosen batch size = 50000 with total size = 100000
There will be 2 batche(s)
Batch no. 1
sampling gw source params...
calculating snrs...
solving SNR with inner product
100%|████████████████████████████████████████████████████████| 40696/40696 [00:56<00:00, 715.21it/s]
Batch no. 2
sampling gw source params...
calculating snrs...
solving SNR with inner product
100%|████████████████████████████████████████████████████████| 40758/40758 [00:55<00:00, 736.63it/s]
saving all gw_params in ./ler_data/new_gw_params.json...
  • you can also access the generated parameters using an instance attribute. This instance attribute reads the dict from the json file generated above.

  • This is for spin-precessing systems, and it will contain extra (spin-related) parameters.

[23]:
param = ler.gw_param
print(param.keys())
dict_keys(['zs', 'geocent_time', 'ra', 'dec', 'phase', 'psi', 'theta_jn', 'a_1', 'a_2', 'tilt_1', 'tilt_2', 'phi_12', 'phi_jl', 'luminosity_distance', 'mass_1_source', 'mass_2_source', 'mass_1', 'mass_2', 'L1', 'H1', 'V1', 'optimal_snr_net'])
  • generate detectable events

  • and get the rate of detectable events

  • You have two choices for inputting the generated parameters: either as a json file name or as a dict (see the sketch below)
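
  • passing the dict directly looks like this minimal sketch (ler.gw_param reads the dict from the json file generated above):

    rate, param_detectable = ler.gw_rate(
        gw_param=ler.gw_param,  # a parameter dict instead of a json file name
        snr_threshold=8.0,
    )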

[24]:
# ler.gw_rate();  # this is shorthand for the following
rate, param_detectable = ler.gw_rate(
    gw_param='new_gw_params.json',
    snr_threshold=8.0,
    output_jsonfile='new_gw_params_detectable.json',
)
getting gw_params from json file ./ler_data/new_gw_params.json...
total gw rate (yr^-1) (with step function): 914.1071767309512
number of simulated gw detectable events: 883
number of all simulated gw events: 100000
storing detectable gw params in ./ler_data/new_gw_params_detectable.json
  • to check all the stored json file names

[25]:
ler.json_file_names
[25]:
{'gwrates_param': 'gwrates_params.json',
 'gw_param': 'new_gw_params.json',
 'gw_param_detectable': 'new_gw_params_detectable.json'}
  • again, you can access the generated detectable events' parameters using an instance attribute

  • Note: The data is not stored in this instance attribute; it is stored in the json file. This scheme is used to save RAM.

[26]:
param_detectable = ler.gw_param_detectable
print(param_detectable.keys())
dict_keys(['zs', 'geocent_time', 'ra', 'dec', 'phase', 'psi', 'theta_jn', 'a_1', 'a_2', 'tilt_1', 'tilt_2', 'phi_12', 'phi_jl', 'luminosity_distance', 'mass_1_source', 'mass_2_source', 'mass_1', 'mass_2', 'L1', 'H1', 'V1', 'optimal_snr_net'])

How to look for available model functions?

  • All available names are stored as a dict in the ler instance

  • the keys of this dict show the parameter types

  • the values are also dicts, where the keys are the model function names and the values are their input parameters
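
  • to list the parameter types (the dict keys), a one-line sketch:

    list(ler.available_gw_prior_list_and_its_params.keys())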

[27]:
# let's look at one of the dict key
ler.available_gw_prior_list_and_its_params['source_frame_masses']
[27]:
{'binary_masses_BBH_popI_II_powerlaw_gaussian': {'mminbh': 4.98,
  'mmaxbh': 112.5,
  'alpha': 3.78,
  'mu_g': 32.27,
  'sigma_g': 3.88,
  'lambda_peak': 0.03,
  'delta_m': 4.8,
  'beta': 0.81},
 'binary_masses_BBH_popIII_lognormal': {'Mc': 30.0, 'sigma': 0.3, 'beta': 1.1},
 'binary_masses_BBH_primordial_lognormal': {'Mc': 30.0,
  'sigma': 0.3,
  'beta': 1.1},
 'binary_masses_BNS_gwcosmo': {'mminns': 1.0, 'mmaxns': 3.0, 'alphans': 0.0},
 'binary_masses_BNS_bimodal': {'w': 0.643,
  'muL': 1.352,
  'sigmaL': 0.08,
  'muR': 1.88,
  'sigmaR': 0.3,
  'mmin': 1.0,
  'mmax': 2.3}}
  • to look at the chosen models and their input parameters

[28]:
print(ler.gw_param_samplers)
print(ler.gw_param_samplers_params)
{'merger_rate_density': 'merger_rate_density_bbh_popI_II_oguri2018', 'source_frame_masses': <function powerlaw_peak at 0x2ea7eadd0>, 'zs': 'sample_source_redshift', 'geocent_time': 'sampler_uniform', 'ra': 'sampler_uniform', 'dec': 'sampler_cosine', 'phase': 'sampler_uniform', 'psi': 'sampler_uniform', 'theta_jn': 'sampler_sine', 'a_1': 'sampler_uniform', 'a_2': 'sampler_uniform', 'tilt_1': 'sampler_sine', 'tilt_2': 'sampler_sine', 'phi_12': 'sampler_uniform', 'phi_jl': 'sampler_uniform'}
{'merger_rate_density': {'R0': 2.9e-08, 'b2': 1.6, 'b3': 2.0, 'b4': 30}, 'source_frame_masses': {'mminbh': 4.98, 'mmaxbh': 86.22, 'alpha': 2.63, 'mu_g': 33.07, 'sigma_g': 5.69, 'lambda_peak': 0.1, 'delta_m': 4.82, 'beta': 1.26}, 'zs': None, 'geocent_time': {'min_': 1238166018, 'max_': 1269702018}, 'ra': {'min_': 0.0, 'max_': 6.283185307179586}, 'dec': None, 'phase': {'min_': 0.0, 'max_': 6.283185307179586}, 'psi': {'min_': 0.0, 'max_': 3.141592653589793}, 'theta_jn': None, 'a_1': {'min_': 0.0, 'max_': 0.8}, 'a_2': {'min_': 0.0, 'max_': 0.8}, 'tilt_1': None, 'tilt_2': None, 'phi_12': {'min_': 0, 'max_': 6.283185307179586}, 'phi_jl': {'min_': 0, 'max_': 6.283185307179586}}

Using internal model functions

Mass distribution of BBH (mass-1, larger mass only)

  • compare the default mass distribution with the custom mass distribution

[29]:
# calling the default mass distribution model
mass_1_source, mass_2_source = ler.binary_masses_BBH_popI_II_powerlaw_gaussian(size=10000)
default_model_dict = dict(mass_1_source=mass_1_source)

# calling the custom mass distribution model
mass_1_source, mass_2_source = powerlaw_peak(size=10000)
custom_model_dict = dict(mass_1_source=mass_1_source)
  • let's do a comparison plot between your custom model and the default model

[30]:
import matplotlib.pyplot as plt
from ler.utils import plots as lerplt

# comparison plot between your custom model and the default model
plt.figure(figsize=(6, 4))
lerplt.param_plot(
    param_name="mass_1_source",
    param_dict=custom_model_dict, # or the json file name
    plot_label='custom model',
);
lerplt.param_plot(
    param_name="mass_1_source",
    param_dict=default_model_dict,
    plot_label='default model',
);
plt.xlabel(r'source $m_1$ ($M_{\odot}$)')
plt.ylabel(r'$p(m_1^{src})$')
plt.xlim(0,60)
plt.grid(alpha=0.4)
plt.show()
../../_images/examples_rates_GWRATES_complete_exmaples_40_0.png

Selecting particular number of detectable events

  • this is particularly useful when you want only the detectable events to be saved in the json file

  • the detectable event rate is calculated at each batch. Each subsequent batch also accounts for the previous batches' detectable events, so the rate becomes more accurate as the iterations increase and converges to a stable value at larger sample sizes.

  • you can resume the rate calculation from the last saved batch.

[31]:
from ler.rates import GWRATES
ler = GWRATES(verbose=False)
  • sampling until the desired number of detectable events is found

[32]:
n_size_param = ler.selecting_n_gw_detectable_events(
    size=5000,
    snr_threshold=8.0,
    batch_size=50000,
    resume=False,
    output_jsonfile='gw_params_n_detectable.json',
    meta_data_file="meta_gw.json",
    )
collected number of detectable events =  0
collected number of detectable events =  209
total number of events =  50000
total gw rate (yr^-1): 432.7257076710505
collected number of detectable events =  408
total number of events =  100000
total gw rate (yr^-1): 422.3734180138484
collected number of detectable events =  613
total number of events =  150000
total gw rate (yr^-1): 423.0635706576618
collected number of detectable events =  867
total number of events =  200000
total gw rate (yr^-1): 448.7717566397139
collected number of detectable events =  1058
total number of events =  250000
total gw rate (yr^-1): 438.1088982927956
collected number of detectable events =  1283
total number of events =  300000
total gw rate (yr^-1): 442.732921006346
collected number of detectable events =  1484
total number of events =  350000
total gw rate (yr^-1): 438.93708146537176
collected number of detectable events =  1699
total number of events =  400000
total gw rate (yr^-1): 439.71350318966194
collected number of detectable events =  1896
total number of events =  450000
total gw rate (yr^-1): 436.1764708901179
collected number of detectable events =  2103
total number of events =  500000
total gw rate (yr^-1): 435.41730298192306
collected number of detectable events =  2313
total number of events =  550000
total gw rate (yr^-1): 435.3608359474292
collected number of detectable events =  2526
total number of events =  600000
total gw rate (yr^-1): 435.8313945682111
collected number of detectable events =  2732
total number of events =  650000
total gw rate (yr^-1): 435.1146975919433
collected number of detectable events =  2959
total number of events =  700000
total gw rate (yr^-1): 437.60607279516006
collected number of detectable events =  3150
total number of events =  750000
total gw rate (yr^-1): 434.79616560249093
collected number of detectable events =  3363
total number of events =  800000
total gw rate (yr^-1): 435.18437646463605
collected number of detectable events =  3549
total number of events =  850000
total gw rate (yr^-1): 432.2385410989469
collected number of detectable events =  3752
total number of events =  900000
total gw rate (yr^-1): 431.57545326469466
collected number of detectable events =  3954
total number of events =  950000
total gw rate (yr^-1): 430.8731926797617
collected number of detectable events =  4158
total number of events =  1000000
total gw rate (yr^-1): 430.44820394646604
collected number of detectable events =  4392
total number of events =  1050000
total gw rate (yr^-1): 433.021487375542
collected number of detectable events =  4595
total number of events =  1100000
total gw rate (yr^-1): 432.44337249858137
collected number of detectable events =  4786
total number of events =  1150000
total gw rate (yr^-1): 430.8352895597353
collected number of detectable events =  5022
total number of events =  1200000
total gw rate (yr^-1): 433.2433221539106
storing detectable gw params in ./ler_data/gw_params_n_detectable.json

 trmming final result to size=5000

Important Note: At each iteration, the rate is calculated using the cumulatively increasing number of events. It becomes stable at around 2 million events, which is roughly the number of events required to obtain a stable rate.

  • Now get the sampled (detectable) events.

[33]:
print(n_size_param.keys())
print(f"size of each parameters={len(n_size_param['zs'])}")
dict_keys(['zs', 'geocent_time', 'ra', 'dec', 'phase', 'psi', 'theta_jn', 'luminosity_distance', 'mass_1_source', 'mass_2_source', 'mass_1', 'mass_2', 'L1', 'H1', 'V1', 'optimal_snr_net'])
size of each parameters=5000
  • let’s see the meta file

[34]:
from ler.utils import load_json

meta = load_json('ler_data/meta_gw.json')
print(meta)
{'events_total': [50000, 100000, 150000, 200000, 250000, 300000, 350000, 400000, 450000, 500000, 550000, 600000, 650000, 700000, 750000, 800000, 850000, 900000, 950000, 1000000, 1050000, 1100000, 1150000, 1200000], 'detectable_events': [209.0, 408.0, 613.0, 867.0, 1058.0, 1283.0, 1484.0, 1699.0, 1896.0, 2103.0, 2313.0, 2526.0, 2732.0, 2959.0, 3150.0, 3363.0, 3549.0, 3752.0, 3954.0, 4158.0, 4392.0, 4595.0, 4786.0, 5022.0], 'total_rate': [432.7257076710505, 422.3734180138484, 423.0635706576618, 448.7717566397139, 438.1088982927956, 442.732921006346, 438.93708146537176, 439.71350318966194, 436.1764708901179, 435.41730298192306, 435.3608359474292, 435.8313945682111, 435.1146975919433, 437.60607279516006, 434.79616560249093, 435.18437646463605, 432.2385410989469, 431.57545326469466, 430.8731926797617, 430.44820394646604, 433.021487375542, 432.44337249858137, 430.8352895597353, 433.2433221539106]}
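  • as a quick convergence check, the running rate estimate can be plotted from the meta file; a minimal sketch using the 'events_total' and 'total_rate' keys shown above:

    import matplotlib.pyplot as plt

    plt.figure(figsize=(6, 4))
    plt.plot(meta['events_total'], meta['total_rate'], marker='o')
    plt.xlabel('cumulative number of simulated events')
    plt.ylabel(r'rate estimate ($yr^{-1}$)')
    plt.grid(alpha=0.4)
    plt.show()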

Utilizing the ANN feature of the gwsnr

  • This is particularly useful when you are considering spin-precessing systems.

  • the inner product method is slow; the ANN method is a faster alternative.

  • It is not very accurate in calculating the absolute SNR value, but it is fine for checking the detectability of events (accuracy>98%).

  • The example below is useful when you want the accurate SNR value only for the detectable events.

[35]:
from ler.rates import GWRATES
# class initialization
ler = GWRATES(verbose=False, batch_size=100000, snr_type='ann', waveform_approximant='IMRPhenomXPHM')
[36]:
# event sampling
# make size=1000000 for better statistics
ler.gw_cbc_statistics(size=200000, resume=False, output_jsonfile = 'gw_params.json');
Simulated GW params will be stored in ./ler_data/gw_params.json
chosen batch size = 100000 with total size = 200000
There will be 2 batche(s)
Batch no. 1
sampling gw source params...
calculating snrs...
Batch no. 2
sampling gw source params...
calculating snrs...
saving all gw_params in ./ler_data/gw_params.json...
[37]:
# rate calculation
# snr recalculation
ler.gw_rate(snr_threshold=8.0,
            output_jsonfile='gw_params_detectable.json',
            snr_recalculation=True,  # if True, it will recalculate the SNR with inner product method
            threshold_snr_recalculation=6.0,  # re-calculate SNR for events with SNR greater than this value
            );
getting gw_params from json file ./ler_data/gw_params.json...
100%|██████████████████████████████████████████████████████████| 2254/2254 [00:06<00:00, 334.15it/s]
total gw rate (yr^-1) (with step function): 466.3706490569575
number of simulated gw detectable events: 901
number of all simulated gw events: 200000
storing detectable gw params in ./ler_data/gw_params_detectable.json
  • Now let's repeat the above example with the inner product method to check the accuracy of the ANN method.

  • This will take a while to run.

  • as of 24.02.2024, ANN is available only for the default detector network and psds.

  • contact the author for a custom detector network and psds, or do it yourself by following gwsnr's ANN model development example.

[38]:
from ler.rates import GWRATES
# class initialization
ler = GWRATES(verbose=False, batch_size=100000, snr_type='inner_product', waveform_approximant='IMRPhenomXPHM')
# event sampling
ler.gw_cbc_statistics(size=200000, resume=False, output_jsonfile = 'gw_params.json');
# rate calculation
ler.gw_rate(snr_threshold=8.0,
            output_jsonfile='gw_params_detectable.json',
            );
Simulated GW params will be stored in ./ler_data/gw_params.json
chosen batch size = 100000 with total size = 200000
There will be 2 batche(s)
Batch no. 1
sampling gw source params...
calculating snrs...
solving SNR with inner product
100%|████████████████████████████████████████████████████████| 92509/92509 [08:20<00:00, 184.77it/s]
Batch no. 2
sampling gw source params...
calculating snrs...
solving SNR with inner product
100%|████████████████████████████████████████████████████████| 92506/92506 [01:56<00:00, 795.14it/s]
saving all gw_params in ./ler_data/gw_params.json...
getting gw_params from json file ./ler_data/gw_params.json...
total gw rate (yr^-1) (with step function): 443.07799732825265
number of simulated gw detectable events: 856
number of all simulated gw events: 200000
storing detectable gw params in ./ler_data/gw_params_detectable.json
  • close enough

  • For closer agreement, you can increase the number of events to 1,000,000 or more (see the sketch below).
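
  • a sketch of extending the existing run with more samples (resume=True continues from the saved batches):

    ler.gw_cbc_statistics(size=1000000, resume=True, output_jsonfile='gw_params.json');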

BNS (Binary Neutron Star) example

  • All you need to change is event_type in the class initialization to 'BNS'.

  • But in this example, I will also change the detector network to ['CE', 'ET']. These are future 3rd-generation detectors. Since they are more sensitive, I will extend the redshift range to 0-20 (z_max=20).

  • The default mass distribution model has a mass cutoff of 2.3 Msun, so the maximum possible redshifted total mass is (2.3+2.3)*(1+z_max)=96.6 (checked in the short sketch below). This allows gwsnr to build a good interpolation for the SNR values.

  • The differences between the default BNS and BBH models can be seen in the verbose output of the cell below:
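
  • the mtot_max arithmetic from above, as a quick check:

    z_max = 20
    mmax = 2.3                              # Msun, BNS mass-model cutoff
    mtot_max = (mmax + mmax) * (1 + z_max)  # = 96.6 Msun, max redshifted total mass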

[39]:
from ler.rates import GWRATES

ler = GWRATES(event_type='BNS', ifos=['CE', 'ET'], z_min=0, z_max=20, mtot_max=96.6, verbose=True)
z_to_luminosity_distance interpolator will be loaded from ./interpolator_pickle/z_to_luminosity_distance/z_to_luminosity_distance_1.pickle
differential_comoving_volume interpolator will be loaded from ./interpolator_pickle/differential_comoving_volume/differential_comoving_volume_2.pickle
merger_rate_density_bbh_popI_II_oguri2018 interpolator will be loaded from ./interpolator_pickle/merger_rate_density_bbh_popI_II_oguri2018/merger_rate_density_bbh_popI_II_oguri2018_2.pickle
binary_masses_BNS_bimodal interpolator will be loaded from ./interpolator_pickle/binary_masses_BNS_bimodal/binary_masses_BNS_bimodal_0.pickle
npool:  4
snr type:  interpolation
waveform approximant:  IMRPhenomD
sampling frequency:  2048.0
minimum frequency (fmin):  20.0
mtot=mass1+mass2
min(mtot):  2.0
max(mtot) (with the given fmin=20.0): 96.6
detectors:  None
min(ratio):  0.1
max(ratio):  1.0
mtot resolution:  500
ratio resolution:  50
interpolator directory:  ./interpolator_pickle
Interpolator will be loaded for CE detector from ./interpolator_pickle/CE/partialSNR_dict_0.pickle
Interpolator will be loaded for ET1 detector from ./interpolator_pickle/ET1/partialSNR_dict_0.pickle
Interpolator will be loaded for ET2 detector from ./interpolator_pickle/ET2/partialSNR_dict_0.pickle
Interpolator will be loaded for ET3 detector from ./interpolator_pickle/ET3/partialSNR_dict_0.pickle

 GWRATES set up params:
npool =  4
z_min =  0
z_max =  20
event_type =  BNS
size =  100000
batch_size =  50000
cosmology =  LambdaCDM(H0=70.0 km / (Mpc s), Om0=0.3, Ode0=0.7, Tcmb0=0.0 K, Neff=3.04, m_nu=None, Ob0=None)
snr_finder =  <bound method GWSNR.snr of <gwsnr.gwsnr.GWSNR object at 0x2901a98a0>>
json_file_names =  {'gwrates_param': 'gwrates_params.json', 'gw_param': 'gw_param.json', 'gw_param_detectable': 'gw_param_detectable.json'}
interpolator_directory =  ./interpolator_pickle

 GWRATES also takes CBCSourceParameterDistribution params as kwargs, as follows:
source_priors= {'merger_rate_density': 'merger_rate_density_bbh_popI_II_oguri2018', 'source_frame_masses': 'binary_masses_BNS_bimodal', 'zs': 'sample_source_redshift', 'geocent_time': 'sampler_uniform', 'ra': 'sampler_uniform', 'dec': 'sampler_cosine', 'phase': 'sampler_uniform', 'psi': 'sampler_uniform', 'theta_jn': 'sampler_sine'}
source_priors_params= {'merger_rate_density': {'R0': 1.0550000000000001e-07, 'b2': 1.6, 'b3': 2.0, 'b4': 30}, 'source_frame_masses': {'w': 0.643, 'muL': 1.352, 'sigmaL': 0.08, 'muR': 1.88, 'sigmaR': 0.3, 'mmin': 1.0, 'mmax': 2.3}, 'zs': None, 'geocent_time': {'min_': 1238166018, 'max_': 1269702018}, 'ra': {'min_': 0.0, 'max_': 6.283185307179586}, 'dec': None, 'phase': {'min_': 0.0, 'max_': 6.283185307179586}, 'psi': {'min_': 0.0, 'max_': 3.141592653589793}, 'theta_jn': None}
spin_zero= True
spin_precession= False
create_new_interpolator= False

 GWRATES also takes GWSNR params as kwargs, as follows:
mtot_min =  2.0
mtot_max =  96.6
ratio_min =  0.1
ratio_max =  1.0
mtot_resolution =  500
ratio_resolution =  50
sampling_frequency =  2048.0
waveform_approximant =  IMRPhenomD
minimum_frequency =  20.0
snr_type =  interpolation
psds =  [PowerSpectralDensity(psd_file='/Users/phurailatpamhemantakumar/anaconda3/envs/ler/lib/python3.10/site-packages/bilby/gw/detector/noise_curves/CE_psd.txt', asd_file='None'), PowerSpectralDensity(psd_file='/Users/phurailatpamhemantakumar/anaconda3/envs/ler/lib/python3.10/site-packages/bilby/gw/detector/noise_curves/ET_D_psd.txt', asd_file='None'), PowerSpectralDensity(psd_file='/Users/phurailatpamhemantakumar/anaconda3/envs/ler/lib/python3.10/site-packages/bilby/gw/detector/noise_curves/ET_D_psd.txt', asd_file='None'), PowerSpectralDensity(psd_file='/Users/phurailatpamhemantakumar/anaconda3/envs/ler/lib/python3.10/site-packages/bilby/gw/detector/noise_curves/ET_D_psd.txt', asd_file='None')]
ifos =  ['CE', 'ET']
interpolator_dir =  ./interpolator_pickle
create_new_interpolator =  False
gwsnr_verbose =  True
multiprocessing_verbose =  True
mtot_cut =  True

 For reference, the chosen source parameters are listed below:
merger_rate_density =  merger_rate_density_bbh_popI_II_oguri2018
merger_rate_density_params =  {'R0': 1.0550000000000001e-07, 'b2': 1.6, 'b3': 2.0, 'b4': 30}
source_frame_masses =  binary_masses_BNS_bimodal
source_frame_masses_params =  {'w': 0.643, 'muL': 1.352, 'sigmaL': 0.08, 'muR': 1.88, 'sigmaR': 0.3, 'mmin': 1.0, 'mmax': 2.3}
geocent_time =  sampler_uniform
geocent_time_params =  {'min_': 1238166018, 'max_': 1269702018}
ra =  sampler_uniform
ra_params =  {'min_': 0.0, 'max_': 6.283185307179586}
dec =  sampler_cosine
dec_params =  None
phase =  sampler_uniform
phase_params =  {'min_': 0.0, 'max_': 6.283185307179586}
psi =  sampler_uniform
psi_params =  {'min_': 0.0, 'max_': 3.141592653589793}
theta_jn =  sampler_sine
theta_jn_params =  None
[40]:
ler.batch_size = 100000
param = ler.gw_cbc_statistics(size=200000, resume=False, save_batch=False)
Simulated GW params will be stored in ./ler_data/gw_param.json
chosen batch size = 100000 with total size = 200000
There will be 2 batche(s)
Batch no. 1
sampling gw source params...
calculating snrs...
Batch no. 2
sampling gw source params...
calculating snrs...
saving all gw_params in ./ler_data/gw_param.json...
[41]:
ler.gw_rate();
getting gw_params from json file ./ler_data/gw_param.json...
total gw rate (yr^-1) (with step function): 37728.84893984636
number of simulated gw detectable events: 72523
number of all simulated gw events: 200000
storing detectable gw params in ./ler_data/gw_param.json
  • Let’s plot the redshift and mass distribution

[42]:
import matplotlib.pyplot as plt
# ler.utils has a function for plotting histograms and KDEs
from ler.utils import plots as lerplt

params = ler.gw_param
params_detectable = ler.gw_param_detectable
[43]:
# sample source redshifts (source frame)
zs = params['zs']
zs_detectable = params_detectable['zs']

plt.figure(figsize=(6, 4))
lerplt.param_plot(
    param_name="zs",
    param_dict=params, # or the json file name
    plot_label='all',
    histogram=False,
);
lerplt.param_plot(
    param_name="zs",
    param_dict=params_detectable,
    plot_label='detectable',
    histogram=False,
);
plt.xlabel(r'source redshift ($z_s$)')
plt.ylabel(r'$p(z_s)$')
plt.xlim(0,15)
plt.grid(alpha=0.4)
plt.show()

../../_images/examples_rates_GWRATES_complete_exmaples_63_0.png
  • From this result, you can see that z_max (as well as mtot_max) could be set lower.

  • now, let's plot the source mass distribution (source frame).

[45]:
plt.figure(figsize=(6, 4))
lerplt.param_plot(
    param_name="mass_1_source",
    param_dict=params, # or the json file name
    plot_label=r"$m_1^{src}$",
    histogram=False,
);
lerplt.param_plot(
    param_name="mass_2_source",
    param_dict=params,
    plot_label=r"$m_2^{src}$",
    histogram=False,
);
plt.xlabel(r'source mass ($m^{src}$) [$M_{\odot}$]')
plt.ylabel(r'$p(m^{src})$')
plt.xlim(1,2.5)
plt.grid(alpha=0.4)
plt.show()
../../_images/examples_rates_GWRATES_complete_exmaples_65_0.png

Using custom detection criteria

  • let’s replace the default calculator in GWRATES with a ‘pdet’ calculator.

  • I choose a 'pdet' calculator from gwsnr that checks whether the event has snr>8 or not.

[1]:
from ler.rates import GWRATES
from gwsnr import GWSNR
import numpy as np
  • let's look at the input and output of the 'pdet' calculator

[2]:
snr_ = GWSNR(gwsnr_verbose=False, pdet=True)
psds not given. Choosing bilby's default psds
Interpolator will be loaded for L1 detector from ./interpolator_pickle/L1/partialSNR_dict_0.pickle
Interpolator will be loaded for H1 detector from ./interpolator_pickle/H1/partialSNR_dict_0.pickle
Interpolator will be loaded for V1 detector from ./interpolator_pickle/V1/partialSNR_dict_0.pickle
[3]:
# initialize the pdet calculator
pdet_calculator = snr_.snr

# test
pdet_calculator(
    gw_param_dict=dict(
        mass_1=np.array([10.0]),
        mass_2=np.array([10.0]),
    )
)
[3]:
{'L1': array([1]), 'H1': array([1]), 'V1': array([1]), 'pdet_net': array([1])}

Note: the pdet function takes 'gw_param_dict' as input and returns a dict containing 'pdet_net'.
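
Any callable with this contract should work as a pdet_finder. A minimal hypothetical sketch (the function name my_pdet and the distance cut are made up for illustration; they are not part of ler or gwsnr):

    import numpy as np

    def my_pdet(gw_param_dict):
        # toy criterion: call everything within 1000 Mpc detectable
        dl = np.asarray(gw_param_dict['luminosity_distance'])
        pdet = (dl < 1000.0).astype(int)
        return {'pdet_net': pdet}

    # ler = GWRATES(verbose=False, pdet_finder=my_pdet)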

[4]:
ler = GWRATES(verbose=False, pdet_finder=pdet_calculator)
[5]:
gw_param = ler.gw_cbc_statistics();
Simulated GW params will be stored in ./ler_data/gw_param.json
chosen batch size = 50000 with total size = 100000
There will be 2 batche(s)
Batch no. 1
sampling gw source params...
calculating pdet...
Batch no. 2
sampling gw source params...
calculating pdet...
saving all gw_params in ./ler_data/gw_param.json...
[6]:
gw_param.keys()
[6]:
dict_keys(['zs', 'geocent_time', 'ra', 'dec', 'phase', 'psi', 'theta_jn', 'luminosity_distance', 'mass_1_source', 'mass_2_source', 'mass_1', 'mass_2', 'L1', 'H1', 'V1', 'pdet_net'])
[7]:
_, gw_param_detectable = ler.gw_rate()
getting gw_params from json file ./ler_data/gw_param.json...
given detectability_condition == 'pdet'
total gw rate (yr^-1) (with step function): 460.67688974549634
number of simulated gw detectable events: 445
number of all simulated gw events: 100000
storing detectable gw params in ./ler_data/gw_param.json