Example
In this tutorial you will learn how to optimize the selection of the decay of a rare particle produced in proton-proton and heavy-ion collisions at the Large Hadron Collider at CERN. The goal of the exercise is to measure the production rate of these particles in proton-proton and heavy-ion collisions, which carries fundamental information about the underlying production mechanisms.
Lambda(c) baryons are short-lived particles: they decay a tiny fraction of a second after their production, very close to the main interaction point of the collision. Experimentally, they can be identified by looking at their decay products. The main challenge for us is being able to discriminate between:
- real (or signal) candidates
- fake (background) candidates, which do not come from a real decay but are produced by the random association of uncorrelated particles

As shown in the figure above, the signal component is visible as a peak in the invariant-mass histogram. We want to enhance this peak and, in particular, increase the signal-over-background ratio.
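To make the idea concrete, here is a minimal sketch of how such a peak can be quantified: a Gaussian signal on top of a flat background is fitted to a toy invariant-mass histogram. All numbers (peak position, yields) are illustrative placeholders, not the actual Lambda(c) analysis values.

```python
# Sketch: quantify a signal peak over background in an invariant-mass
# histogram by fitting Gaussian + constant background to toy data.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(42)

# Toy sample: Gaussian "signal" near 2.29 GeV/c^2 on a flat "background"
# (values chosen for illustration only).
signal = rng.normal(2.29, 0.01, 500)
background = rng.uniform(2.1, 2.5, 5000)
masses = np.concatenate([signal, background])

counts, edges = np.histogram(masses, bins=80, range=(2.1, 2.5))
centers = 0.5 * (edges[:-1] + edges[1:])

def model(x, n_sig, mu, sigma, c0):
    """Gaussian peak plus constant background."""
    return n_sig * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + c0

popt, _ = curve_fit(model, centers, counts, p0=[100, 2.29, 0.01, 60])
n_sig, mu, sigma, c0 = popt
print(f"fitted peak position: {mu:.3f} GeV/c^2")
```

A tighter selection would raise the peak relative to the flat component, which is exactly what the machine-learning optimisation below aims at.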
Since we extract the amount of signal on a statistical basis (via a fit), we cannot know in data whether a given candidate is a signal or a background candidate. We can, however, use a trick to prepare our machine-learning sample: we take signal candidates from dedicated simulations, in which signal candidates can be identified unambiguously, and background candidates from real data after excluding the region where real signal can be present (green regions in the figure).
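The side-band trick above can be sketched in a few lines. The signal window used here is a hypothetical placeholder, not the value used in the analysis:

```python
# Sketch of the side-band trick: background candidates are taken from
# data *outside* an assumed signal mass window.
import numpy as np

rng = np.random.default_rng(0)
inv_mass = rng.uniform(2.1, 2.5, 10000)   # toy data masses, GeV/c^2

signal_lo, signal_hi = 2.27, 2.31          # hypothetical signal window
in_sideband = (inv_mass < signal_lo) | (inv_mass > signal_hi)

background_candidates = inv_mass[in_sideband]
print(len(background_candidates), "background candidates selected")
```

Every candidate kept this way is guaranteed to lie outside the (assumed) signal region, so it can safely be labelled as background for training.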
Selection variables (or features) can be used to discriminate between signal and background candidates.
As shown in the plot below, the distributions of the selection variables can be significantly different for signal and background candidates. We want to define an optimal selection that uses all the useful selection variables to increase the purity of our signal sample. The complete list of selection variables for this analysis (and for all the others) can be found in our database file Lambda(c) variables
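A quick way to see which variables discriminate is to rank them by a simple separation measure. The feature names below are invented for illustration; the real list lives in the database file mentioned above:

```python
# Sketch: rank selection variables by |mean difference| / pooled spread,
# computed on toy signal and background samples.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
# Toy features: "d_len" separates well, "eta" barely at all.
sig = {"d_len": rng.normal(1.0, 0.3, n), "eta": rng.normal(0.0, 1.0, n)}
bkg = {"d_len": rng.normal(0.3, 0.3, n), "eta": rng.normal(0.05, 1.0, n)}

def separation(s, b):
    """Absolute mean difference divided by the pooled standard deviation."""
    return abs(s.mean() - b.mean()) / np.sqrt(0.5 * (s.var() + b.var()))

ranking = sorted(sig, key=lambda f: separation(sig[f], bkg[f]), reverse=True)
print(ranking)
```

Variables with larger separation contribute more to the classification; the ML algorithms below effectively combine all of them at once.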
Download from lxplus the two files that contain the data and Monte Carlo Lambda_c candidates in proton-proton collisions collected with the ALICE detector at CERN. From the main folder of the repository, execute the following lines, replacing <my_cern_user> with your NICE username:
cd machine_learning_hep/data
mkdir inputroot
scp <my_cern_user>@lxplus.cern.ch:/afs/cern.ch/work/g/ginnocen/public/exampleInputML/*.root .
If you don't have an lxplus account, you can find the same files in this dropbox folder: dropbox
The file doclassification_regression.py in the folder machine_learning_hep is the main script you will use to perform the analysis. This macro provides several functionalities.
You can select the type of optimisation problem you want to perform. In our case we will keep the default values, which are the ones needed for the Lambda_c study.
mltype = "BinaryClassification"
mlsubtype = "HFmeson"
case = "Lc"
You can select the transverse-momentum region you want to consider in the optimisation. In our case we will focus on the range from 2 to 4 GeV/c, as in the default settings.
var_skimming = ["pt_cand_ML"]
varmin = [2]
varmax = [4]
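The skimming step simply keeps candidates whose `pt_cand_ML` falls inside `[varmin, varmax]`. A minimal sketch (the toy pT values are invented):

```python
# Sketch of the skimming cut: keep candidates with 2 <= pT < 4 GeV/c.
import numpy as np

pt_cand_ML = np.array([0.8, 2.5, 3.9, 4.2, 6.0])  # toy candidate pT values
varmin, varmax = 2, 4

selected = pt_cand_ML[(pt_cand_ML >= varmin) & (pt_cand_ML < varmax)]
print(selected)  # -> [2.5 3.9]
```

Training separately in pT intervals is useful because the signal and background properties change with the candidate momentum.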
As described later, we need to define a training sample of pure signal and background candidates. The larger the number of candidates, the more accurate (up to a point!) the optimisation will be. We suggest starting with the default settings and increasing them according to the computing power of your machine.
nevt_sig = 1000
nevt_bkg = 1000
By setting the parameter
loadsampleoption = 1
you prepare the ML sample. In our analysis, signal candidates are taken from Monte Carlo simulations and background candidates are taken from data in a mass region where no signal is present (the so-called side-band regions).
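Conceptually, the prepared sample is just the two sets stacked together with a 0/1 label. A minimal sketch with synthetic placeholder features:

```python
# Sketch of assembling the ML training sample: signal rows from Monte
# Carlo (label 1), background rows from data side-bands (label 0).
import numpy as np

rng = np.random.default_rng(2)
nevt_sig, nevt_bkg = 1000, 1000

mc_signal = rng.normal(1.0, 0.5, (nevt_sig, 3))      # toy feature columns
data_sideband = rng.normal(0.0, 0.5, (nevt_bkg, 3))  # toy feature columns

X = np.vstack([mc_signal, data_sideband])
y = np.concatenate([np.ones(nevt_sig), np.zeros(nevt_bkg)])
print(X.shape, y.shape)  # (2000, 3) (2000,)
```

`X` and `y` in this form are what any of the classifiers below consume.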
By activating one (or more) of these bits, you will enable different types of algorithms.
activate_scikit = 1
activate_xgboost = 1
activate_keras = 0
For a first look, we suggest using the XGBoost algorithm, which is the fastest.
By activating these two bits, you tell the script to run the training and the testing of your algorithms in order to identify their best parameters. In the training step, the trained models are saved locally. In the testing step, a dataframe and a new TTree containing the probabilities obtained from each algorithm for each candidate are saved.
dotraining = 1
dotesting = 1
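The train-then-score flow can be sketched as follows. Scikit-learn's `GradientBoostingClassifier` stands in for XGBoost here so the example is self-contained; the data are synthetic:

```python
# Sketch of the train/test steps: fit a boosted-tree classifier, then
# compute a per-candidate signal probability on the held-out sample.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(1.0, 0.7, (500, 3)),   # toy signal features
               rng.normal(0.0, 0.7, (500, 3))])  # toy background features
y = np.concatenate([np.ones(500), np.zeros(500)])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(n_estimators=100)
model.fit(X_train, y_train)                 # "training" step
proba = model.predict_proba(X_test)[:, 1]   # "testing": per-candidate probability
print(proba[:3])
```

In the real script these probabilities are what gets written out per candidate, so a final cut on the probability replaces cuts on the individual selection variables.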
A long list of validation utilities, including cross-validation scores, ROC curves, learning curves and feature importance, can be activated using the following bits:
docrossvalidation = 1
dolearningcurve = 1
doROC = 1
doboundary = 1
doimportance = 1
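As a hedged illustration of what two of these utilities compute, here is a sketch of k-fold cross-validation scores and a ROC-AUC value on toy data (again with scikit-learn standing in for the script's internals):

```python
# Sketch: cross-validation scores and ROC AUC for a boosted-tree
# classifier on synthetic signal/background data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(1.0, 0.7, (300, 3)),
               rng.normal(0.0, 0.7, (300, 3))])
y = np.concatenate([np.ones(300), np.zeros(300)])

model = GradientBoostingClassifier(n_estimators=50)
scores = cross_val_score(model, X, y, cv=5)           # cross-validation
model.fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])  # ROC AUC (on the training set here)
print(scores.mean(), auc)
```

Cross-validation guards against overfitting to one particular train/test split, while the ROC curve shows the signal-efficiency versus background-rejection trade-off at every probability threshold.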