TMVA for D0 reconstruction - HFT era -
using the TMVA package from ROOT@RCF (5.34.09)
A) Set up
signal sample: 100k D0 with a power-law pT spectrum, 1 particle/event
background sample:
- 100 MB hijing events
- 10-100 central hijing events
B) Training
For this step, I chose the default configuration of each method:
- Cuts
- BoostedFisher
- BDT
- MLP
- Likelihood
- SVM
- RuleFit
- KNN
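Booking these methods with their defaults in TMVA from ROOT 5.34 might look like the sketch below. This is illustrative, not the actual macro: the output file name is a placeholder, the data-loading and AddVariable calls are elided, and the BoostedFisher boost options are an assumption (an empty option string falls back to each method's defaults).

```cpp
#include "TFile.h"
#include "TMVA/Factory.h"
#include "TMVA/Types.h"

void bookClassifiers() {
   // placeholder output file for the TMVA evaluation trees and histograms
   TFile* out = TFile::Open("TMVA_D0.root", "RECREATE");
   TMVA::Factory* factory =
      new TMVA::Factory("TMVAClassification", out,
                        "!V:!Silent:AnalysisType=Classification");

   // ... AddVariable / AddSignalTree / AddBackgroundTree calls go here ...

   // empty option string = default configuration of each method
   factory->BookMethod(TMVA::Types::kCuts,       "Cuts",       "");
   factory->BookMethod(TMVA::Types::kFisher,     "BoostedFisher",
                       "Boost_Num=20:Boost_Type=AdaBoost"); // assumed boost options
   factory->BookMethod(TMVA::Types::kBDT,        "BDT",        "");
   factory->BookMethod(TMVA::Types::kMLP,        "MLP",        "");
   factory->BookMethod(TMVA::Types::kLikelihood, "Likelihood", "");
   factory->BookMethod(TMVA::Types::kSVM,        "SVM",        "");
   factory->BookMethod(TMVA::Types::kRuleFit,    "RuleFit",    "");
   factory->BookMethod(TMVA::Types::kKNN,        "KNN",        "");

   factory->TrainAllMethods();
   factory->TestAllMethods();
   factory->EvaluateAllMethods();
   out->Close();
}
```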
Variables used:
- ptsum : scalar sum of the daughters' pT
- sL : decay length significance = decay length / its error
- CosPointing : cosine of the angle between the pair momentum and the decay-vertex vector
- stpion : pion daughter track significance = dcaXY / its error
- stkaon : kaon daughter track significance = dcaXY / its error
- pidca : product of the daughters' dcaXY
- thetaGJ : opening angle of the pair in the rest frame
- probability : probability of the fit
note: the daughters' dcaZ will be added later
a) training w/o PXL hits requirement
A summary of the efficiency for the different classifiers:
--- DataSetFactory : Signal -- training events : 7128 (sum of weights: 7128) - requested were 0 events
--- DataSetFactory : Background -- training and testing events: 16564 (sum of weights: 16564)
--- Factory : -----------------------------
--- Factory : Method: Cut value:
--- Factory : -----------------------------
--- Factory : Likelihood: -0.075
--- Factory : BoostedFisher: -0.866
--- Factory : MLP: +0.469
--- Factory : SVM: +0.449
--- Factory : BDT: -0.007
--- Factory : RuleFit: -0.024
--- Factory : -----------------------------
--- Factory : Testing efficiency compared to training efficiency (overtraining check)
--- Factory : --------------------------------------------------------------------------------
--- Factory : MVA Signal efficiency: from test sample (from training sample)
--- Factory : Method: @B=0.01 @B=0.10 @B=0.30
--- Factory : --------------------------------------------------------------------------------
--- Factory : BDT : 0.563 (0.595) 0.808 (0.816) 0.960 (0.961)
--- Factory : MLP : 0.501 (0.520) 0.801 (0.801) 0.955 (0.956)
--- Factory : RuleFit : 0.499 (0.506) 0.775 (0.767) 0.947 (0.946)
--- Factory : SVM : 0.438 (0.440) 0.780 (0.776) 0.944 (0.944)
--- Factory : Likelihood : 0.434 (0.454) 0.736 (0.742) 0.939 (0.941)
--- Factory : BoostedFisher : 0.436 (0.430) 0.750 (0.746) 0.923 (0.922)
--- Factory : Cuts : 0.168 (0.171) 0.357 (0.368) 0.499 (0.544)
--- Factory : --------------------------------------------------------------------------------

b) training with PXL hits requirement (PXL>1 for both daughters)
--- DataSetFactory : ------------------------------------------------------
--- DataSetFactory : Signal -- training events : 5267 (sum of weights: 5267) - requested were 0 events
--- DataSetFactory : Background -- training and testing events: 11580 (sum of weights: 11580)
--- Factory : -----------------------------
--- Factory : Method: Cut value:
--- Factory : -----------------------------
--- Factory : Likelihood: -0.111
--- Factory : KNN: +0.450
--- Factory : BoostedFisher: -0.887
--- Factory : MLP: +0.458
--- Factory : SVM: +0.454
--- Factory : BDT: -0.006
--- Factory : RuleFit: -0.041
--- Factory : -----------------------------
--- Factory : Testing efficiency compared to training efficiency (overtraining check)
--- Factory : --------------------------------------------------------------------------------
--- Factory : MVA Signal efficiency: from test sample (from training sample)
--- Factory : Method: @B=0.01 @B=0.10 @B=0.30
--- Factory : --------------------------------------------------------------------------------
--- Factory : BDT : 0.600 (0.608) 0.849 (0.852) 0.973 (0.971)
--- Factory : MLP : 0.531 (0.526) 0.815 (0.812) 0.966 (0.963)
--- Factory : KNN : 0.509 (0.526) 0.804 (0.820) 0.959 (0.960)
--- Factory : RuleFit : 0.518 (0.508) 0.799 (0.796) 0.959 (0.956)
--- Factory : SVM : 0.447 (0.459) 0.792 (0.791) 0.954 (0.951)
--- Factory : Likelihood : 0.443 (0.433) 0.730 (0.732) 0.948 (0.943)
--- Factory : BoostedFisher : 0.437 (0.446) 0.768 (0.763) 0.938 (0.935)
--- Factory : Cuts : 0.206 (0.215) 0.353 (0.348) 0.510 (1.000)
--- Factory : --------------------------------------------------------------------------------

summary:
- requiring PXL>1 improves all the classifiers
- the ordering between them remains the same
c) Test on real data
BDT, MLP and SVM were tested on a few files from Xin's latest production: star/institutions/lbl_prod/dongx/hft/Run14/AuAu200/Tracking_TPC_IST_PXL12/output_MuDst/
The plot below is for the BDT. In the data, the mvaResponse is compatible with the background. (yeah!)
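Applying the trained classifier to data could be sketched with a TMVA::Reader along the following lines. The weight-file path and helper-function names are assumptions, and the variable list mirrors the training setup above:

```cpp
#include "TMVA/Reader.h"

// variables must be registered as float, in the same order as in training
float ptsum, sL, cosPointing, stpion, stkaon, pidca, thetaGJ, probability;

void applyBDT() {
   TMVA::Reader* reader = new TMVA::Reader("!Color:!Silent");
   reader->AddVariable("ptsum",       &ptsum);
   reader->AddVariable("sL",          &sL);
   reader->AddVariable("CosPointing", &cosPointing);
   reader->AddVariable("stpion",      &stpion);
   reader->AddVariable("stkaon",      &stkaon);
   reader->AddVariable("pidca",       &pidca);
   reader->AddVariable("thetaGJ",     &thetaGJ);
   reader->AddVariable("probability", &probability);

   // placeholder path to the weight file produced by the training step
   reader->BookMVA("BDT", "weights/TMVAClassification_BDT.weights.xml");

   // inside the pair loop: fill the variables for each candidate, then
   float mvaResponse = reader->EvaluateMVA("BDT");
   // histogram mvaResponse and compare with the signal/background shapes
   (void)mvaResponse;
}
```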

