Neutral final states
Minutes of the NFS-meeting held on 6/4/94:
===========================================
 
In this meeting we had the following reports:
---------------------------------------------
 
Report about calibrating the resolution of the
==============================================
shower angles phi and theta: O.Behnke
===========================
 
I showed the way the calibration is done:
 
- take gammas from the K+K-pi0 data sample (after the 3c-fit): ~180k gammas
- as resolution I define the RMS of the distribution
  phi_measured - phi_3cfit   (same for theta)
  -> I assume the errors on phi_3cfit are negligible
- determine this resolution as a function of:
  * conversion layer
  * number of w-hits
  * C.Felder's fit-error =: sf   (this is on the minis as dpasho, dtasho)
- I assume that the resolutions depend linearly
  on sf (which is also what one observes in the data)
- applied calibration "algorithm":
  bin the data in groups of different conversion layer
  and numbers of w-hits; for each of these bins make a histogram
  of the resolution in bins of sf and do a linear fit:
  resolution(sf)|c.l.,w.h.  =  a0|c.l.,w.h.  +  a1|c.l.,w.h. * sf
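The binned linear-fit calibration described above can be sketched as follows (Python with NumPy; the array names, bin counts, and the minimum-entries cut are my illustrative assumptions, not values from the minutes):

```python
import numpy as np

def calibrate_bin(sf, dphi, n_sf_bins=10):
    """Fit resolution(sf) = a0 + a1*sf for one (conversion layer, w-hits) bin.

    sf   : C.F. fit errors for the gammas in this bin (hypothetical array)
    dphi : phi_measured - phi_3cfit residuals for the same gammas
    """
    edges = np.linspace(sf.min(), sf.max(), n_sf_bins + 1)
    centers, rms = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (sf >= lo) & (sf < hi)
        if sel.sum() < 20:          # skip poorly populated sf bins
            continue
        centers.append(0.5 * (lo + hi))
        rms.append(np.std(dphi[sel]))   # RMS of the residual distribution
    a1, a0 = np.polyfit(centers, rms, 1)  # linear fit: rms = a0 + a1*sf
    return a0, a1
```

In the real calibration this fit is repeated for every (conversion layer, w-hits) group of the data.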
 
As a result of the calibration I showed the obtained a0 and a1
distributions as a function of w-hits (and in four groups of
different conversion layer).
The a0, which one could (maybe) interpret as an offset of
the resolution due to physical effects (scattered showers,
asymmetric shower evolution), decrease with the number
of hits and with higher conversion layer. That the a0
are highest for PID-conversions is not an unexpected result.
The a1 increase with the number of hits and decrease
with the conversion layer. This means that the C.F. fit-errors
become more meaningful for showers with more hits (i.e.
more energy). For more quantitative information one has to
look at the plots.
 
Afterwards I made some consistency checks of
the calibration with the same data.
Auxiliary Definition:  sc := calibrated resolution
I showed the distribution
rms((phi_measured - phi_3cfit)/sc) as a function of
sc. For phi this is very flat and close to 1
over most of the sc range.
For theta there are small (~(5-10)%) deviations for very
small sc.
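This pull check can be sketched in a few lines (illustrative Python; the variable names and the quantile binning are assumptions on my part):

```python
import numpy as np

def pull_rms_vs_sc(dphi, sc, n_bins=10):
    """Consistency check: RMS of the pull (phi_measured - phi_3cfit)/sc
    in bins of the calibrated resolution sc; should be close to 1
    everywhere if the calibration works."""
    pull = dphi / sc
    edges = np.quantile(sc, np.linspace(0.0, 1.0, n_bins + 1))
    result = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (sc >= lo) & (sc < hi)
        if sel.sum() > 20:
            result.append((0.5 * (lo + hi), np.std(pull[sel])))
    return result   # list of (sc bin center, pull RMS)
```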
*
So this time I showed the way the calibration is done and that it
works in principle.
Next time I will show some special corrections one has
to apply ... and the comparison with MC.
*
 
 
The 'NEW LOOP METHOD': (B.Pagels)
======================
 
A new method to improve the lifetime resolution is under development
and testing: it uses some properties of the 'chis+1'-method and the old
loop method (now called 'OLD LOOP METHOD').
 (1) All variables and their errors are used:
        track parameters, photon impact points, photon energies,
        photon directions.
 (2) The method is semi-global, which means that about half the space of
     all possible solutions to the decay length reconstruction problem is
     taken into account.
 (3) A lifetime error is defined by the use of probabilities.
 
Ingredients for the new loop method are:
 (1) A 3c-fit: fixed decay length and free photon energies compared
     to the well-known 6c-fit.
 (2) The photon energy calibration of G.P.; the hit distributions are used!
 (3) The photon direction error calibration of O.B.
 
The new loop method varies the K0 decay length y between ymin and ymax and does
for each value of y the following:
 (1) Applies the 3c-fit
       ==> take chis_3c(y)**2
       ==> calculate probability w_3c(y) = exp(-0.5*chis_3c(y)**2)
 (2) Use the photon energies of the 3c-fit
       ==> calculate corresponding number of hits
        ==> calculate probability w_energy(y) with the measured number of
            hits and Guido's calibration of hit distributions
 (3) Use the photon directions of the 3c-fit
        ==> calculate chis_angle(y)**2 with the measured photon angles
            and Olaf's angle-error calibration
       ==> calculate probability w_angle(y) = exp(-0.5*chis_angle(y)**2)
 (4) Calculate a total probability
       w(y) = w_3c(y) * w_energy(y) * w_angle(y)
 
After the decay length scan, w(y) is normalized and transformed to w(t) with
p_k0(y), where t is the K0 lifetime. This probability density w(t) is then used
to calculate a lifetime expectation value and a lifetime standard deviation.
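The probability bookkeeping of the scan can be sketched as follows (Python; the chi-squared inputs, the energy probability, and the y-to-t mapping via p_k0(y) are assumed to come from the 3c-fit and the calibrations, which are not reproduced here):

```python
import numpy as np

def lifetime_from_scan(y_grid, chi2_3c, chi2_angle, w_energy, y_to_t):
    """Combine the three probabilities of the new loop method on a
    decay-length scan and extract lifetime mean and standard deviation.

    y_grid     : scanned K0 decay lengths between ymin and ymax
    chi2_3c    : chis_3c(y)**2 from the 3c-fit at each scan point
    chi2_angle : chis_angle(y)**2 from the angle-error calibration
    w_energy   : hit-distribution probability w_energy(y) at each point
    y_to_t     : mapping from decay length y to lifetime t (via p_k0(y))
    """
    # total probability w(y) = w_3c(y) * w_energy(y) * w_angle(y)
    w = np.exp(-0.5 * chi2_3c) * w_energy * np.exp(-0.5 * chi2_angle)
    t = y_to_t(y_grid)
    w = w / w.sum()                        # discrete normalization of w(t)
    t_mean = np.sum(t * w)                 # lifetime expectation value
    t_std = np.sqrt(np.sum((t - t_mean) ** 2 * w))
    return t_mean, t_std
```

The normalization here is a simple discrete sum over the scan points; the real method normalizes w(t) as a continuous density.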
 
Observations:
 (1) The new loop method recognizes two classes of bad events:
       i)  events with a broad probability density
       ii) events with two maxima in the probability density
 (2) There is a significant improvement compared to the old loop method
     (in terms of statistics or resolution or both, whatever one needs).
 (3) But on an Alpha 3000/300 workstation one event still costs 0.5 sec of
     CPU time.
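A toy sketch of how the two classes of bad events could be flagged (the width threshold and the peak counting below are my illustrative choices, not the method's actual cuts):

```python
import numpy as np

def flag_bad_event(w, width_cut=10.0):
    """Flag an event whose probability density w (sampled on the scan
    grid) is either broad or has two maxima."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    x = np.arange(len(w))
    mean = np.sum(x * w)
    broad = np.sqrt(np.sum((x - mean) ** 2 * w)) > width_cut
    # count interior local maxima of the sampled density
    interior_max = (w[1:-1] > w[:-2]) & (w[1:-1] > w[2:])
    bimodal = interior_max.sum() >= 2
    return broad or bimodal
```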
 
 
A.O.B.
======
* Thomas Ruf requested some MC of K0-->pi0pi0 with one converted photon,
  to be able to study the contribution of this background to the 3pi0
  decay. The CPU requirement will be approximately one week on one Alpha.
* The policy group requested a written report, summarizing the progress
  made in the Neutral Final States analysis since the Collaboration
  meeting. This report should be available by the end of April.
* It is foreseen to have presentations in the next TUESDAY meeting (19/4)
  about the 2pi0 analysis with 4 detected photons in the calorimeter as
  well as with one converted photon.
 
Next meeting:
=============
 
 **********************************************************************
* We agreed to have the next NFS-meeting on Tuesday 19/4/94 at 9.30 h  *
* in the CPLEAR meeting room.                                 =======  *
 **********************************************************************