Neutral final states
 
Minutes of the following NFS-meetings held on:
==============================================
 
13/7, 7/9 and 29/9/1993
 
Since there have been no NFS minutes since July,
the following minutes cover the news of the last three NFS meetings!
 
Reports from the following people are summarized below:
-------------------------------------------------------
- Guido Polivka
- Andreas Schopper
- Marcin Wolter
- Olaf Behnke
- Bernd Pagels
 
The presentation of Thomas Ruf on his analysis of converted photon
data is summarized in a CPLEAR note (CP/NFS/004).
 
In addition Marion Schaefer has written a CPLEAR note on the
reconstruction of K0 --> Pi0 Pi0 with 3 or 5 photons (CP/NFS/003).
 
 
ENERGY CALIBRATION (G.P.)
=========================
 
1. Method
---------
 
   Adding an additional constraint on the pi0 mass improved the
   energy resolution of the fit to better than 4% (absolute).
   Since the final resolution is expected to be (0.11..0.15)/sqrt(E),
   this should do no harm.
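   The pi0-mass constraint ties the two photon energies to their opening
   angle through m_gg^2 = 2*E1*E2*(1 - cos theta); a minimal sketch of this
   relation (the numeric inputs are illustrative, not calibration values):

```python
import math

M_PI0 = 134.9766  # MeV, neutral pion mass (PDG value)

def gamma_gamma_mass(e1, e2, opening_angle):
    """Invariant mass of a photon pair: m^2 = 2*E1*E2*(1 - cos(theta))."""
    return math.sqrt(2.0 * e1 * e2 * (1.0 - math.cos(opening_angle)))

def energy_from_pi0_constraint(e1, opening_angle):
    """Fix m_gg = m_pi0 and solve for the second photon energy."""
    return M_PI0**2 / (2.0 * e1 * (1.0 - math.cos(opening_angle)))

# Illustrative example: a 110 MeV photon with a 60-degree opening angle
theta = math.radians(60.0)
e2 = energy_from_pi0_constraint(110.0, theta)   # ~166 MeV
m = gamma_gamma_mass(110.0, e2, theta)          # recovers M_PI0 by construction
```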
 
2. Hit Distributions
-------------------
 
   The mean and the sigma of the hit distributions agree well
   between the KKpi0 data and the prototype. The resolution rises
   from 0.12/sqrt(E) for 110 MeV photons to 0.15/sqrt(E) for
   350 MeV photons (0.11/sqrt(E) resp. 0.145/sqrt(E) for the prototype).

   The presently available CPGEANT simulation of the calorimeter gives
   about 20% fewer hits, but the hit resolution is comparable to data.
 
3. Parametrization
-------------------
 
   The parametrization will contain all the parameters of the
   'Gerber' function plus the mean value of the hit distributions.
 
   The energy resolution (comparing the 'true' energy from the fit
   with the calibrated one) is somewhat worse than the hit resolution
   (0.125/sqrt(E) for 75 MeV gammas up to 0.18/sqrt(E) for 450 MeV
   gammas). The same effect was seen in the prototype.
 
4. Things left to do
--------------------
 
   a) Make the new calibration available.
   b) Decide how to calculate the pi0 mass.
   c) Monte Carlo.
   d) Write something up.

   Points a) and b) should be ok in week 29, point c) hopefully too
   (in a first approximation).
 
=> A report will come by the end of September.
----------------------------------------------
 
NFS-data production of 1992: (A.S.)
===================================
 
We have received at CERN all the NFSFIT data from the outside institutes,
except those from Rutherford Lab.
The NFSFIT production includes the Nakada method, 1C-, 5C-, and 6C-fits
on the 2-track events with 4 detected photons with good Z-info.
The data of CERN, PSI, Basel and Lyon represent approx. 90% of the
total 1992 data.

The NFSFIT data were reprocessed on the new alpha-300 to apply the
LOOP method as well as probability cuts (1C>0.01, 6C>0.01) and
the Q-cut (Q<0.6). The LOOP method was only applied to data with
a K0 momentum above 400 MeV/c.
The reprocessing took approx. 6 days on the alpha and the data were
stored on 2 Exabytes.
 
The total number of events is summarized in the following table:
 
NFSFIT data                       5'380'605 events
NFSLOOP data                      1'414'935 events
after 1C and 6C
with 0.1 prob cut                   841'221 events
 
The data as shown in the lifetime distribution for the status report
(after various analysis cuts) consisted of 336'500 events.
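For orientation, the retention fractions implied by the quoted counts
(plain arithmetic on the numbers above):

```python
# Event counts quoted above for the 1992 NFS production
nfsfit   = 5_380_605   # all NFSFIT events
nfsloop  = 1_414_935   # after NFSLOOP reprocessing cuts
prob_cut =   841_221   # after 1C and 6C with 0.1 prob cut
final    =   336_500   # after the analysis cuts of the status report

fractions = {label: 100.0 * n / nfsfit
             for label, n in [("NFSLOOP", nfsloop),
                              ("prob cut", prob_cut),
                              ("analysis", final)]}
for label, f in fractions.items():
    print(f"{label:8s}: {f:5.1f}% of NFSFIT")
```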
 
 
Efficiency of the 6-c fit after correction of the shower energy: (M.W.)
=======================================================================
 
The shower energy is not properly reproduced by the present version
of the MC. In order to study the influence of the shower energies
on the results of the 6-c fit, the shower energy was replaced
by the smeared true gamma energy. The energy resolution
       D(E)/E = 15%/sqrt(E)
was used.
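  The smearing described above can be sketched as follows; the assumption
that the 15%/sqrt(E) coefficient refers to E in GeV is mine:

```python
import numpy as np

def smear_energy(e_true, coeff=0.15, rng=None):
    """Gaussian smearing with D(E)/E = coeff/sqrt(E), i.e. sigma_E = coeff*sqrt(E).

    Assumption of this sketch: energies are in GeV, so that the quoted
    15%/sqrt(E) coefficient is dimensionally consistent."""
    if rng is None:
        rng = np.random.default_rng(0)
    sigma = coeff * np.sqrt(e_true)
    return rng.normal(e_true, sigma)

e_true = np.full(100_000, 0.200)           # 200 MeV photons, in GeV
e_smeared = smear_energy(e_true)
rel_res = e_smeared.std() / e_true.mean()  # ~0.15/sqrt(0.2) ~ 0.34
```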
  The probability of the 6-c fit is very sensitive to the neutral
shower energies. After using the true energy, the number of events
with a 6-c fit probability around zero decreased by nearly a factor
of two. This affects the pairing chosen by the 6-c fit (finding the
pairs of photons originating from the same PI0).
  The reconstructed K0 decay length (and the K0 lifetime) are sensitive
to changes of the pairing, while the K0 momentum is not very
sensitive to the pairing.
  The K0 lifetime resolution obtained from the MC simulation
can be parametrized by two Gaussians, one with a width of 0.5 tauS,
the second with a width of around 2 tauS. By using the smeared true
gamma energy instead of the shower energy, the number of events in
the wide Gaussian decreased from 46% to 40% of the total number
of events. This shows that the lifetime resolution is sensitive to
the proper gamma pairing. At the same time, the efficiency of
the standard cuts (prob. of 6-c fit > 0.1, Q-cut < 0.6) increased from
44% to 65%.
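  The two-Gaussian resolution model can be written down directly; a sketch
sampling it with the widths and fractions quoted above (synthetic residuals,
not the actual MC):

```python
import numpy as np

def sample_resolution(n, tail_frac, core_sigma=0.5, tail_sigma=2.0, seed=1):
    """Draw lifetime residuals (in units of tauS) from a two-Gaussian model:
    a narrow core of width core_sigma and a wide component of width
    tail_sigma, with a fraction tail_frac of events in the wide Gaussian."""
    rng = np.random.default_rng(seed)
    in_tail = rng.random(n) < tail_frac
    sigma = np.where(in_tail, tail_sigma, core_sigma)
    return rng.normal(0.0, sigma)

res_shower = sample_resolution(100_000, tail_frac=0.46)  # shower energies
res_true   = sample_resolution(100_000, tail_frac=0.40)  # smeared true energies
rms_shower, rms_true = res_shower.std(), res_true.std()
# The overall RMS shrinks as the wide-Gaussian fraction drops
```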
 
 
Background in the 2 pi0 channel: (M.W.)
=======================================
 
1. Background from 3pi0 events.
The reconstructed lifetime of 3pi0 events (only 4 gammas
detected) is usually between 10 and 30 tauS, so this type of
background contributes to the "tails" of the lifetime resolution.
The ratio of 3pi0 events to good 2pi0 events can be reduced
by the cut on the probability of the 1C-fit and by the cut on the
gamma-gamma invariant mass presented by Olaf:
  - after NFSFIT (mainly asking for 4 neutral showers)   5%
  - after prob. 6C-fit > 0.1 and P(K0) > 400 MeV/c       1.6%
  - after Olaf's cut                                     0.5%
 
2. Additional neutral showers.
In the golden PI+PI- sample one finds neutral showers
(i.e. neutral, good Z, no wire ambiguities) in 18% of the events.
Half of them cluster around charged tracks and the rest
are more or less randomly distributed over the whole CALO surface.
It seems that the cut on the probability of the 6C-fit and Olaf's cut
on the gamma-gamma invariant mass can reduce the background originating
from events where three gammas and one fake shower were detected.
 
 
Report on the never(ever?)ending tail story: (O.B.)
===================================================
I made some tests with the latest PI0PI0-4GAMMA
Monte Carlo data from Marcin. In these data
the gamma energies are first corrected to the
true ones and then smeared by hand (Gaussian).
Studying this dataset, I was only interested in the
effects of the loop method on the lifetime resolution.
I decided to concentrate on events where the 6C-fit found
the right gamma-gamma pairing.
A further cut was applied on the K0 momentum (>400 MeV)
and I asked for prob6c > 0.1.
After all this I showed two plots with the lifetime
resolution of the 6C-fit, where I made
additional cuts on the "variance" from the loop method.
The first, with a cut variance < 0.5 tauS, showed an improved
lifetime resolution (RMS = 0.95 tauS) compared to the
resolution without this cut (RMS = 1.5). The second,
with the cut variance > 0.5 tauS, is of course much broader (RMS = 1.88).
For very big values of the variance there seems
to be a decorrelation between the resolution of the 6C-fit
and the variance. This must be due to the gamma-energy
information, which the 6C-fit exploits but the
loop method does not take care of.
So Philippe suggested implementing this energy information
in the loop method by putting it into the weight that
each point on the loop is given. This is in work now.
 
 
 
Improvement of the Q-cut (O.B.)
===============================
 
I reported on a short study
to find out whether the so-called Q-cut
could be improved.
As a reminder,
the Q-variable, introduced
by Bernd, is defined as

 Q = |(mij*mij+mkl*mkl)-(mik*mik+mjl*mjl)|/(mk0*mk0-2*mpi0*mpi0)

where mij is the invariant
gamma-gamma mass of the photons i and j.
The invariant masses are obtained through the
6c-fit neutral decay vertex reconstruction, but (important!)
the pairings ij kl and ik jl are just the ones
the 6c-fit did NOT choose for reconstruction,
because they would not give the highest
probability in the fit.
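Written out in code, the Q-variable
is a simple function of the four
not-chosen gamma-gamma masses; a
minimal sketch (PDG masses; the input
mass values are illustrative, not
taken from data):

```python
def q_value(m_ij, m_kl, m_ik, m_jl, m_k0=497.611, m_pi0=134.9766):
    """Q-variable of the not-chosen pairings (all masses in MeV):
    Q = |(m_ij^2 + m_kl^2) - (m_ik^2 + m_jl^2)| / (m_K0^2 - 2*m_pi0^2)."""
    numerator = abs((m_ij**2 + m_kl**2) - (m_ik**2 + m_jl**2))
    denominator = m_k0**2 - 2.0 * m_pi0**2
    return numerator / denominator

# Illustrative (hypothetical) values resembling a wrong-pairing
# configuration: one very light and one very heavy not-chosen pair
q = q_value(m_ij=50.0, m_kl=450.0, m_ik=100.0, m_jl=100.0)  # well above 0.6
```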
 
As you all know, by cutting on Q < 0.6
one can reject almost all events
where the 6c-fit had chosen the wrong pairing
(due to an accidentally higher probability in the fit).

The question was now whether, by projecting
all four variables mij, mkl, mik, mjl (of which
only three are independent!) onto the single value
Q, one could have lost information that in principle
could be exploited for a more efficient cut.
 
So I studied, with full Monte Carlo data from
Marcin, the
        mij vs. mkl   double plots,
where the ij kl pairing is again one of the
pairings that the 6c-fit did not choose.
In case the 6c-fit chooses the wrong pairing for
the reconstruction, one observes two bumps at values
of 50 MeV vs. 450 MeV and 100 MeV vs. 100 MeV.
These can be explained by thinking about the
situation in which the selection of a wrong
pairing by the 6c-fit is most likely to happen.
For instance, assume that the pairing 12 34
is the correct one. If the impact points
of 2 and 3 are very close together, the 6c-fit
may choose the wrong pairing 13 and 24.
From the geometry it is then easy to understand
that the not-chosen pair 23 has a very low mass and
the pair 14 a very high one. This explains the
first bump at 50 MeV vs. 450 MeV. The likewise
not-chosen pairs 12 and 34 are symmetric and give
the second bump at 100 MeV vs. 100 MeV.
 
So, in order to reject events with this dangerous
liaison of two impact points, I introduced a cut
requiring that the invariant mass of all not-chosen
gamma-gamma pairs lies above 80 MeV.
By playing around I found that by additionally
requiring the same invariant masses to be smaller
than 380 MeV, optimum results were reached in terms
of wrong-pairing reduction vs. loss of right pairings.
The reason for this second cut is not as obvious
to me as for the first one.
 
Comparison of the performance with the Q-cut:

                 no cut   Q<0.3   80<mij<380

 right pairing    6099    4670       4485

 wrong pairing     486      28          3
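In plain numbers, the table translates
into the following efficiencies and
rejection factors:

```python
# Counts from the table above (full MC sample)
right_all, wrong_all = 6099, 486
cuts = {"Q<0.3": (4670, 28), "80<mij<380": (4485, 3)}

results = {}
for name, (right, wrong) in cuts.items():
    eff = right / right_all         # fraction of right pairings kept
    rej = 1.0 - wrong / wrong_all   # fraction of wrong pairings removed
    results[name] = (eff, rej)
    print(f"{name:11s} efficiency {eff:.1%}, wrong-pairing rejection {rej:.1%}")
```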
 
 
In the mij vs. mkl double plots one could spot
the wrong-pairing events that were rejected by
the new cut but not by the old Q-cut.
These were located at values of 40 MeV vs. 350 MeV.
An explanation could be events where
again two impact points of gammas from
different pi0's are close together, but where
another photon goes in the backward direction.
By drawing this on a sheet of paper you will realize
that in this case the not-chosen pairings
might give two entries in the region of 40 MeV vs.
350 MeV, which then have a very low Q-value.

In principle one should now check, from the
impact-point distributions, the assumption
I made for the origin of wrong pairings, namely
that two impact points lie close together.
This could still be too much of a simplification
of a more complex situation that one could
deal with better...
 
 
 *** Pi0-Pi0 status *** (B.P.)
==============================
 
Topics of the talk:
-------------------
 (1) Statistics & private production
 (2) K-Pi identification & background
 (3) From q to qprime
 (4) What does the loop method do?
 (5) Conclusion
 
Item (1):
---------
 Our NFS_LOOP production immediately before Noulli's status report used the
 following cuts:
   o prob-1c > 0.01
   o prob-6c > 0.01
   o q < 0.6
   o application of the loop method, if p_K0 > 400 MeV
   o no particle identification cuts !!!
 This gave a data sample of about 1.4 million events and is the basis of
 my private production:
   o Cuts on primaries:
       e.g. momentum, track quality, remove kaon candidates in PID 22, etc.
            But again, no particle identification was done with the PIDs.
                            ===> 15.4 % of all events were rejected
   o prob-1c > 0.1          ===> 11.5 %
   o prob-6c > 0.1          ===>  8.9 %
   o p_K0 > 400 MeV         ===> 23.9 %
 About 571k events survived these cuts (108k above 3 tauS). Without any
 further cuts there are, at 15 tauS, 100 times fewer events than at 0 tauS
 in the lifetime distribution, but we need a factor of about 3000-4000 to
 see CP violation. What is the background ???
 
Item (2):
---------
 The dE/dx band of the kaon candidate shows a significant contribution
 of pions, and the dE/dx band of the pion candidate a kaon contribution.
 The amount of these background events cannot be responsible for the
 background seen in the lifetime distribution. To reject these pipi
 and kk contributions in our data, a set of cuts on the dE/dx signal of S1
 and the TOF information was defined: 90% (60%) of the events at 0 tauS
 (15 tauS) survived the cuts.
 
Item (3):
---------
 As we know from the q-value and further studies of O.B. and M.W., the
 four 'wrong' gamma-gamma invariant masses of the 6c-fit are well suited
 to identify wrong pi0-gamma-gamma pairings. But only three of the four
 squared masses are linearly independent. Therefore a Principal Component
 Analysis (PCA) was used to identify three new quantities, named x1, x2 and
 x3, that contain all the information of the previous four masses. This was
 done by using a mini-MC 'training sample' of pi0-pi0 events with the correct
 pi0-gamma-gamma pairing.
 Then a second mini-MC sample of pi0-pi0 events with the wrong pi0-gamma-gamma
 pairing was used to compare the (x1,x2,x3) distributions of the two samples.
 It turned out that the quantity

                  qprime = max ( |x1|, |x2| )

 is a very good 1-dimensional measure of wrong pi0-gamma-gamma pairings. Bad
 events are located at high qprime values.
 A look at MC pi0-pi0-pi0 background events proved that the qprime
 distribution is also able to reject such events !!!
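 The PCA step amounts to diagonalizing the covariance of the mass
 combinations on the correct-pairing training sample and projecting each
 event onto the resulting axes; a minimal sketch with synthetic
 (hypothetical) 3-dimensional samples, since the mini-MC samples are not
 reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)

def pca_axes(train):
    """Principal axes and scales from a correct-pairing training sample."""
    mean = train.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(train, rowvar=False))
    order = np.argsort(eigval)[::-1]          # sort eigenvalues descending
    return mean, eigvec[:, order], np.sqrt(eigval[order])

def qprime(events, mean, axes, scales):
    """qprime = max(|x1|, |x2|) of the first two standardized components."""
    x = (events - mean) @ axes / scales
    return np.abs(x[:, :2]).max(axis=1)

# Synthetic (hypothetical) 'mass combination' samples standing in for the
# mini-MC training samples:
train = rng.normal(0.0, [1.0, 0.5, 0.2], size=(5000, 3))  # correct pairings
bad   = rng.normal(3.0, [1.0, 0.5, 0.2], size=(5000, 3))  # wrong pairings, shifted
mean, axes, scales = pca_axes(train)
qp_good = qprime(train, mean, axes, scales)
qp_bad  = qprime(bad,   mean, axes, scales)
# Wrong pairings populate high qprime values, as in the talk
```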
 
 In the data, the qprime distribution in different lifetime bins shows
 at large lifetimes a dominating 3pi0-like background contribution. Cutting
 at different qprime values, this background is significantly reduced, but at
 the cost of good events. Applying a hard cut (qprime < 0.4), one comes into
 the region of the expected level of CP violation (a factor of 3 missing), but
 then the statistics are poor.
 
Item (4):
---------
 The effect of the loop method was studied by using a sample of events with
 low background (qprime < 0.4). The improvement in the lifetime resolution is
 very promising, but no answer can be given as to whether tails in the
 lifetime resolution are reduced or not (are there any tails ???). The attempt
 to give sigma_1 and sigma_2 values of a double-Gaussian parametrisation of
 the lifetime response function as a function of different cuts on the
 lifetime error of the loop method failed: the parametrisation is too bad, if
 applied to the full lifetime range.
 Again, the loop method is not very effective in terms of statistics.
 
Conclusion:
-----------
 o The lifetime distribution is dominated by background: there are
   contributions from pipi and kk, but the main contribution behaves like
   3pi0 in the new quantity qprime (but is it really 3pi0 ???).
 o The loop method improves the lifetime resolution significantly, but what is
   the effect on tails in the lifetime resolution ?
 o We have no parametrisation of the lifetime distribution over the full
   lifetime range.
 o The available cuts on the background and on the lifetime resolution are
   v e r y  inefficient, e.g. prob-1c, prob-6c, improved q-cuts, qprime,
   sigma_loop.
   ===> need for additional tools, e.g. shower angle, 14c-fit, calo hits
   instead of E_gamma in c-fits, ...(?)
 o MC desperately needed:
   (i)  Background: 3pi0 (!), pipi, kk, kpik0pi0, ...
   (ii) Pi0pi0 at all lifetimes
 
 
 
=========================
= Date of next meeting: =
=========================
 
We agreed to have the next NFS meeting on Tuesday 12/10/93
at 14.00h in the CPLEAR meeting room.