Dear Costas,
Thank you very much for your detailed notes.
Here I would like to explain what I'm trying to do.
Yes, I'm working on the IceCube experiment. Historically we do not use a physics flux for our neutrino simulation (NuGen); instead, we generate a simple power-law spectrum (for now, let's stick with E^-1) and weight it later to a physics flux (an atmospheric flux model, etc.).
We also force ALL neutrinos to interact inside a detection volume with total column depth L [g/cm^2], so the probability that a neutrino interacts is
Pint = 1 - survival_probability = 1 - exp(-Na * Xsec * L / M)
where
Xsec : total cross section on a proton/neutron target for the given neutrino energy and flavor
Na * L / M : total number of targets in the column depth L
Since GENIE has different cross sections for different nuclei, this equation is not directly applicable to GENIE, but for now let's leave it as it is to keep the problem simple (I have already worked out how to treat GENIE's case).
L grows as the neutrino energy goes up, so it must be calculated event by event.
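To make the bookkeeping concrete, here is a minimal standalone sketch of that Pint calculation (my own illustration, not NuGen code; the example numbers are made up):

```cpp
#include <cmath>

// Sketch of the Pint formula above; not NuGen code. The numbers in the
// usage note below (1e-38 cm^2, 1e5 g/cm^2) are hypothetical.
double interaction_probability(double xsec_cm2,     // total cross section [cm^2]
                               double col_depth,    // column depth L [g/cm^2]
                               double molar_mass_g) // target molar mass M [g/mol]
{
  const double Na = 6.02214076e23; // Avogadro's number [1/mol]
  // Na * L / M = number of targets the neutrino sees along its path
  return 1.0 - std::exp(-Na * xsec_cm2 * col_depth / molar_mass_g);
}
```

For example, a 1e-38 cm^2 cross section through 1e5 g/cm^2 gives Pint of order 6e-10, i.e. essentially Na * Xsec * L / M in the small-probability limit.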
Let's think about simulating (injecting) 10000 muon neutrinos with an E^-1 energy spectrum from 10^0 to 10^2 GeV, within an injection circle of radius R.
The injection area is PI*R^2 [cm^2], and the injection solid angle is 4*PI [sr].
Once we have the result we plot, for example, the energy of the secondary leptons (for a charged-current interaction it is a muon; for a neutral-current interaction it is another neutrino with energy less than the primary's).
If we do not apply any weight, the integral of the histogram will be exactly 10000, because we made all the neutrinos interact.
If we apply only Pint as the weight:
tree->Draw("log10(Secondary_Energy)", "Pint");
then you get the actual number of interacted neutrinos when 10000 numu are injected. --- (A)
It's still not useful for physics analysis; we have to convert the vertical axis to # of events / sec.
For this conversion we need some calculation, but it's basically just a simple scale factor multiplied by a physics flux. --- (B)
The event weight will be:
weight = Pint * (PI*R^2 [cm^2] * 4*PI [sr]) * (E_integ / Ep^-1 [GeV] * F(Ep, costheta) [1/GeV/cm^2/sr/s]) / Ngen
tree->Draw("log10(Secondary_Energy)", "weight");
where
E_integ : integral of E^-1 from 10^0 to 10^2 GeV (= ln(100))
Ep^-1 : pow(energy of the primary neutrino of the event, -1)
F(Ep, costheta) : any physics flux as a function of primary energy and cos(zenith) (e.g. atmospheric neutrinos)
Ngen : number of generated neutrinos, in this example Ngen = 10000
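As a standalone sketch (my own illustration with made-up numbers, and a constant stand-in for the flux F), the whole weight reads:

```cpp
#include <cmath>

// Sketch of the NuGen-style event weight above. This is a plain
// re-coding of the formula; in a real job `flux` would be an
// atmospheric flux model evaluated at (Ep, costheta), not a constant.
double event_weight(double pint,      // interaction probability of this event
                    double radius_cm, // injection circle radius R [cm]
                    double ep_gev,    // primary neutrino energy Ep [GeV]
                    double flux,      // F(Ep, costheta) [1/GeV/cm^2/sr/s]
                    long   n_gen)     // number of generated neutrinos
{
  const double pi      = 3.14159265358979323846;
  const double area_sr = pi * radius_cm * radius_cm * 4.0 * pi;   // cm^2 * sr
  const double e_integ = std::log(100.0); // integral of E^-1 over 1..100 GeV
  // E_integ / Ep^-1 undoes the E^-1 generation spectrum
  return pint * area_sr * (e_integ / std::pow(ep_gev, -1.0)) * flux / n_gen;
}
```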
That's all we are doing now with our NuGen simulation.
Now, with the recent low-energy extension of IceCube, we would like to use GENIE inside our simulation chain.
Because we still need to use NuGen at high energy, we want to keep the new procedure as similar as possible to what we currently do, in order to minimize users' confusion.
The simplest way is to use exactly the same weight equation as NuGen, namely, to find the Pint value in the GENIE EventRecord object. GMCJDriver actually calculates this value inside the GenerateEvent1Try function, but we can't use it directly because GENIE doesn't force an interaction: some injected neutrinos are thrown away, so the number of actually injected neutrinos is not the same as the number of generated events.
However, we may still be able to use a similar value as a substitute for Pint. If we can draw something corresponding to (A), making plot (B) should be quite straightforward.
That was my original question to this mailing list: how to draw plot (A). (And I'm sorry my explanation was not clear.)
To make plot (A), I thought the simplest way is to call
mcj_driver->KeepOnThrowingFluxNeutrinos(false);
and use
Pint' = mcj_driver->GlobProbScale()
as a substitute for Pint. (If we set fGenerateUnweighted = false, we need to multiply Pint' by EventRecord::Weight(), I suppose?)
According to your comments
>
> You need to take into account that, internally, GENIE cranks up the tiny neutrino interaction probabilities by scaling them all with their maximum possible value for your particular experimental setup. This interaction probability scale is, effectively, scaling time. So if you find that your MC job corresponds to an exposure of T sec, but a Pmax probability scale was used, then the actual exposure is T/Pmax.
that seems to work for me… because here Pmax is essentially a weight parameter.
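Just to check my understanding of that T/Pmax scaling, as a toy sketch (the helper name and numbers are hypothetical, not GENIE code):

```cpp
// Toy check of the T/Pmax scaling quoted above: if counting thrown flux
// neutrinos implies a naive exposure T = n_thrown / rate, the true
// exposure is T / Pmax. The helper name and numbers are hypothetical.
double actual_exposure_sec(long   n_thrown,     // flux neutrinos thrown so far
                           double flux_rate_hz, // expected flux neutrinos per sec
                           double pmax)         // GENIE global probability scale
{
  const double naive_t = static_cast<double>(n_thrown) / flux_rate_hz; // [s]
  return naive_t / pmax;
}
```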
Does that make sense?
Best,
Kotoyo
On Feb 21, 2013, at 8:56 AM, [log in to unmask] wrote:
> Dear Kotoyo.
>
> We do not know the details of what you want to do. I understand that you simulate atmospheric neutrinos in IceCube. How about something like the following:
>
> The atmospheric neutrino flux drivers use input 2-D flux histograms from the BGLRS and FLUKA groups. They describe the energy and cosine(zenith_angle) distributions of various neutrino species. The histograms have bin contents given in units of # neutrinos per GeV per steradian per m^2 per sec, or something very similar.
> Take the 2-D histogram for the numu flux, integrate it over neutrino energy and cosine zenith angle and then multiply with 2*pi. The number you get, say N, is the number of flux numu per m^2 of surface perpendicular to the flux, per sec. Say that you generate events within a sphere of radius R which encloses the detector. Neutrinos from all directions see the same area, pi*R^2 (in m^2). So, in that spherical volume, you expect N*pi*R^2 flux numu per second. If you keep track of the number of flux neutrinos you have thrown inside that volume since the start of the MC job, you will know how much simulated time has elapsed. This is similar with what we do in the accelerator neutrino event generation apps where every flux neutrino thrown increments a 'protons on target' counter by an amount obtained by the beamline simulations.
> You need to take into account that, internally, GENIE cranks up the tiny neutrino interaction probabilities by scaling them all with their maximum possible value for your particular experimental setup. This interaction probability scale is, effectively, scaling time. So if you find that your MC job corresponds to an exposure of T sec, but a Pmax probability scale was used, then the actual exposure is T/Pmax.
>
> Please think this through to see whether something like what outlined above might work.
> Alternatively, you can select some simple sub-volume of your detector (simple shape and uniform, so that the number of targets seen for neutrinos of various directions is easy to calculate) and do the calculation described in the previous email based only on the number of events generated within that volume (assuming it is large enough so as not to run into problems with MC statistics).
>
> Also, please let us know what exactly you are trying to do. Are you simulating contained atmospheric neutrino interactions and neutrino-induced upward-going muons in the same MC job? Although you can, this is grossly inefficient, as for up-going muons you need to consider an enormous volume and the acceptance is, on average, tiny. Other methods are typically used for up-going muons.
>
> cheers
> Costas
>
> --
> Dr Constantinos Andreopoulos
> Rutherford Appleton Laboratory
> Harwell Oxford campus
> OX11 0QX, UK
> tel (Mob): +44-(0)7540 847333
> tel (RAL): +44-(0)1235 445091
> fax (RAL): +44-(0)1235 446733
> http://costas.andreopoulos.eu
>
> Sent from my iPhone
>
> On 21 Feb 2013, at 00:19, "Kotoyo Hoshina" <[log in to unmask]> wrote:
>
>>
>> Dear Costas,
>>
>> Thank you for your answers, the sk_sample_norm_abs.C helps a lot to understand how to get an absolute scale.
>> So, it seems the GENIE event record doesn't store any information that I can use as the weight; instead, I have to get it from a separate calculation convolving `flux(Ev) * cross-section(Ev) * N' (where N = number of targets along the path length), right?
>>
>> Now I see how to calculate the number, but unfortunately it's not simple for my case.
>> First of all, my detection volume is a multi-material cylinder (the upper part is ice, the bottom part is rock).
>> Moreover, the neutrino beam comes from 4 pi as a wide, flat beam. The path length within the detector depends on the zenith angle and on where each neutrino passes through.
>> Thus, doing the flux * Xsec convolution is almost the same as running another simulation…
>>
>> So I really want to calculate each event scale (weight) event by event. Is there any example code for such type of event weight somewhere in genie directory?
>>
>> If I can't find an example, my next plan is to configure GMCJDriver with
>>
>> mcj_driver->KeepOnThrowingFluxNeutrinos(false);
>>
>> and use mcj_driver->GlobProbScale() as the event weight. Then I know exactly how many neutrinos were injected, and the weighted events should correspond to the number of interacted neutrinos… is my understanding correct?
>>
>> Thank you,
>> Kotoyo
>>
>>
>>
>> On Feb 20, 2013, at 1:38 PM, [log in to unmask] wrote:
>>
>>> Dear Kotoyo
>>>
>>> I am not sure you got it right.
>>> Don't use the cross-section stored in the generated event record.
>>>
>>> You need to integrate the `flux(Ev) * cross-section(Ev)' product.
>>> This will give you the expected number of events of type X for an exposure Y
>>> (X,Y depend on which cross-sections you choose, the absolute normalisation of your
>>> input flux etc). Compare the expected number of events X per exposure Y, with the actual
>>> generated number of events X. This will give you the absolute normalisation of
>>> your generated sample. It is very simple, and the details depend very much on your specific
>>> problem. Here is an example (with plenty of comments) of how I do this for the T2K/SuperK samples:
>>> https://genie.hepforge.org/trac/browser/trunk/src/contrib/t2k/sk_sample_norm_abs.C
>>>
>>> The above is sufficient for most event generation cases.
>>> For more complex event generation cases, such as the near detectors in the NuMI and JPARC
>>> beam-lines, where flux is described using beam simulation n-tuples, the absolute normalisation
>>> is computed by the specialised event generation apps included in GENIE.
>>>
>>> Phil: Yes, for the cases where one has a detector with multiple materials, and one doesn't use one
>>> of the GENIE T2K or NuMI event generation apps, it would be nice to have a tool to do what you suggest.
>>> There isn't any. However, even if you have multiple materials, you can still pick just a single material for the
>>> purpose of calculating the absolute normalisation of your entire sample.
>>>
>>> cheers
>>> Costas
>>>
>>>
>>> ---
>>> Dr. Constantinos (Costas) Andreopoulos
>>> Staff Scientist
>>>
>>> Science and Technology Facilities Council (STFC)
>>> Rutherford Appleton Laboratory
>>> Particle Physics Dept. (R1, 2.90)
>>> Harwell Oxford Campus, OX11 0QX, UK
>>> tel (office): +44-(0)1235 445091
>>> fax (office): +44-(0)1235 446733
>>> mobile: +44-(0)7540 847333
>>> w3: http://costas.andreopoulos.eu
>>>
>>>
>>>
>>>
>>>
>>> On 20 Feb 2013, at 20:02, Kotoyo Hoshina wrote:
>>>
>>> Dear Philip,
>>>
>>> Thank you for your immediate response!
>>>
>>> With the total cross section given by the spline, along with your flux, you can calculate the expected total number of events as flux*(cross section), and use it to scale your event histogram.
>>>
>>> I see, I think now I start to understand how to treat my problem.
>>> The EventRecord holds the cross section of the "selected" event. It also holds the interaction probability that this specific event happens.
>>> So, when I pick one event and ask "what is the probability that this event happens?", it is simply obtained with
>>> prob = evt->Probability();
>>> The sum of prob over all events will be how many neutrinos actually interacted when 2000 primary neutrinos are injected
>>> (since I generated 2000 events). In other words, this is the event weight.
>>>
>>> Am I correct?
>>> (It looks very simple, and I'm a bit ashamed that I didn't understand it earlier)
>>>
>>> This gets a little tricky when your detector contains multiple elements, because you have to get multiple splines, and weight them correctly, so I wonder if any of the GENIE developers know an easier way (or some existing tools that do this)
>>>
>>> Here I think you are explaining another way to weight events, which I'm also using for our group's neutrino MC:
>>> weight = 1 - exp(-N * total cross section * path length) (~ N * total cross section * path length at low E)
>>> In this case ALL injected neutrinos interact.
>>> Yes, it would be nice to have a good tool to make this easier! (I already wrote the code in my test program, though.)
>>>
>>> I hope I now understand how to weight GENIE events correctly, but weighting is always tricky and dangerous if users don't know how the events were generated.
>>> So it would be nice if the experts could add a short section on "how to weight GENIE events" to the GENIE user's manual in the near future :).
>>>
>>> Thank you again,
>>> Kotoyo
>>>
>>>
>>> On Feb 20, 2013, at 9:49 AM, Philip Rodrigues <[log in to unmask]> wrote:
>>>
>>> Dear Kotoyo,
>>> It sounds like you want to make cross section plots (or something that you can easily calculate given the cross section, like events in a given exposure in your detector). For that you'll need the cross section spline file. I found it easier to work with a ROOT version of the spline file, which you can create from the XML spline file using gspl2root. With the total cross section given by the spline, along with your flux, you can calculate the expected total number of events as flux*(cross section), and use it to scale your event histogram.
>>>
>>> This gets a little tricky when your detector contains multiple elements, because you have to get multiple splines, and weight them correctly, so I wonder if any of the GENIE developers know an easier way (or some existing tools that do this)
>>> Hope that helps,
>>> Phil
>>>
>>> On 02/19/2013 07:17 PM, Kotoyo Hoshina wrote:
>>> Hello,
>>>
>>> I'm a beginner with the GENIE neutrino generator, and I have a question about how to weight GENIE events.
>>> The "weight" I mean is NOT the same thing as the field I can get from the EventRecord with the following function:
>>> evt->Weight();
>>> which for now I always set to 1.
>>>
>>> I generated 2000 GENIE events with the gevgen command, and converted the resulting .root file to a gst-format ROOT file.
>>> When I draw a 1D histogram of the El parameter (energy of the final-state lepton), the vertical axis is the number of generated events per energy bin. This is just an unweighted MC histogram and can't be used for physics analysis.
>>> Now I want to convert this into a physics plot: the number of interactions that actually happen per 2000 primary neutrinos. Due to the small cross sections, the vertical axis should become very small (~1e-36).
>>>
>>> According to the GENIE user manual, GENIE uses an internal global probability scale chosen from the highest input energy and the longest path length. At first, I thought this number would be the constant event weight needed to get the actual number of events:
>>>
>>> event_weight = gmcj_driver->GlobProbScale()
>>> ...
>>> gst->Draw("El", event_weight);
>>>
>>> but after reading GMCJDriver carefully I noticed this won't work, because the event loop inside GMCJDriver actually iterates many more than 2000 times. There is no way to know how many neutrinos were actually injected, and thus no way to get the normalization factor.
>>> It is always simple to calculate the event weight of a specific event: it is just the interaction probability (easy to obtain from the EventRecord). But weighting an entire MC set depends strongly on how that set was generated...
>>> and I couldn't find any documentation about weighting procedures; instead, I found the re-weighting stuff (which I currently don't need).
>>>
>>> Is there any document or example code?
>>>
>>> thank you,
>>> kotoyo
>>>
>>> --
>>> Scanned by iCritical.
>>