When you can get to the web pages, you'll find the instructions. Since
the chances of the rsync server working are correlated with the web
server functioning, you'll note that there is nothing to gain from
knowing how to rsync.
Tim
On Feb 16, 2010, at 9:56 PM, David Nutter
<[log in to unmask]> wrote:
> Tim, while you're still on, are there instructions for rsyncing the
> JAC's Starlink? I can't see anything on the JAC web pages (most links
> seem to be broken... server issues?)
>
> Cheers
> Dave
>
>
> On Wed, Feb 17, 2010 at 7:48 AM, David Nutter
> <[log in to unmask]> wrote:
>> Hi Tim.
>>
>> I'll have a go at rsyncing the JAC's Starlink, though I should get my
>> MSBs remade first before breaking anything else!
>>
>> Results of the bt command follow.
>>
>> Cheers
>> Dave
>>
>> daedalus spxdjn 215: gdb $SMURF_DIR/sc2clean
>> GNU gdb Red Hat Linux (6.5-37.el5_2.2rh)
>> Copyright (C) 2006 Free Software Foundation, Inc.
>> GDB is free software, covered by the GNU General Public License, and you are
>> welcome to change it and/or distribute copies of it under certain conditions.
>> Type "show copying" to see the conditions.
>> There is absolutely no warranty for GDB. Type "show warranty" for details.
>> This GDB was configured as "x86_64-redhat-linux-gnu"...
>> Using host libthread_db library "/lib64/libthread_db.so.1".
>>
>> (gdb) run concat.sdf clean order=0 dcbox=50
>> Starting program: /home/soft/star-hawaiki/star64/bin/smurf/sc2clean concat.sdf clean order=0 dcbox=50
>> [Thread debugging using libthread_db enabled]
>> [New Thread 46917776365200 (LWP 17836)]
>> [New Thread 1115605312 (LWP 17914)]
>> [New Thread 1126095168 (LWP 17915)]
>> [Detaching after fork from child process 17916. (Try `set detach-on-fork off'.)]
>> Processing data from instrument 'SCUBA-2' for object 'URANUS' from the
>> following observation :
>> 20091214 #15 scan
>>
>> [Detaching after fork from child process 17917.]
>>
>> Program received signal SIGSEGV, Segmentation fault.
>> [Switching to Thread 46917776365200 (LWP 17836)]
>> smf_begin_job_context (workforce=0x0, status=0x7fff7326df1c)
>> at smf_threads.c:702
>> 702 smf_threads.c: No such file or directory.
>> in smf_threads.c
>>
>>
>>
>> (gdb) bt
>> #0 smf_begin_job_context (workforce=0x0, status=0x7fff7326df1c)
>> at smf_threads.c:702
>> #1  0x000000000046c53f in smf_correct_steps (wf=0x0, data=0x2c618f8,
>>     quality=0x0, dcthresh=150, dcthresh2=10, dcbox=50, dcflag=0,
>>     nsteps=0x0, status=0x7fff7326df1c) at smf_correct_steps.c:313
>> #2 0x00000000004339c6 in smurf_sc2clean (status=0x7fff7326df1c)
>> at smurf_sc2clean.c:317
>> #3  0x000000000040cd6c in smurf_mon (status=0x7fff7326df1c)
>>     at smurf_mon.c:283
>> #4 0x000000000040cf48 in dtask_wrap_ (fstatus=0x7fff7326e298)
>> at dtask_wrap.c:8
>> #5  0x000000000040c697 in dtask_applic_ (context=<value optimized out>,
>>     actcode=<value optimized out>, aname=<value optimized out>,
>>     actptr=<value optimized out>, seq=0x7fff7326dff0,
>>     value=0x7fff7326e040 "concat.sdf clean order=0 dcbox=50", ' ' <repeats 167 times>...,
>>     schedtime=0x7fff7326dff4, request=0x7fff7326dfec,
>>     status=0x7fff7326e298, aname.len=0, value.len=444) at dtask_applic.f:71
>> #6  0x00002aabe2656bb0 in dtask_obeydcl_ (
>>     dtask_applic=0x40c5e0 <dtask_applic_>, name=<value optimized out>,
>>     value=0x7fff7326e040 "concat.sdf clean order=0 dcbox=50", ' ' <repeats 167 times>...,
>>     status=0x7fff7326e298, name.len=<value optimized out>, value.len=444)
>>     at dts_obeydcl.f:160
>> #7  0x00002aabe26553c8 in dtask_dcltask_ (devinit=0x40c5d0 <devinit_>,
>>     dtask_applic=0x40c5e0 <dtask_applic_>, status=0x7fff7326e298)
>>     at dts_dcltask.f:153
>> #8 0x000000000040c5c1 in MAIN_ () at dtask_main.f:140
>> #9 0x00000000005ab41e in main ()
>> (gdb)
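
[The backtrace above pins the crash down: smf_correct_steps is entered
with wf=0x0 and passes that straight to smf_begin_job_context, which
then dereferences the NULL workforce pointer. A minimal C sketch of the
kind of NULL guard that would report an error status instead of
segfaulting — the type and status names here are illustrative
stand-ins, not the real smf API, and this is not necessarily the fix
that went into SMURF:]

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for Starlink-style inherited status values
   and the workforce struct -- NOT the real SMURF/smf definitions. */
#define STATUS_OK  0
#define STATUS_ERR 1

typedef struct {
    int active_jobs;
} Workforce;

/* Sketch of a defensive check: if the caller hands in a NULL workforce
   (the wf=0x0 case in the backtrace), set an error status and return
   instead of dereferencing the pointer. */
static void begin_job_context(Workforce *wf, int *status) {
    if (*status != STATUS_OK) return;  /* inherited bad status: do nothing */
    if (wf == NULL) {                  /* the crash case from the backtrace */
        *status = STATUS_ERR;          /* report the error, don't crash */
        return;
    }
    wf->active_jobs++;                 /* normal path */
}
```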
>>
>> On Wed, Feb 17, 2010 at 6:52 AM, Tim Jenness
>> <[log in to unmask]> wrote:
>>> The good news is that the current version of SMURF does not have a
>>> problem. The bad news is that I'm not sure which patch fixed it. The
>>> even worse news is that SMURF has changed a lot since hawaiki was
>>> released, and so real shared-risk data may not reduce with the hawaiki
>>> version. Can you rsync the live JAC version? (Obviously not tonight,
>>> since the JAC servers are all off-line.)
>>>
>>> Can you tell me where the crash is happening?
>>>
>>> % gdb $SMURF_DIR/sc2clean
>>> gdb> run concat.sdf clean order=0 dcbox=50
>>> gdb> bt
>>>
>>> and send me the output of the "bt" command.
>>>
>>> Tim
>>>
>>> On Mon, Feb 15, 2010 at 1:58 AM, David Nutter <[log in to unmask]> wrote:
>>>>
>>>> Hi.
>>>>
>>>> I'm getting a seg fault with sc2clean whilst running through the
>>>> cookbook commands (with the sample data from the JAC). This error
>>>> appears on both 64- and 32-bit Linux running Hawaiki. The command
>>>> works until dcbox=50 is added. The input file is produced by
>>>> concatenating s4a20091214_00015_0002.sdf and s4a20091214_00015_0003.sdf.
>>>>
>>>> daedalus spxdjn 235: sc2clean concat.sdf clean order=0
>>>> Processing data from instrument 'SCUBA-2' for object 'URANUS' from the
>>>> following observation :
>>>> 20091214 #15 scan
>>>>
>>>> daedalus spxdjn 236: sc2clean concat.sdf clean order=0 dcbox=50
>>>> Processing data from instrument 'SCUBA-2' for object 'URANUS' from the
>>>> following observation :
>>>> 20091214 #15 scan
>>>>
>>>> Segmentation fault
>>>>
>>>> daedalus spxdjn 237: sc2clean concat.sdf clean order=0 dcbox=50 dcthresh=200 dcflagall fillgaps
>>>> Processing data from instrument 'SCUBA-2' for object 'URANUS' from the
>>>> following observation :
>>>> 20091214 #15 scan
>>>>
>>>> Segmentation fault
>>>>
>>>>
>>>> Cheers
>>>> Dave
>>>
>>>
>>