Hi Greig,
> If I can't get access to the admin shell then can you login into it and
> cd to the PoolManager cell. Then run the following commands and send the
> output back to me:
> > psu ls pools
> > psu ls pgroups
> > psu ls links
> > psu ls units
> > psu ls ugroups
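For anyone following along: the quoted `psu ls` commands are issued inside the dCache admin shell (classically reached over ssh on port 22223; the hostname, port, and cipher below are assumptions about this site's setup, not confirmed details). A minimal sketch that just emits the command list, which could then be piped into an admin session:

```shell
# Emit the PoolManager "psu ls" commands from the request above.
# The object names (pool, pgroup, link, unit, ugroup) are the
# singular forms that the admin shell accepted in the session below.
for obj in pool pgroup link unit ugroup; do
    printf 'psu ls %s\n' "$obj"
done
# One (assumed) way to feed this to a dCache 1.x admin interface:
#   sh psu-ls.sh | ssh -c blowfish -p 22223 -l admin hepgrid5.ph.liv.ac.uk
```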
(PoolManager) admin > psu ls pool
psu ls pool
hepgrid6_5
ihepgrid880_1
hepgrid6_1
hepgrid6_3
hepgrid6_2
hepgrid5_1
hepgrid6_6
hepgrid6_4
(PoolManager) admin > psu ls pgroup
psu ls pgroup
lhcb
ResilientPools
ops
dzero
babar
cdf
default
dteam
atlas
(PoolManager) admin > psu ls link
psu ls link
dzero-link
lhcb-link
default-link
ops-link
cdf-link
babar-link
atlas-link
dteam-link
(PoolManager) admin > psu ls unit
psu ls unit
atlas:GENERATED@osm
cdf:STATIC@osm
babar:GENERATED@osm
*@*
atlas:STATIC@osm
ops:STATIC@osm
0.0.0.0/255.255.255.255
dteam:STATIC@osm
0.0.0.0/0.0.0.0
lhcb:GENERATED@osm
babar:STATIC@osm
dteam:GENERATED@osm
ops:GENERATED@osm
*/*
dzero:STATIC@osm
cdf:GENERATED@osm
lhcb:STATIC@osm
dzero:GENERATED@osm
(PoolManager) admin > psu ls ugroup
psu ls ugroup
cdf-groups
ops-groups
dzero-groups
babar-groups
atlas-groups
any-store
dteam-groups
world-net
lhcb-groups
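Re Greig's earlier question about pools for the ops VO: the listings above show that an `ops` pgroup, an `ops-link`, and `ops:STATIC@osm`/`ops:GENERATED@osm` store units already exist. For reference, a hedged sketch of how such a selection unit is typically wired together in PoolManager.conf; the pool name and preference values here are illustrative, not taken from this site's actual configuration:

```
psu create pgroup ops
psu addto pgroup ops hepgrid6_1
psu create unit -store ops:STATIC@osm
psu create unit -store ops:GENERATED@osm
psu create ugroup ops-groups
psu addto ugroup ops-groups ops:STATIC@osm
psu addto ugroup ops-groups ops:GENERATED@osm
psu create link ops-link ops-groups world-net
psu set link ops-link -readpref=10 -writepref=10 -cachepref=10
psu add link ops-link ops
```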
> Another thing we could try would be for you to change the gridmap-file
> on the SRM node and each gridftp door node to map my DN to the ops VO.
> That way I will be able to test if there are any differences between the
> ops and dteam configurations.
It seems the SAM tests now work automagically; interesting.
Thanks
Cheers
Paul
>
> Greig
>
> Paul Trepka wrote:
> > Yes, pools have been assigned for it (ops) as well.
> >
> > Opening the port is too complicated,
> > and unfortunately there is no alternative (see the UOL Security Policy), Greig.
> >
> > Cheers
> > Paul
> >
> > On Thu, 1 Feb 2007, Greig Alan Cowan wrote:
> >
> >> Paul,
> >>
> >> I've run some tests from our UI and everything is OK for writing into
> >> and reading from the /pnfs/ph.liv.ac.uk/data/dteam area of your dCache.
> >>
> >> Have you configured any pools for the ops VO to use?
> >>
> >> If you could open up port 2288 in your site firewall then I could have a
> >> look at your dCache webpage:
> >>
> >> http://hepgrid5.ph.liv.ac.uk:2288/
> >>
> >> This would provide a lot of useful information regarding your dCache
> >> setup. Alternatively, you could give me the password for the dCache ssh
> >> admin shell. I would be able to use this to gather information about
> >> your system.
> >>
> >> BTW: I've cc'd this to the storage list.
> >>
> >> Cheers,
> >> Greig
> >>
> >>
> >>
> >> Paul Trepka wrote:
> >>> Hi
> >>>
> >>> The SAM SRM test is failing for us, apparently due to a timeout (network?).
> >>>
> >>>
> >>> A local network test shows there is a clean path from the CE/WN/UI to
> >>> our SRM, and the results from the ATLAS test page show that our SRM
> >>> operations work for them
> >>> (http://hepwww.ph.qmul.ac.uk/~lloyd/atlas/atest.php).
> >>>
> >>> I am quite far from understanding the particulars of the SAM test
> >>> failure (it is a mystery to me right now :-)
> >>>
> >>>
> >>> Please find details from my own tests below (srmcp and lcg-cr),
> >>> compared against yesterday's and today's results from both SAM and ATLAS UK.
> >>>
> >>>
> >>> Thanks
> >>>
> >>> Cheers
> >>> Paul
> >>>
> >>> Storage Resource Manager (SRM) CP Client version 1.23
> >>> Copyright (c) 2002-2006 Fermi National Accelerator Laboratory
> >>>
> >>> SRM Configuration:
> >>> debug=true
> >>> gsissl=true
> >>> help=false
> >>> pushmode=false
> >>> userproxy=true
> >>> buffer_size=131072
> >>> tcp_buffer_size=0
> >>> streams_num=10
> >>> config_file=/user2/pat/.srmconfig/config.xml
> >>> glue_mapfile=/opt/d-cache/srm/conf/SRMServerV1.map
> >>> webservice_path=srm/managerv1
> >>> webservice_protocol=https
> >>> gsiftpclinet=globus-url-copy
> >>> protocols_list=http,gsiftp
> >>> save_config_file=null
> >>> srmcphome=/opt/d-cache/srm
> >>> urlcopy=sbin/urlcopy.sh
> >>> x509_user_cert=/user2/pat/.globus/usercert.pem
> >>> x509_user_key=/user2/pat/.globus/userkey.pem
> >>> x509_user_proxy=/tmp/x509up_u385
> >>> x509_user_trusted_certificates=/etc/grid-security/certificates
> >>> globus_tcp_port_range=null
> >>> gss_expected_name=null
> >>> storagetype=permanent
> >>> retry_num=20
> >>> retry_timeout=10000
> >>> wsdl_url=null
> >>> use_urlcopy_script=false
> >>> connect_to_wsdl=false
> >>> delegate=true
> >>> full_delegation=true
> >>> server_mode=passive
> >>> srm_protocol_version=1
> >>> request_lifetime=86400
> >>> from[0]=file://///user2/pat/analisis.sh
> >>> to=srm://hepgrid5.ph.liv.ac.uk:8443/pnfs/ph.liv.ac.uk/data/dteam/generated/2007-01-31/analisis_3.sh
> >>>
> >>>
> >>> Wed Jan 31 18:18:56 GMT 2007: starting SRMPutClient
> >>> Wed Jan 31 18:18:56 GMT 2007: In SRMClient ExpectedName: host
> >>> Wed Jan 31 18:18:56 GMT 2007: SRMClient(https,srm/managerv1,true)
> >>> SRMClientV1 : user credentials are:
> >>> /C=UK/O=eScience/OU=Liverpool/L=CSD/CN=pawel trepka
> >>> SRMClientV1 : SRMClientV1 calling
> >>> org.globus.axis.util.Util.registerTransport() SRMClientV1 : connecting
> >>> to srm at httpg://hepgrid5.ph.liv.ac.uk:8443/srm/managerv1
> >>> Wed Jan 31 18:18:58 GMT 2007: connected to server, obtaining proxy
> >>> Wed Jan 31 18:18:58 GMT 2007: got proxy of type class
> >>> org.dcache.srm.client.SRMClientV1
> >>> Wed Jan 31 18:18:58 GMT 2007: source file#0 : //user2/pat/analisis.sh
> >>> SRMClientV1 : put,
> >>> sources[0]="srm://hepgrid5.ph.liv.ac.uk:8443/pnfs/ph.liv.ac.uk/data/dteam/generated/2007-01-31/analisis_3.sh"
> >>>
> >>> SRMClientV1 : put,
> >>> dests[0]="srm://hepgrid5.ph.liv.ac.uk:8443/pnfs/ph.liv.ac.uk/data/dteam/generated/2007-01-31/analisis_3.sh"
> >>>
> >>> SRMClientV1 : put, protocols[0]="http"
> >>> SRMClientV1 : put, protocols[1]="dcap"
> >>> SRMClientV1 : put, protocols[2]="gsiftp"
> >>> SRMClientV1 : put, contacting service
> >>> httpg://hepgrid5.ph.liv.ac.uk:8443/srm/managerv1
> >>> copy_jobs is empty
> >>> Wed Jan 31 18:19:01 GMT 2007: srm returned requestId = -2146998317
> >>> Wed Jan 31 18:19:01 GMT 2007: sleeping 4 seconds ...
> >>> Wed Jan 31 18:19:05 GMT 2007: FileRequestStatus with
> >>> SURL=srm://hepgrid5.ph.liv.ac.uk:8443/pnfs/ph.liv.ac.uk/data/dteam/generated/2007-01-31/analisis_3.sh
> >>> is Ready
> >>> Wed Jan 31 18:19:05 GMT 2007: received
> >>> TURL=gsiftp://hepgrid5.ph.liv.ac.uk:2811//pnfs/ph.liv.ac.uk/data/dteam/generated/2007-01-31/analisis_3.sh
> >>>
> >>> copy_jobs is not empty
> >>> copying CopyJob, source = file://///user2/pat/analisis.sh destination =
> >>> gsiftp://hepgrid5.ph.liv.ac.uk:2811//pnfs/ph.liv.ac.uk/data/dteam/generated/2007-01-31/analisis_3.sh
> >>>
> >>> GridftpClient: memory buffer size is set to 131072
> >>> GridftpClient: connecting to hepgrid5.ph.liv.ac.uk on port 2811
> >>> GridftpClient: gridFTPClient tcp buffer size is set to 0
> >>> GridftpClient: gridFTPWrite started, source file is
> >>> java.io.RandomAccessFile@1989b5 destination path is
> >>> /pnfs/ph.liv.ac.uk/data/dteam/generated/2007-01-31/analisis_3.sh
> >>> GridftpClient: gridFTPWrite started, destination path is
> >>> /pnfs/ph.liv.ac.uk/data/dteam/generated/2007-01-31/analisis_3.sh
> >>> GridftpClient: set local data channel authentication mode to None
> >>> GridftpClient: parallelism: 10
> >>> GridftpClient: adler 32 for file java.io.RandomAccessFile@1989b5 is
> >>> 1851898276
> >>> GridftpClient: waiting for completion of transfer
> >>> GridftpClient: gridFtpWrite: starting the transfer in emode to
> >>> /pnfs/ph.liv.ac.uk/data/dteam/generated/2007-01-31/analisis_3.sh
> >>> GridftpClient: DiskDataSink.close() called
> >>> GridftpClient: gridFTPWrite() wrote 684bytes
> >>> GridftpClient: closing client :
> >>> org.dcache.srm.util.GridftpClient$FnalGridFTPClient@2b7db1
> >>> GridftpClient: closed client
> >>> execution of CopyJob, source = file://///user2/pat/analisis.sh
> >>> destination =
> >>> gsiftp://hepgrid5.ph.liv.ac.uk:2811//pnfs/ph.liv.ac.uk/data/dteam/generated/2007-01-31/analisis_3.sh
> >>> completed
> >>> setting file request -2146997317 status to Done
> >>> copy_jobs is empty
> >>> stopping copier
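A side note on the transcript above: the client logs an Adler-32 checksum (1851898276) for the file before the transfer. If you want to verify a local copy independently, the same rolling checksum is available from Python's standard zlib module; the file-reading helper below is a sketch, not part of any dCache tooling:

```python
import zlib

def adler32_of_file(path, chunk_size=65536):
    """Compute the Adler-32 checksum of a file, as logged by GridftpClient."""
    checksum = zlib.adler32(b"")  # Adler-32 starts from the seed value 1
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            checksum = zlib.adler32(chunk, checksum)
    return checksum & 0xFFFFFFFF  # force an unsigned 32-bit value

# Example with an in-memory buffer instead of a file:
print(zlib.adler32(b"hello") & 0xFFFFFFFF)  # -> 103547413
```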
> >>> Storage Resource Manager (SRM) CP Client version 1.23
> >>> Copyright (c) 2002-2006 Fermi National Accelerator Laboratory
> >>>
> >>> SRM Configuration:
> >>> debug=true
> >>> gsissl=true
> >>> help=false
> >>> pushmode=false
> >>> userproxy=true
> >>> buffer_size=131072
> >>> tcp_buffer_size=0
> >>> streams_num=10
> >>> config_file=/user2/pat/.srmconfig/config.xml
> >>> glue_mapfile=/opt/d-cache/srm/conf/SRMServerV1.map
> >>> webservice_path=srm/managerv1
> >>> webservice_protocol=https
> >>> gsiftpclinet=globus-url-copy
> >>> protocols_list=http,gsiftp
> >>> save_config_file=null
> >>> srmcphome=/opt/d-cache/srm
> >>> urlcopy=sbin/urlcopy.sh
> >>> x509_user_cert=/user2/pat/.globus/usercert.pem
> >>> x509_user_key=/user2/pat/.globus/userkey.pem
> >>> x509_user_proxy=/tmp/x509up_u385
> >>> x509_user_trusted_certificates=/etc/grid-security/certificates
> >>> globus_tcp_port_range=null
> >>> gss_expected_name=null
> >>> storagetype=permanent
> >>> retry_num=20
> >>> retry_timeout=10000
> >>> wsdl_url=null
> >>> use_urlcopy_script=false
> >>> connect_to_wsdl=false
> >>> delegate=true
> >>> full_delegation=true
> >>> server_mode=passive
> >>> srm_protocol_version=1
> >>> request_lifetime=86400
> >>> from[0]=srm://hepgrid5.ph.liv.ac.uk:8443/pnfs/ph.liv.ac.uk/data/dteam/generated/2007-01-31/analisis_3.sh
> >>>
> >>> to=file://///user2/pat/analisis_3.sh
> >>>
> >>> Wed Jan 31 18:20:16 GMT 2007: starting SRMGetClient
> >>> Wed Jan 31 18:20:16 GMT 2007: In SRMClient ExpectedName: host
> >>> Wed Jan 31 18:20:16 GMT 2007: SRMClient(https,srm/managerv1,true)
> >>> SRMClientV1 : user credentials are:
> >>> /C=UK/O=eScience/OU=Liverpool/L=CSD/CN=pawel trepka
> >>> SRMClientV1 : SRMClientV1 calling
> >>> org.globus.axis.util.Util.registerTransport() SRMClientV1 : connecting
> >>> to srm at httpg://hepgrid5.ph.liv.ac.uk:8443/srm/managerv1
> >>> Wed Jan 31 18:20:17 GMT 2007: connected to server, obtaining proxy
> >>> Wed Jan 31 18:20:17 GMT 2007: got proxy of type class
> >>> org.dcache.srm.client.SRMClientV1
> >>> SRMClientV1 : get:
> >>> surls[0]="srm://hepgrid5.ph.liv.ac.uk:8443/pnfs/ph.liv.ac.uk/data/dteam/generated/2007-01-31/analisis_3.sh"
> >>>
> >>> copy_jobs is empty
> >>> SRMClientV1 : get: protocols[0]="http"
> >>> SRMClientV1 : get: protocols[1]="dcap"
> >>> SRMClientV1 : get: protocols[2]="gsiftp"
> >>> Wed Jan 31 18:20:20 GMT 2007: srm returned requestId = -2146998314
> >>> Wed Jan 31 18:20:20 GMT 2007: sleeping 4 seconds ...
> >>> Wed Jan 31 18:20:24 GMT 2007: FileRequestStatus with
> >>> SURL=srm://hepgrid5.ph.liv.ac.uk:8443/pnfs/ph.liv.ac.uk/data/dteam/generated/2007-01-31/analisis_3.sh
> >>> is Ready
> >>> Wed Jan 31 18:20:24 GMT 2007: received
> >>> TURL=gsiftp://hepgrid5.ph.liv.ac.uk:2811//pnfs/ph.liv.ac.uk/data/dteam/generated/2007-01-31/analisis_3.sh
> >>>
> >>> copy_jobs is not empty
> >>> Wed Jan 31 18:20:24 GMT 2007: fileIDs is empty, breaking the loop
> >>> copying CopyJob, source =
> >>> gsiftp://hepgrid5.ph.liv.ac.uk:2811//pnfs/ph.liv.ac.uk/data/dteam/generated/2007-01-31/analisis_3.sh
> >>> destination = file://///user2/pat/analisis_3.sh
> >>> GridftpClient: memory buffer size is set to 131072
> >>> GridftpClient: connecting to hepgrid5.ph.liv.ac.uk on port 2811
> >>> GridftpClient: gridFTPClient tcp buffer size is set to 0
> >>> GridftpClient: gridFTPRead started
> >>> GridftpClient: set local data channel authentication mode to None
> >>> GridftpClient: parallelism: 10
> >>> GridftpClient: waiting for completion of transfer
> >>> GridftpClient: gridFtpRead: starting the transfer in emode from
> >>> /pnfs/ph.liv.ac.uk/data/dteam/generated/2007-01-31/analisis_3.sh
> >>> GridftpClient: DiskDataSink.close() called
> >>> GridftpClient: gridFTPWrite() wrote 684bytes
> >>> GridftpClient: closing client :
> >>> org.dcache.srm.util.GridftpClient$FnalGridFTPClient@13f136e
> >>> GridftpClient: closed client
> >>> execution of CopyJob, source =
> >>> gsiftp://hepgrid5.ph.liv.ac.uk:2811//pnfs/ph.liv.ac.uk/data/dteam/generated/2007-01-31/analisis_3.sh
> >>> destination = file://///user2/pat/analisis_3.sh completed
> >>> setting file request -2146997314 status to Done
> >>> copy_jobs is empty
> >>> stopping copier
> >>> -rw 684 analisis.sh
> >>> -rw 684 analisis_3.sh
> >>>
> >>>
> >>> -bash-2.05b$ lcg-cr -v --vo dteam -d hepgrid5.ph.liv.ac.uk -l
> >>> lfn:/grid/dteam/analisis_5.sh file:///user2/pat/analisis.sh
> >>> Using grid catalog type: lfc
> >>> Using grid catalog : prod-lfc-shared-central.cern.ch
> >>> Source URL: file:///user2/pat/analisis.sh
> >>> File size: 684
> >>> VO name: dteam
> >>> Destination specified: hepgrid5.ph.liv.ac.uk
> >>> Destination URL for copy:
> >>> gsiftp://hepgrid5.ph.liv.ac.uk:2811//pnfs/ph.liv.ac.uk/data/dteam/generated/2007-02-01/file01f6d4d0-57c2-4da0-8111-0c4c26ab6df2
> >>>
> >>> # streams: 1
> >>> # set timeout to 0 seconds
> >>> Alias registered in Catalog: lfn:/grid/dteam/analisis_5.sh
> >>> 684 bytes 0.69 KB/sec avg 0.69 KB/sec inst
> >>> Transfer took 2050 ms
> >>> Destination URL registered in Catalog:
> >>> srm://hepgrid5.ph.liv.ac.uk/pnfs/ph.liv.ac.uk/data/dteam/generated/2007-02-01/file01f6d4d0-57c2-4da0-8111-0c4c26ab6df2
> >>>
> >>> guid:96c5ce35-53c9-4d85-9d77-23f6c254ef0d
> >>>
> >>> ---END---
> >
>
--
Dr. Paul A. Trepka ;Intl:+44(0)151 794 2137
Oliver Lodge Laboratory ;Fax: +44(0)151 794 3444
Dept. of Physics ;e-mail: [log in to unmask]
The University of Liverpool
Liverpool L69 7ZE
England, UK