LHCComputingGrid
Getting access to the LHC Computing Grid. Brief introductory material is available at RMKI's getting started page, which explains how to get access to LCG; simple examples can also be found on that page.
 
Some more practical information on running a typical job:

  • Get authenticated on the LCG:

> grid-proxy-init

Here, you will be prompted for your grid password. Or:

> grid-proxy-init -valid 4:00

This is the same, but the authentication will only be valid for 4 hours.

Now you can perform various operations on the grid, e.g. submit jobs. You can get information on your authentication (e.g. its expiration time) with grid-proxy-info, and you can destroy your authentication proxy with grid-proxy-destroy.
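
As a rough sketch, a typical interactive session could therefore look like the following (the grid-proxy-* commands are the standard Globus ones; the 12-hour lifetime is just an arbitrary example value):

> grid-proxy-init -valid 12:00    # create a proxy valid for 12 hours
> grid-proxy-info                 # check the subject and remaining lifetime of the proxy
> grid-proxy-destroy              # remove the proxy when you are done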

  • Get your jobs authenticated on the LCG:

> myproxy-init

Here, you will be prompted for your grid password and asked to specify a password attached to the so-called job proxy that is about to be created. Or:

> myproxy-init -n

Here, you won't be asked to specify a password for your proxy.

Running myproxy-init is necessary when you are running long-term jobs. In this case, myproxy-init ensures that your jobs will still have valid authentication even after your interactive proxy (obtained by grid-proxy-init) has expired. You can get information on your job proxy with myproxy-info, and you can destroy it with myproxy-destroy. Note: if you don't use this, you may not be able to retrieve the outputs of long-term jobs!
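
As a sketch, a typical sequence before submitting long jobs might be the following (the -c and -t options set the stored credential and retrieved proxy lifetimes in hours in standard MyProxy clients; the values 168 and 12 are only example values, and option names may vary slightly between versions, so check myproxy-init -help on your UI):

> grid-proxy-init -valid 12:00    # interactive proxy for submitting the jobs
> myproxy-init -n -c 168 -t 12    # store a credential for one week on the MyProxy server
> myproxy-info                    # verify that the credential has been stored

The stored credential is then used to renew your jobs' proxies while they wait in queues or run, which is what keeps long-term jobs authenticated.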

  • Running a job:

A "Hello World!" example can be found at RMKI's getting started page. Instead of a "Hello World!" example, we present here a framework, with which one can send jobs in mass to the LCG: see the attached tarball called 'submit.tar'. You can adjust these shell script wrappers to your needs. This assumes that your jobs are placed in a directory scheme like in the attached example 'skeleton.tar'. This latter is a framework for simple jobs, which are ran on a simple computers (contains an automated Makefile and an automated starter shell script).

  • Specifying system requirements:

It is a common task to require a certain software environment (e.g. AFS) on the worker nodes where your jobs will be executed. You can specify such requirements by placing lines like the following in your .jdl file:

Requirements = (Member("AFS", other.GlueHostApplicationSoftwareRunTimeEnvironment));

One can combine such conditions with logical operators, for example:

Requirements = (Member("AFS", other.GlueHostApplicationSoftwareRunTimeEnvironment) && Member("VO-cms-ORCA_8_13_1", other.GlueHostApplicationSoftwareRunTimeEnvironment));

-- AndrasLaszlo - 03 Mar 2006

 