Using TeraGrid Build and Test
From TeraGrid Wiki
Preparing RP resources
To use Build & Test on an RP resource, both the resource's administrators and the person building software need to prepare the resource.
RP administrator preparations
RP administrators need to:
- upgrade the resource to SoftEnv 1.6.2 and make it the default CTSS 3 version (Upgrade Instructions), and
- install Condor 6.9.1 or 6.9.2 on Solaris (Install Instructions).
RP administrator deployment status is tracked on the CTSS 4 Build and Test Readiness page.
Software builder preparations
The tg-build tool needs to be installed on each TG resource. Initially we're going to use a version installed on each TG resource in JP's home directory under tg-build/.
If you need to install your own version, use these instructions:

mkdir tg-build
cd tg-build
soft add +pacman
pacman -install TeraGrid/ctss4/tg-build:tg-build.pacman
vi etc/tg-build.conf
# make sure GRID_SECURITY_DIR points to a location that has TG accepted CAs
# make sure other configuration parameters are correct
Launching build services on RP resources
By default, the interactive launch method described below should be used to launch build services on each RP resource.
First, login to a machine where you have your private x509 certificate and create a proxy.
Next, from that machine gsissh to the machine where you want to launch an interactive build service. This will propagate your proxy.
Finally, launch the build service:
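The exact commands for the interactive method are not spelled out here, so the following is only a sketch of the three steps in this page's transcript style. grid-proxy-init and gsissh are standard Globus client tools; the final launch command is a placeholder, since this page does not give the tg-build invocation for an interactive build service.

```
mylaptop:~> grid-proxy-init                   # create a proxy from your private x509 certificate
mylaptop:~> gsissh tg-login.uc.teragrid.org   # your proxy is propagated to the remote machine
tg-login1:~> tg-build/bin/<launch-command>    # placeholder; consult your tg-build installation
```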
To remotely launch a build service using Gram you need to be on a machine with Globus client tools, the tg-build software, and a valid proxy.
To launch the build service:
tg-build/bin/add-build-host -on dtf.sdsc.teragrid.org -m 120
This submits a 120 minute job to launch a build service on the dtf.sdsc.teragrid.org resource. Once the job starts it should join the build pool within 5 minutes. You can monitor which nodes are part of the build pool at http://build.teragrid.org/nmi/?page=pool/index.
The "-on" attribute can be any fully qualified tgwhereami value, or a unique resource name, like: lonestar, bigred, dtf.ncsa, etc.
For more information use "tg-build/bin/add-build-host -help".
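Each add-build-host invocation targets a single resource, so bringing several resources into the pool means one call per resource. The loop below is a hypothetical convenience wrapper, not part of tg-build: the resource names are examples from this page, and the command is stubbed with echo by default so the loop can be previewed without submitting real jobs.

```shell
#!/bin/sh
# Hypothetical wrapper: request a 120-minute build service on each listed
# resource. ADD_BUILD_HOST defaults to "echo ..." for a safe dry run;
# point it at tg-build/bin/add-build-host to actually submit jobs.
ADD_BUILD_HOST="${ADD_BUILD_HOST:-echo tg-build/bin/add-build-host}"

launched=""
for host in dtf.sdsc.teragrid.org lonestar bigred; do
    $ADD_BUILD_HOST -on "$host" -m 120
    launched="$launched $host"
done
```

Setting ADD_BUILD_HOST to the real binary turns the dry run into actual submissions, one per listed resource.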
Submitting Software Builds
Login to tg-login.uc.teragrid.org and set up access to the TeraGrid's CVS.

tg-login1:~> export CVS_RSH=ssh
tg-login1:~> export CVSROOT=":ext:<login>@repo.teragrid.org:/var/lib/cvs"
Check out the sample build glue.

tg-login1:~> mkdir build-test
tg-login1:~> cvs co -d build-test gig-si/software/build/test
tg-login1:~> cd build-test
tg-login1:~/build-test> ls
CVS  cmdfile  glue.in  remote_task.pl  source.in  tgdocs.in
Customize the cmdfile:
- set platforms to the NMI platform where you want to run
- set notify to your e-mail
- set, or remove, prereqs
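For orientation, a cmdfile edited per the steps above might look roughly like the following. All values here are placeholders, not a working configuration; the comments inside the checked-out cmdfile are authoritative for the exact key names your NMI installation expects.

```
# Hypothetical cmdfile fragment -- illustrative values only
platforms = x86_rh_9              # NMI platform(s) where the build should run
notify = builder@example.edu      # e-mail address for build results
prereqs = gcc-3.4.4               # prerequisite software, or remove this line
```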
Get to the tools and submit the build:
tg-login1:~/build-test> soft add @build
tg-login1:~/build-test> nmi_submit cmdfile
[lots of output]
gid = bacon_tg-t007_1166454778_8358
runid = 26
The cmdfile is commented, and can be used as a template for creating other scripts. You can see the output of the jobs you submit by going to http://build.teragrid.org/nmi/index.php. Click on "Run Results Overview". Your job will be listed under the runid that was output by the nmi_submit command.
If you have any questions about customizing these scripts send e-mail to gig-pack. There are also tutorials at http://nmi.cs.wisc.edu/tutorials/user, as well as a reference manual at http://nmi.cs.wisc.edu/node/65.
RP resource testing
During the week of March 19, the gig-pack team began testing the Build & Test system on RP resources.
The following table shows the testing status of each resource.
| Person | Resource | Successful Test Build Date | Notes |
| --- | --- | --- | --- |
| Charles | copper.ncsa.teragrid.org | | Missing SoftEnv 1.6.2 |
| Charles | tungsten.ncsa.teragrid.org | | Missing SoftEnv 1.6.2 |
| Charles | rachel.psc.teragrid.org | 3/29/2007 (JP) | Login and use "-pbs" |
| Jason | cobalt.ncsa.teragrid.org | | Missing SoftEnv 1.6.2 |