TeraGrid Compute Resources

From TeraGrid Wiki



TeraGrid Resource Providers

Registering a new allocable TeraGrid Compute Resource

Step 1: The RP must get the resource into POPS. To do so, you need a POPS display name and a few "attributes," which in POPS means a short set of questions and answers. The easiest way to understand the "attributes" is to go to POPS, begin entering a DAC request, and look at the Resource Request section of the submission forms. Send this information to allocations@teragrid.org. At this point, we don't even need to know how many SUs you plan to make available, just that the resource will be available. As for the deadline: be sure to notify the allocations team a week or two before POPS begins accepting submissions for the next xRAC meeting.

Step 2: The RP needs to promote the resource's availability so that potential requestors know to ask for time. There is nothing formal about this step, but RPs are advised to do it well in advance of the proposal submission period for the xRAC meeting.

Step 3: Approximately 1 month before the xRAC meeting, all RPs need to specify the number of SUs available for allocation at the upcoming meeting. This applies to all resources, not just new ones.

Step 4: The NU/TG SU conversion factor should be established before the resource's first xRAC meeting. There is a formula for doing so, which I'll look up and try to post; in general, the RP needs to provide the Rmax value from the HPL benchmark and the number of processors used for that benchmark run. If HPL can't be run, we have used the conversion factor from a comparable architecture, scaled to the number of processors and processor speeds.
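To illustrate the general idea behind Step 4, here is a minimal sketch of a conversion factor computed from per-processor HPL Rmax relative to a reference system. This is not the official TeraGrid formula (which, as noted above, is to be posted); the function name, reference values, and the simple per-processor ratio are all assumptions for illustration only.

```python
# Hypothetical sketch: scale the SU conversion factor by the resource's
# per-processor HPL Rmax relative to a reference system's per-processor
# Rmax. Not the official TeraGrid formula.

def su_conversion_factor(rmax_gflops, n_procs,
                         ref_rmax_gflops, ref_procs):
    """Per-processor Rmax of the new resource divided by the
    per-processor Rmax of the reference resource (illustrative)."""
    per_proc = rmax_gflops / n_procs
    ref_per_proc = ref_rmax_gflops / ref_procs
    return per_proc / ref_per_proc

# Example: processors twice as fast as the reference system's
# yield a conversion factor of 2.0.
print(su_conversion_factor(2000.0, 1000, 1000.0, 1000))  # prints 2.0
```

This also shows why both Rmax and the processor count from the same benchmark run are needed: the ratio only makes sense on a per-processor basis.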

Step 5: Finally, the RP needs to register their system in TGCDB so that allocations can be made. TeraGrid also presumes the RP will have AMIE up and running so the resource is plugged into the accounting system. Some preliminary documentation is available for this step (contact David Hart); it involves preparing a short text file. This needs to be done several weeks before the allocation award period, so that allocations and user accounts can be created and the TGCDB is prepared to accept jobs for that resource. Here is a .doc file describing how to register a resource in TGCDB.

Decommissioning an allocable TeraGrid Resource

Step 1: Make a plan for dealing with existing allocations and users. Generally, the RP is responsible, but GIG can provide support. This may be as simple as contacting users and instructing them to transfer SUs to other resources. For heavily used resources, an RP may want to automatically transfer users to new allocations -- the RP should work with the Core Services staff on this. Where decommissioning dates are known well in advance, the RP can begin "ramping down" its available SUs so that most users will not have to transfer.
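The "ramping down" mentioned above can be pictured as a simple declining schedule of advertised SUs per allocation cycle. The linear schedule below is purely illustrative; it is not a TeraGrid policy, and the function name and numbers are hypothetical.

```python
# Illustrative only: a linear ramp-down of the SUs advertised per
# allocation cycle ahead of a known decommissioning date, so fewer
# users hold active allocations when the resource retires.

def ramp_down_schedule(current_sus, cycles_remaining):
    """Return the SUs to advertise in each remaining allocation
    cycle, decreasing linearly toward zero at decommissioning."""
    step = current_sus / cycles_remaining
    return [round(current_sus - step * i) for i in range(cycles_remaining)]

# Example: 1,000,000 SUs with four xRAC cycles left before retirement.
print(ramp_down_schedule(1_000_000, 4))
# prints [1000000, 750000, 500000, 250000]
```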

Step 2: The RP should update the end-of-service date and status in the Resource Catalog as soon as such details are known.

Step 3: The RP needs to determine when to de-list the resource from POPS. Contact Kent Milfeld (milfeld@tacc.utexas.edu) and pops-devel@teragrid.org about de-listing. Note that a resource can, if desired, be de-listed separately from Research, Startup, and/or Education submissions. De-listing for TRAC happens ~2-3 months prior to a meeting, so get such changes in early.

Step 4: Update the end dates in TGCDB. Submit a ticket to the TGCDB group (Michael Shapiro). RPs can provide two dates. Required: the end-of-service date for the resource; no new allocations will be initiated after this date (though late usage can still be sent to TGCDB). Optional: a separate end date for any 'grid resources' of which the resource is a part, usually TG Roaming, but also, e.g., Abe/QueenBee.

Step 5: Contact the TGUP team to have the resource removed from the portal. Some elements are handled automatically (e.g., the add user form), but other parts (e.g., the system monitor) must be updated manually.

System Administrators for resources

GIG SI Packaging Contacts for Resources

Compute Resource Management Software

CTSS 4 and Information Services Deactivation

Framework for Cooperative Sharing of Computational and Storage Resources
