News and Events
Empower Yourself and Your Students with Parallel Programming: An MPI Workshop Using Python | November 19th and 21st, 4:30-6:30 PM
Are you ready to supercharge your research with parallel programming? Join us for a comprehensive workshop on the Message Passing Interface (MPI), the industry standard for high-performance computing, using the popular Python programming language.
Why MPI?
- Efficiency: Maximize computational power and speed up your simulations.
- Scalability: Handle massive datasets and complex algorithms effortlessly.
- Flexibility: Integrate MPI into your existing Python workflows.
What You’ll Learn:
- Parallel Computing Fundamentals: Explore concepts like Amdahl’s Law, Gustafson’s Law, and performance metrics.
- MPI Basics: Master essential operations like send, receive, broadcast, scatter, and gather (a short sketch follows this list).
- Practical Applications: Implement parallel Python codes for real-world scenarios.
- Hands-On Experience: Learn by doing, with guided exercises and examples.
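To give a concrete feel for these operations, here is a minimal, illustrative sketch in Python. It assumes the mpi4py and NumPy packages are available (mpi4py is the standard MPI binding for Python; the exact packages used in the workshop may differ), and the file name mpi_sketch.py is hypothetical. Run it with, e.g., mpiexec -n 4 python mpi_sketch.py.

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Broadcast: rank 0 shares a Python object with every rank.
params = {"dt": 0.1} if rank == 0 else None
params = comm.bcast(params, root=0)

# Scatter: rank 0 splits an array into one equal chunk per rank.
data = np.arange(size * 4, dtype="d") if rank == 0 else None
chunk = np.empty(4, dtype="d")
comm.Scatter(data, chunk, root=0)

# Gather: collect each rank's partial sum back on rank 0.
partial = np.array([chunk.sum()])
totals = np.empty(size, dtype="d") if rank == 0 else None
comm.Gather(partial, totals, root=0)

# Send/receive: a simple point-to-point message from rank 0 to rank 1.
if rank == 0 and size > 1:
    comm.send("hello from rank 0", dest=1, tag=11)
elif rank == 1:
    print(comm.recv(source=0, tag=11))

if rank == 0:
    print("sum of all chunks:", totals.sum())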
Workshop Details:
- Dates: November 19th and 21st
- Location: MC 216
- Time: 4:30-6:30 PM
Don’t miss this opportunity to accelerate your research and improve the performance of your code.
Register now:
Elevate Your Research Computing Game: Join the Monthly Tech Talks Group Meeting | 1st Tuesday of Every Month Starting October 1st, 2-3 PM!
Are you a researcher, student, or simply curious about Research & High Performance Computing (HPC)? Join us for our monthly tech talks group meeting at Colorado School of Mines!
What to Expect:
- Informal Presentations: Learn about the latest HPC trends, tools, and techniques from experts in the field.
- Networking Opportunities: Connect with like-minded individuals and build valuable relationships within the HPC community.
- Hands-on Mini-Workshops: Participate in practical sessions to enhance your HPC skills and knowledge.
- Open Discussions: Share your ideas, challenges, and experiences in a collaborative environment, whether you want to present your HPC-driven research or discuss workflows that have benefited your work.
Who Should Attend:
- Researchers and scientists from various disciplines
- Students interested in using HPC for their research
- Faculty and staff at Colorado School of Mines
When and Where:
- Date: 1st Tuesday of Every Month, starting October 1st
- Time: 2-3 PM
- Location: MC 141
Join us for a stimulating and informative gathering!
Do you have any other questions? Feel free to reach out to rc@mines.edu
RC Presents: AI/ML Learning Series for Fall 2024
Do you have researchers interested in using AI/ML? The Mines Research Computing Group has partnered with Mark III Systems to offer a virtual AI/Machine Learning education series for faculty and students this fall. This bi-weekly program includes hands-on tutorials and rapid-fire lab sessions delivered through Jupyter Notebooks. Industry experts will share insights on the latest AI/ML trends. We invite you to join us for this valuable learning opportunity.
The series begins on September 10, 2024 from 12-1 pm on Zoom. Below is the full schedule:
- September 10, 12-1 pm: Introduction to Machine Learning and AI
- September 24, 12-1 pm: Intro to Deep Learning: An Introduction to Neural Networks
- October 8, 12-1 pm: Introduction to Datasets
- October 22, 12-1 pm: Introduction to Computer Vision
- November 5, 12-1 pm: Intro to Large Language Models (LLMs)
For more details and to register for the workshop series: https://trending.markiiisys.com/mines-nvidia-aiseries
We encourage you to have your students register for this series!
RC Presents: New HPC User Workshop for Fall 2024
The Research Computing Group at Mines is hosting an Intro to High Performance Computing (HPC) workshop for new and existing computational researchers on September 3-5, 2024 from 4:30-6:30 pm in MC 216. New & current Wendian and Mio users are both encouraged to register!
We have an exciting agenda planned:
- Introduction to Serial vs. Parallel Computing
- Introduction to “Pre-HPC” skills using Linux & Bash
- Introduction to HPC job scheduling using Slurm
- Unleashing the Power of Python and Jupyter Notebooks
- Managing Conda Environments
- Mastering Batch Job Submission Techniques
- How to Budget for an HPC Project
And that’s just the beginning! Join the Research Computing team for an immersive learning experience where you’ll gain valuable insights into the world of HPC. Whether you’re a seasoned researcher or just starting your journey, this workshop will be a great opportunity to expand your knowledge.
Register here: https://forms.office.com/Pages/ResponsePage.aspx?id=4AlymbMJI0aaTXavpEpnXESw8KjIsIhPnK1wnHQGsuZUME1YRzIwSTlKNDhYUEk2RUQyV0taMkFMNC4u
Wendian Update – April 17, 2023
Cyberinfrastructure & Advanced Research Computing (CIARC) Team Services
We are a team of research computing experts here to help you become a more effective and efficient researcher. Maybe you inherited your job script and never looked back, or you play it safe when requesting memory; we can review your job script with you and look for opportunities to request resources more efficiently. If you need to submit a large batch of jobs and are unsure how best to orchestrate the workflow, or you want to know how to get started with parallel computing in your research, we can consult with you and your research team to work out your needs. Perhaps you are applying for a grant and thinking about adding computational simulations; we can help with your proof of concept and provide guidance for your budget.
Check out our newly improved TDX portal, where you can now schedule a meeting with a CIARC team member for live support.
Checking HPC Usage
If you would like to check your usage for a given month, we have provided some convenient commands on Wendian.
To check usage as a user, use the command getUtilizationByUser:
janedoe@wendian002:[~]: getUtilizationByUser janedoe
Cluster/Account/User Utilization 2023-04-01T00:00:00 - 2023-04-12T11:59:59 (993600 secs)
"Account","User","Amount","Used"
"hpcgroup","janedoe - Jane Doe",$1.23,0
To check usage as a PI for all your users, use the command getUtilizationByPI:
pi@wendian002:[~]: getUtilizationByPI pi
Cluster/Account/User Utilization 2023-04-01T00:00:00 - 2023-04-12T11:59:59 (993600 secs)
"Account","User","Amount","Used"
"hpcgroup","janedoe - Jane Doe",$1.23,0
"hpcgroup","johnsmith - John Smith",$1000.00,0
Checking Job Efficiency
If you are interested in checking how efficiently your job ran after it finishes, we have installed a tool called reportseff that lets you quickly check the percent utilization of the CPU and memory you requested. Per the tool's documentation, you can pass a specific job ID (for example, reportseff <jobid>), or run reportseff inside a directory containing Slurm output files to summarize the jobs there.
Please refer to the GitHub page for more information: https://github.com/troycomi/reportseff
For more, go to our rates website and be sure to check out our blog for the most up-to-date information.
URGENT: Wendian Critical Request & Reminders
Hello Wendian users,
Critical request: We have reached 88% capacity on the Wendian scratch partition. Please remove all unnecessary files on Wendian. If the filesystem reaches 95% capacity, we will be purging data >180 days old, per policy.
Announcements and Reminders
Implementation of Monthly Billing for HPC
This is an email reminder that Saturday, April 1st at 12:00 AM will mark the end of preemption and the beginning of the monthly billing cycle. If you need more information on the charge model, please see: https://ciarc.mines.edu/hpc-business-model.
Quality of Service (QoS) Changes
Quality of Service (QoS) is a parameter used in Slurm to alter priority access to the job scheduler. Historically, Wendian had two main QoS options: full and normal. The full QoS allowed jobs to be submitted to the entire available pool of CPU nodes, while the normal QoS used a smaller pool without preemption. Moving forward, please use the normal QoS. The full QoS will remain through the month of April and behave identically to normal; at the end of April it will be removed, so direct your scripts to the normal QoS now to avoid job errors in May. To do this, please add the following to your Slurm scripts:
#SBATCH -q normal
Pricing on HPC
Below is the current table of rates for the new charge model. Though we are charging the same price for low- and high-memory compute nodes, we will be monitoring usage and reaching out to users who are inefficient with their memory consumption. Your jobs will be routed to the appropriate node based on your memory request, so please request only the memory your job requires (for example, #SBATCH --mem=8G for a job that needs 8 GB).
These values will be kept up to date on the CIARC website: https://ciarc.mines.edu/hpc-storage-rates
Node Type   | Rate per hour [USD] | CPU core | Memory per CPU core [GB] | GPU
CPU         | $0.02               | 1        | 5 or 10*                 | NA
GPU enabled | $0.12**             | 6        | 48                       | 1xV100
*There are two types of CPU nodes on Wendian: (1) a “low” memory node with 192 GB and (2) a “high” memory node with 384 GB. Jobs will be routed to one of these node types depending on the resources requested.
**For GPU jobs, each V100 node has 4 GPU cards. For each GPU card you request, you are automatically charged for 6 CPU cores and 48 GB of memory, since this is 1/4 of the available compute resources on the GPU node. As a rough example, a job using one GPU card for 10 hours would be billed about 10 × $0.12 = $1.20 under this model.
Consultations for improved compute efficiency
We understand that a charge model means that individuals will want to run jobs as efficiently as possible. If you would like to reach out for a consultation on how best to utilize HPC resources, please use the following Help Center ticket request:
https://helpcenter.mines.edu/TDClient/1946/Portal/Requests/ServiceDet?ID=30287
Wendian Scratch Filesystem
/scratch is a short-term shared filesystem for storing data currently needed by active research projects; it is subject to purge on a six-month (180-day) cycle. There is no limit (within reason) on the amount of data stored. Wendian’s scratch is currently at 88% of its 1 petabyte (1,000 TB) capacity. Once the filesystem reaches the critical threshold of 95% capacity, access to Wendian will have to cease until the issue is resolved. This policy will remain in place: https://wpfiles.mines.edu/wp-content/uploads/ciarc/docs/pages/policies.html
Classroom Use on Wendian
As a reminder, Wendian HPC usage in classes will not be affected by the new model, and ITS will request an annual budget to cover classroom costs. If you are interested in using HPC for your class, you can submit a request here: https://helpcenter.mines.edu/TDClient/1946/Portal/Requests/ServiceDet?ID=38002
If you have further questions, please submit a ticket here.
Best,
HPC@Mines Staff
Modified HPC Office Hours for Week of 7/25
Office Hours for HPC users on Wendian and Mio will be modified for the week of July 25. Computational scientist Nicholas Danes will open his virtual doors via Zoom at the following times for this week only:
- Wednesday, July 27, 12-1 pm
- Thursday, July 28, 10-11 am
Please join Nicholas on Zoom with the following link:
Nicholas Danes (he/him) – Mines is inviting you to a scheduled Zoom meeting.
Topic: HPC Office Hours
Time: This is a recurring meeting. Meet anytime.
Or iPhone one-tap: 13462487799,4179773375# or 16699006833,4179773375#
Or Telephone:
Dial: +1 346 248 7799 (US Toll) or +1 669 900 6833 (US Toll)
Meeting ID: 417 977 3375
International numbers available: https://mines.zoom.us/u/aolEkoRay
Or an H.323/SIP room system:
H.323: 162.255.37.11 (US West) or 162.255.36.11 (US East)
Meeting ID: 417 977 3375
Note: The initial few weeks of office hours are considered a test phase to gauge community interest; if deemed successful, we will post our regular HPC office hours on our website at https://ciarc.mines.edu/.
If these office hours are not convenient for you, please reach out to ndanes@mines.edu directly to schedule a separate time for hands-on help with HPC resources.
Thank you!
Regards,
HPC@Mines
HTCondor Week 2022 – Virtual Registration Closes May 23!
HTCondor is an open-source high-throughput computing (HTC) software suite designed for automating large batch workloads and managing other compute resources. HTCondor Week is a workshop series that provides in-depth tutorials and talks about HTCondor, HTC, and how they are used.
You can learn more about the Workshop series here: https://agenda.hep.wisc.edu/event/1733/
Registration is required and closes May 23. There is an in-person component, but virtual attendance via Zoom is also available.
HPC Virtual Office Hours
We’re exploring ways to expand facilitation for HPC users on Wendian and Mio, and are beginning with virtual office hours. Computational scientist Nicholas Danes will open his virtual doors via Zoom, twice per week, at the following times, beginning May 04, 2022:
- Wednesdays, 12-1 pm
- Thursdays, 12-1 pm
Please join Nicholas on Zoom with the following link:
Nicholas Danes (he/him) – Mines is inviting you to a scheduled Zoom meeting.
Topic: HPC Office Hours
Time: This is a recurring meeting. Meet anytime.
Or iPhone one-tap: 13462487799,4179773375# or 16699006833,4179773375#
Or Telephone:
Dial: +1 346 248 7799 (US Toll) or +1 669 900 6833 (US Toll)
Meeting ID: 417 977 3375
International numbers available: https://mines.zoom.us/u/aolEkoRay
Or an H.323/SIP room system:
H.323: 162.255.37.11 (US West) or 162.255.36.11 (US East)
Meeting ID: 417 977 3375
Note: The initial few weeks of office hours are considered a test phase to gauge community interest; if deemed successful, we will post our regular HPC office hours on our website at https://ciarc.mines.edu/.
If these office hours are not convenient for you, please reach out to ndanes@mines.edu directly to schedule a separate time for hands-on help with HPC resources.
Thank you!
Regards,
HPC@Mines
PEARC22 – Early Registration Ends May 10, 2022!
The Association for Computing Machinery’s (ACM) annual conference, Practice and Experience in Advanced Research Computing 2022 (PEARC22), is coming July 10-14, 2022, in person in Boston, MA. Although submission deadlines have passed, it is still a great opportunity for researchers and students to get involved in one of the largest research computing conferences in the United States. Visit the PEARC22 homepage for more information on registration.