Subscribe/Unsubscribe

To receive the monthly RC newsletter via email:

  • Go here
  • Click “RIT Login”
  • Log in with your RIT credentials
  • Click “Subscribe”

To stop receiving the monthly RC newsletter via email:

  • Go here
  • Click “RIT Login”
  • Log in with your RIT credentials
  • Click “Unsubscribe”

Newsletter Archive

November 2025

Greetings, RIT Research Community!

Welcome to the ninth edition of the RIT Research Computing Newsletter! Each month, we share system updates, new features, upcoming events, and tips to help you make the most of our resources. Our goal is to keep you informed, connected, and empowered as RIT’s research computing ecosystem continues to evolve.

AT A GLANCE

This month’s Newsletter includes important and exciting updates:

  • New Acceptable Use Policy – outlining proper and responsible use of RC-provided computing resources
  • Get to Know Us – this month’s spotlight is Paul Mezzanini
  • New Hardware for Testing – we have added new test nodes for improved performance
  • System Maintenance Updates – summary of recent maintenance activities
  • Upcoming Training – check out and sign up for upcoming training sessions
  • Tips & Tricks – helpful tips to make the most of RC resources

ACCEPTABLE USE POLICY

We have implemented a Research Computing Acceptable Use Policy (AUP) to provide further clarity to the RIT Research Computing Community. This policy outlines the proper and responsible use of RC-provided computing resources. It’s designed to help all users understand their rights and responsibilities when using these shared systems.

Read the full policy here: policy.rc.rit.edu (Sign-In Required)

Who does this policy apply to?

All RIT faculty, staff, students, and affiliates who access RC resources, including:

  • High-Performance Computing Cluster (sporcsubmit.rc.rit.edu)
    • Including all individual cluster nodes
  • OnDemand Web Portal (ondemand.rc.rit.edu)
  • GitLab (git.rc.rit.edu)
  • REDCap (redcap.rc.rit.edu)
  • Mirrors (mirrors.rit.edu)
  • Virtual Machines provisioned by RC
  • Research File Shares (\\ad.rit.edu\arc\research)
  • Documentation (docs.rc.rit.edu)
  • Publications (publications.rc.rit.edu)
  • Any system or website with a *.rc.rit.edu address

What does this mean for you?

If you continue using any RC systems on or after Dec. 1, 2025, you are agreeing to follow the new AUP. When you submit an RC Project Questionnaire, you must acknowledge that you have read and will follow the new AUP.

If you have any questions, please contact Matangi Buch, Executive Director of Research Computing.

GET TO KNOW US! PAUL MEZZANINI

Paul was the first member of the Research Computing Team, starting in 2007! In his current role as a Platform Engineer, he maintains and makes improvements to RC’s Ceph Filesystem, the backbone of all RC systems. He also maintains mirrors.rit.edu, a mirror of popular open source software for the RIT Community.

Before joining RC, Paul received a B.S. in Networking and Systems Administration from RIT in 2001. He previously worked as a Linux Systems Administrator in the Computer Engineering Department.

Outside of work, Paul is an avid maker, tinkering with 3D printers and power tools to bring his ideas to life. He also enjoys gardening (ask him about his ghost peppers), cooking, and camping with his family.

NEW HARDWARE FOR TESTING

We’ve added next-generation NVIDIA Grace/Grace (GG) and Grace/Hopper (GH) nodes available via the grace partition. These new nodes promise higher efficiency and performance for data-intensive workloads.

Try Them Out: These nodes have newer GPUs (H100s, GH200s). If you would like your Spack environment rebuilt to run on these new nodes, please submit a request.

Share Your Feedback: We would love to hear your feedback on using this new hardware! Please share your feedback here.
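To try the new nodes, a batch job can target them through the grace partition named above. Here is a minimal sketch; the GPU count and time limit are example values, not recommended settings:

```shell
#!/bin/bash
# Example job script targeting the new Grace nodes -- resource values are illustrative
#SBATCH --partition=grace    # partition name from this newsletter
#SBATCH --gres=gpu:1         # request one GPU (e.g. an H100 or GH200)
#SBATCH --time=00:30:00      # wall-clock limit

nvidia-smi                   # report which GPU the job landed on
```

Because the Grace nodes use ARM (aarch64) CPUs rather than x86, software built for the existing nodes may not run there, which is why we offer to rebuild your Spack environment on request.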

MAINTENANCE HIGHLIGHTS

Recent maintenance windows included:

  • Routine system-level software updates on cluster nodes
  • New login welcome message (on sporcsubmit) showing real-time storage usage and recent job statistics
  • Removal of unused and obsolete Spack software
  • Testing of shared GPU access for OnDemand interactive jobs
  • Return of our 4th H100 GPU to service

Coming Soon: Shared GPU access will expand to additional interactive apps in OnDemand and to interactive jobs. This means exclusive GPU access for the duration of an interactive job will be phased out; batch jobs will continue to have exclusive GPU access.

UPCOMING TRAINING AND MAINTENANCE

You can see all upcoming events on our Events Calendar.

  • Thursday, Dec. 4, 1:00-2:00 p.m.: Cluster Training Session. Sign Up & Details
  • Thursday, Dec. 18: RC Maintenance Day. Details

TIPS & TRICKS

  • Looking for software? You have options! In addition to software support from the RC team (with Spack), you can install your own software (e.g. conda, pip, containers). Check out our Software Tutorial to learn how!
  • Wondering when your job will start? Run squeue --me --start to see the worst-case start time (assuming no jobs finish early).
  • Are your batch jobs running efficiently? Visualize CPU, RAM, and GPU utilization metrics for your jobs with Grafana!
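The Grafana tip above is easier to act on when your job script states its resource requests explicitly, so you can compare what you asked for against what you used. A minimal sketch follows; the job name, resource values, and script path are placeholders, not RC-specific settings:

```shell
#!/bin/bash
# Minimal batch script sketch -- job name, resources, and script path are examples
#SBATCH --job-name=demo
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4      # CPU request to compare against CPU utilization in Grafana
#SBATCH --mem=8g               # memory request to compare against RAM utilization
#SBATCH --time=01:00:00        # wall-clock limit
#SBATCH --mail-type=END,FAIL   # notify when the job ends or fails

srun python my_script.py       # placeholder workload
```

Submit with sbatch, then check the job’s panels in Grafana after it finishes to see whether the requested CPUs and memory were actually used.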

HELP RESEARCH COMPUTING GROW: CITE US!

If your research uses RC resources (HPC Cluster, software environments, Ceph storage, fileshares, virtual machines, etc.), please cite Research Computing in your publications.

Your citations help us secure funding and improve our services, and your publications will be showcased on our Publications Site!

Forgot to cite us? No worries! Fill out this quick form to update us on your publication.


Thank you for being part of the RIT Research Computing community!

We value your feedback, so please share your ideas and suggestions here.

Log in to Subscribe or Unsubscribe from this Newsletter.

Happy Computing!

– The RIT Research Computing Team

October 2025

Greetings, RIT Research Community!

Welcome to the eighth edition of the RIT Research Computing Newsletter! Each month, we share system updates, new features, upcoming events, and tips to help you make the most of our resources. Our goal is to keep you informed, connected, and empowered as RIT’s research computing ecosystem continues to evolve.

AT A GLANCE

This month’s Newsletter includes important and exciting updates:

  • ColdFront Launch – a new self-service RC Project management tool for Principal Investigators
  • Revamped Service Request Forms – streamlined RC Project Questionnaire and approval workflow for improved user experience
  • New Hardware for Testing – we have added new test nodes for improved performance
  • System Maintenance Updates – summary of recent maintenance activities
  • Upcoming Training – check out and sign up for upcoming training sessions
  • Tips & Tricks – helpful tips to make the most of RC resources

INTRODUCING COLDFRONT!

We are thrilled to announce the launch of ColdFront, an open-source portal for managing RIT Research Computing projects. ColdFront gives Principal Investigators (PIs) increased visibility into and control over their RC Projects, streamlining the process for managing compute and storage allocations.

Visit ColdFront: https://coldfront.rc.rit.edu

PIs can now use ColdFront to:

  • Update project details, such as descriptions, grants, and publications
  • Add or remove collaborators on your projects
  • Request new (or modifications to) compute and storage allocations

Annual Project Review: When you log in to ColdFront, you may see “Needs Review” or “Project Review Pending”. Once a year, all PIs must complete the following review process:

  • Verify and update project details for accuracy
  • Add relevant grants and publications
  • Verify and update collaborators on your project

ColdFront Resources:

Since ColdFront is a newly deployed technology at RIT, we appreciate your patience as we refine the platform based on your feedback. We’re excited to provide ColdFront to our research community!

UPDATED RC REQUEST FORMS

We’ve improved our RC Project Questionnaire and related request forms for a smoother experience. Click here to see the updated Research Computing Project Request (Questionnaire).

Key Changes:

NEW HARDWARE FOR TESTING

We’ve added next-generation NVIDIA Grace/Grace (GG) and Grace/Hopper (GH) nodes available via the grace partition. These new nodes promise higher efficiency and performance for data-intensive workloads.

Try Them Out: These nodes have newer GPUs (H100s, GH200s). If you would like your Spack environment rebuilt to run on these new nodes, please submit a request.

Share Your Feedback: We would love to hear your feedback on using this new hardware! Please share your feedback here.

MAINTENANCE HIGHLIGHTS

Recent maintenance windows included:

  • Routine system-level software updates on cluster nodes
  • New login welcome message (on sporcsubmit) showing real-time storage usage and recent job statistics
  • Removal of unused and obsolete Spack software
  • Testing of shared GPU access for OnDemand interactive jobs
  • Return of our 4th H100 GPU to service

Coming Soon: Shared GPU access will expand to additional interactive apps in OnDemand and to interactive jobs. This means exclusive GPU access for the duration of an interactive job will be phased out; batch jobs will continue to have exclusive GPU access.

UPCOMING TRAINING AND MAINTENANCE

You can see all upcoming events on our Events Calendar.

TIPS & TRICKS

  • Slack Notifications for Jobs – Get real-time updates from Slurm for your batch jobs! Learn how in Part 1 of our Slurm Tutorial. These notifications will include details about your resource utilization to help you tailor your resource selection.
  • Run Applications in Your Browser – You can use OnDemand to launch Matlab, Jupyter Notebooks, VSCode, and more.
  • Are your batch jobs running efficiently? Visualize CPU, RAM, and GPU utilization metrics for your jobs with Grafana!

HELP RESEARCH COMPUTING GROW: CITE US!

If your research uses RC resources (HPC Cluster, software environments, Ceph storage, fileshares, virtual machines, etc.), please cite Research Computing in your publications.

Your citations help us secure funding and improve our services, and your publications will be showcased on our Publications Site!

Forgot to cite us? No worries! Fill out this quick form to update us on your publication.


Thank you for being part of the RIT Research Computing community!

We value your feedback, so please share your ideas and suggestions here.

Log in to Subscribe or Unsubscribe from this Newsletter.

Happy Computing!

– The RIT Research Computing Team

September 2025

Greetings, RIT Research Community!

Welcome to the seventh edition of the RIT Research Computing Newsletter! Each month, we’ll bring you updates on system improvements, new features, upcoming events, and tips to help you make the most of our resources. Our goal is to keep you informed and engaged as we continue enhancing the research computing experience at RIT.

News

Research Computing Survey:

  • Here at Research Computing, we take pride in supporting the computational needs of RIT researchers in their quest to discover. Your voice directly shapes the future of Research Computing at RIT. By completing this short (15-minute) survey, you’ll help us:
    • Improve the tools, compute, and services you rely on today.
    • Identify and prioritize new services that would best support your research.
    • Reduce barriers you may face in accessing computational resources.
    • Ensure that your needs are represented in future investments and planning.
  • We greatly appreciate your time! The survey will remain open through September 30, 2025. You may respond anonymously, or include your name if you’d like us to follow up. After the survey closes, we’ll share back the results and outline how we plan to act on your input.
  • Click here to take the survey.
  • Thank you for helping us make RIT Research Computing work better for you!

Summary of Recent Maintenance Windows:

  • Routine updates to system-level software on cluster nodes.
  • New login welcome message on sporcsubmit showing storage usage and availability (updated nightly) along with recent job stats.
  • Uninstalled unused software that was built for RHEL7.
  • We are slowly rolling out shared GPU access for interactive jobs. Historically, researchers who requested GPU resources for an interactive job had exclusive access to that GPU for the duration of the job. In this maintenance window, we enabled optional shared GPU access for Desktop Sessions in our Web Portal (OnDemand). As we test this feature, the option will gradually be made available for other interactive apps in OnDemand and for interactive jobs. Eventually, exclusive GPU access for interactive jobs will be phased out; batch jobs will continue to have exclusive access to GPU resources.
  • Our 4th H100 GPU is back in service after being replaced by the vendor due to a fault.

New Hardware for Testing:

  • NVIDIA Grace/Grace and Grace/Hopper nodes are available. These newer machines promise greater efficiency and performance. If you would like your Spack environment rebuilt to run on these new nodes, please submit a request.
  • We would love to hear your feedback! Please fill out this form to share your experience using this new hardware.

Reminders

Upcoming Events (Calendar):

Tips & Tricks:

  • Did you know? The Research Computing Team is happy to provide cluster training for your research lab, or an overview of Research Computing in your graduate seminar classes! Faculty members – Please submit a ticket if you’re interested.
  • Wondering when your job will start? Run squeue --me --start to see the worst-case start time (assuming no jobs finish early).
  • Are your batch jobs running efficiently? You can view CPU, RAM, and GPU utilization graphs for your jobs with Grafana!

Help Research Computing Grow: Cite Us! (Publications):

  • Are you publishing research using our HPC cluster, software environments, Ceph storage, file shares, virtual machines, or other RC services? Your citations help us secure funding and improve our services.
  • Forgot to cite us? No worries! Fill out this quick form to update us on your publication.

Quick Links:


Thanks for being part of the RIT Research Computing community! Stay tuned for next month’s updates, and let us know what you’d like to see in future editions.

  • Have suggestions or feedback? Fill out this form – we’d love to hear from you!
  • Log in to Subscribe or Unsubscribe from this Newsletter.

Happy Computing!

– The RIT Research Computing Team

August 2025

Greetings, RIT Research Community!

Welcome to the sixth edition of the RIT Research Computing Newsletter! Each month, we’ll bring you updates on system improvements, new features, upcoming events, and tips to help you make the most of our resources. Our goal is to keep you informed and engaged as we continue enhancing the research computing experience at RIT.

News

Research Computing Survey:

  • Here at Research Computing, we take pride in supporting the computational needs of RIT researchers in their quest to discover. Your voice directly shapes the future of Research Computing at RIT. By completing this short (15-minute) survey, you’ll help us:
    • Improve the tools, compute, and services you rely on today.
    • Identify and prioritize new services that would best support your research.
    • Reduce barriers you may face in accessing computational resources.
    • Ensure that your needs are represented in future investments and planning.
  • We greatly appreciate your time! The survey will remain open through September 30, 2025. You may respond anonymously, or include your name if you’d like us to follow up. After the survey closes, we’ll share back the results and outline how we plan to act on your input.
  • Click here to take the survey.
  • Thank you for helping us make RIT Research Computing work better for you!

Upcoming GitLab Maintenance:

  • On Aug. 21 between 9:00 a.m. and 12:00 p.m., RC GitLab (git.rc.rit.edu) will have a one-hour outage for upgrades.
  • During this outage, the following services will not be available:

Get to Know Us! Sid Pendelberry:

  • Sid joined Research Computing in 2017 as a Facilitator. He loves working one-on-one with researchers to help them get started using the Cluster and optimize their workflows.
  • Prior to joining RC, Sid worked on special projects for ITS, including standing up Active Directory and Exchange in 2002, and the NYSERDA Green Data Center in 2010. Sid has also been an adjunct professor for GCCIS and KGCOE.
  • At RIT, Sid received an M.Eng. in Systems Engineering in 1999, and an M.S. in Sustainability Systems in 2017. He is currently working towards an M.S. in Industrial Engineering.
  • Outside of work, Sid is an avid cyclist, a volunteer EMT, and he plays the upright bass in a Folk/Americana band called Still One Left.

Summary of Recent Maintenance Windows:

  • Routine updates to system-level software on cluster nodes.
  • New login welcome message on sporcsubmit showing storage usage and availability (updated nightly) along with recent job stats.
  • Uninstalled unused software that was built for RHEL7.
  • We are slowly rolling out shared GPU access for interactive jobs. Historically, researchers who requested GPU resources for an interactive job had exclusive access to that GPU for the duration of the job. In this maintenance window, we enabled optional shared GPU access for Desktop Sessions in our Web Portal (OnDemand). As we test this feature, the option will gradually be made available for other interactive apps in OnDemand and for interactive jobs. Eventually, exclusive GPU access for interactive jobs will be phased out; batch jobs will continue to have exclusive access to GPU resources.
  • Our 4th H100 GPU is back in service after being replaced by the vendor due to a fault.

New Hardware for Testing:

  • NVIDIA Grace/Grace and Grace/Hopper nodes are available. These newer machines promise greater efficiency and performance. If you would like your Spack environment rebuilt to run on these new nodes, please submit a request.
  • We would love to hear your feedback! Please fill out this form to share your experience using this new hardware.

Reminders

Upcoming Events (Calendar):

  • Thursday, Aug. 28, 1:00-2:00 p.m.: Cluster Training Session. Sign Up & Details.
  • Wednesday, Sep. 3, 10:00-11:00 a.m.: Cluster Training Session. Sign Up & Details.
  • Tuesday, Sep. 9: RC Maintenance Day. Details.

Tips & Tricks:

  • Did you know? You can run the time-until-maintenance command to see a list of upcoming maintenance windows.
  • Looking for software? You have options! In addition to software support from the RC team (with Spack), you can install your own software (e.g. conda, pip, containers). Check out our Software Tutorial to learn how!
  • Are your batch jobs running efficiently? You can view CPU, RAM, and GPU utilization graphs for your jobs with Grafana!

Help Research Computing Grow: Cite Us! (Publications):

  • Are you publishing research using our HPC cluster, software environments, Ceph storage, file shares, virtual machines, or other RC services? Your citations help us secure funding and improve our services.
  • Forgot to cite us? No worries! Fill out this quick form to update us on your publication.

Quick Links:


Thanks for being part of the RIT Research Computing community! Stay tuned for next month’s updates, and let us know what you’d like to see in future editions.

  • Have suggestions or feedback? Fill out this form – we’d love to hear from you!
  • Log in to Subscribe or Unsubscribe from this Newsletter.

Happy Computing!

– The RIT Research Computing Team

July 2025

Greetings, RIT Research Community!

Welcome to the fifth edition of the RIT Research Computing Newsletter! Each month, we’ll bring you updates on system improvements, new features, upcoming events, and tips to help you make the most of our resources. Our goal is to keep you informed and engaged as we continue enhancing the research computing experience at RIT.

News

Upcoming Networking Maintenance:

  • On Jul. 15 at 5 a.m. and Jul. 17 at 5 a.m., the ITS Networking Team will be performing maintenance on routers in the Institute Hall Data Center, where RC infrastructure is hosted.
  • For approximately 30 minutes on Jul. 15 and Jul. 17, the following RC Services may be inaccessible:
    • Cluster (sporcsubmit.rc.rit.edu, ondemand.rc.rit.edu)
    • Mirrors (mirrors.rit.edu)
    • Virtual machines hosted by RC
    • NFS/SMB connections to file shares hosted by RC

Summary of Recent Maintenance Windows:

  • Routine updates to system-level software on cluster nodes.
  • New login welcome message on sporcsubmit showing storage usage and availability (updated nightly) along with recent job stats.
  • Uninstalled unused software that was built for RHEL7.
  • We are slowly rolling out shared GPU access for interactive jobs. Historically, researchers who requested GPU resources for an interactive job had exclusive access to that GPU for the duration of the job. In this maintenance window, we enabled optional shared GPU access for Desktop Sessions in our Web Portal (OnDemand). As we test this feature, the option will gradually be made available for other interactive apps in OnDemand and for interactive jobs. Eventually, exclusive GPU access for interactive jobs will be phased out; batch jobs will continue to have exclusive access to GPU resources.
  • Our 4th H100 GPU is back in service after being replaced by the vendor due to a fault.

New Hardware for Testing:

  • NVIDIA Grace/Grace and Grace/Hopper nodes are available. These newer machines promise greater efficiency and performance. If you would like your Spack environment rebuilt to run on these new nodes, please submit a request.
  • We would love to hear your feedback! Please fill out this form to share your experience using this new hardware.

Reminders

Upcoming Events (Calendar):

  • Wednesday, Jul. 23, 10:00-11:00 a.m.: Cluster Training Session. Sign Up & Details.
  • Wednesday, Aug. 6, 10:00-11:00 a.m.: Cluster Training Session. Sign Up & Details.
  • Tuesday, Aug. 19: RC Maintenance Day. Details.

Tips & Tricks:

  • Wondering when your job will start? Run squeue --me --start to see the worst-case start time (assuming no jobs finish early).
  • Did you know? You can run Matlab, Jupyter Notebooks, VSCode, and more directly in your web browser from our web portal, OnDemand.
  • Are your batch jobs running efficiently? You can view CPU, RAM, and GPU utilization graphs for your jobs with Grafana!

Help Research Computing Grow: Cite Us! (Publications):

  • Are you publishing research using our HPC cluster, software environments, Ceph storage, file shares, virtual machines, or other RC services? Your citations help us secure funding and improve our services.
  • Forgot to cite us? No worries! Fill out this quick form to update us on your publication.

Quick Links:


Thanks for being part of the RIT Research Computing community! Stay tuned for next month’s updates, and let us know what you’d like to see in future editions.

  • Have suggestions or feedback? Fill out this form – we’d love to hear from you!
  • Log in to Subscribe or Unsubscribe from this Newsletter.

Happy Computing!

– The RIT Research Computing Team


June 2025

Greetings, RIT Research Community!

Welcome to the fourth edition of the RIT Research Computing Newsletter! Each month, we’ll bring you updates on system improvements, new features, upcoming events, and tips to help you make the most of our resources. Our goal is to keep you informed and engaged as we continue enhancing the research computing experience at RIT.

News

Get to Know Us! Emilio Del Plato:

  • Emilio joined Research Computing in 2014 as a Platform Engineer. He maintains and makes improvements to much of RC’s researcher-facing infrastructure, including RC’s Web Portal (ondemand.rc.rit.edu), Grafana (graphs.rc.rit.edu), and virtual machines.
  • While working in RC, Emilio received a B.S. from the School of Individualized Study in 2018. Prior to joining RC, Emilio worked as a Systems Administrator in the Computer Engineering Department.
  • Outside of work, Emilio loves experimenting in the kitchen (he doesn’t know how to cook for less than 20 people), raising chickens, tinkering with old electronics, and exploring creative engineering projects (especially related to space travel).

Summary of Recent Maintenance Windows:

  • Routine updates to system-level software on cluster nodes.
  • New login welcome message on sporcsubmit showing storage usage and availability (updated nightly) along with recent job stats.
  • Uninstalled unused software that was built for RHEL7.
  • We are slowly rolling out shared GPU access for interactive jobs. Historically, researchers who requested GPU resources for an interactive job had exclusive access to that GPU for the duration of the job. In this maintenance window, we enabled optional shared GPU access for Desktop Sessions in our Web Portal (OnDemand). As we test this feature, the option will gradually be made available for other interactive apps in OnDemand and for interactive jobs. Eventually, exclusive GPU access for interactive jobs will be phased out; batch jobs will continue to have exclusive access to GPU resources.
  • Our 4th H100 GPU is back in service after being replaced by the vendor due to a fault.

New Hardware for Testing:

  • NVIDIA Grace/Grace and Grace/Hopper nodes are available. These newer machines promise greater efficiency and performance. If you would like your Spack environment rebuilt to run on these new nodes, please submit a request.
  • We would love to hear your feedback! Please fill out this form to share your experience using this new hardware.

Reminders

Upcoming Events (Calendar):

  • Wednesday, Jun. 25, 10:00-11:00 a.m.: Cluster Training Session. Sign Up & Details.
  • Tuesday, Jul. 8: RC Maintenance Day. Details.

Tips & Tricks:

  • You can receive Slack notifications for your batch jobs! You can read about that in Part 1 of our Slurm Tutorial. These notifications will include details about your resource utilization to help you tailor your resource selection.
  • Looking for software? You have options! In addition to software support from the RC team (with Spack), you can install your own software (e.g. conda, pip, containers). Check out our Software Tutorial to learn how!
  • Are your batch jobs running efficiently? You can view CPU, RAM, and GPU utilization graphs for your jobs with Grafana!

Help Research Computing Grow: Cite Us! (Publications):

  • Are you publishing research using our HPC cluster, software environments, Ceph storage, file shares, virtual machines, or other RC services? Your citations help us secure funding and improve our services.
  • Forgot to cite us? No worries! Fill out this quick form to update us on your publication.

Quick Links:


Thanks for being part of the RIT Research Computing community! Stay tuned for next month’s updates, and let us know what you’d like to see in future editions.

  • Have suggestions or feedback? Fill out this form – we’d love to hear from you!
  • Log in to Subscribe or Unsubscribe from this Newsletter.

Happy Computing!

– The RIT Research Computing Team


May 2025

Greetings, RIT Research Community!

Welcome to the third edition of the RIT Research Computing Newsletter! Each month, we’ll bring you updates on system improvements, new features, upcoming events, and tips to help you make the most of our resources. Our goal is to keep you informed and engaged as we continue enhancing the research computing experience at RIT.

News

Office Hours Change

  • We are changing our office hours process to better meet the needs of the RIT Research Computing Community and ensure you can meet with the RC team when you are available.
  • Starting Tuesday, May 20, you can now book one-on-one time with the Research Computing team on Wednesdays, Thursdays, and Fridays—at a time that works best for you.
  • Whether you have questions about your workflows, need troubleshooting help, or want to discuss your research goals, we’re here to help. Book an appointment through our easy-to-use Bookings Page.

Get to Know Us! Ben Meyers:

  • Ben, one of two RC Facilitators at RIT, joined the Research Computing Team in Jul. 2022 and quickly became the go-to person for tackling support requests, improving our documentation and training materials, and helping researchers set up and optimize their Slurm workflows.
  • Prior to joining RC, Ben wore many hats at RIT. As a student employee, he was a course assistant for Language Science and Software Engineering courses, a research assistant studying linguistic characteristics of security conversations, and a student systems administrator in KGCOE. As an adjunct faculty member, he taught SWEN-331: Engineering Secure Software.
  • Ben received his B.S. in Software Engineering from RIT in 2018, and his Ph.D. in Computing and Information Sciences from RIT in 2023. If you are curious about human error in software engineering, you can read his dissertation here.
  • Outside of work, Ben loves reading non-fiction and Tolkien, building Lego sets, going on hikes (check out Abraham Lincoln Park), and experimenting in the kitchen (ask him about rumbledethumps).

Summary of Recent Maintenance Windows:

  • Upgraded the Linux kernel from v6.1 to v6.6 on cluster nodes.
  • Routine updates to system-level software on cluster nodes.
  • New login welcome message on sporcsubmit showing storage usage and availability (updated nightly) along with recent job stats.
  • Uninstalled unused software that was built for RHEL7.
  • We are slowly rolling out shared GPU access for interactive jobs. Historically, researchers who requested GPU resources for an interactive job had exclusive access to that GPU for the duration of the job. In this maintenance window, we enabled optional shared GPU access for Desktop Sessions in our Web Portal (OnDemand). As we test this feature, the option will gradually be made available for other interactive apps in OnDemand and for interactive jobs. Eventually, exclusive GPU access for interactive jobs will be phased out; batch jobs will continue to have exclusive access to GPU resources.

New Hardware for Testing:

  • NVIDIA Grace/Grace and Grace/Hopper nodes are available. These newer machines promise greater efficiency and performance. If you would like your Spack environment rebuilt to run on these new nodes, please submit a request.
  • We would love to hear your feedback! Please fill out this form to share your experience using this new hardware.

Reminders

Upcoming Events (Calendar):

Tips & Tricks:

  • Wondering when your job will start? Run squeue --me --start to see the worst-case start time (assuming no jobs finish early).
  • Did you know? You can upload, download, and edit files directly in your web browser from our web portal, OnDemand.
  • Are your batch jobs running efficiently? You can view CPU, RAM, and GPU utilization graphs for your jobs with Grafana!

Help Research Computing Grow: Cite Us! (Publications):

  • Are you publishing research using our HPC cluster, software environments, Ceph storage, file shares, virtual machines, or other RC services? Your citations help us secure funding and improve our services.
  • Forgot to cite us? No worries! Fill out this quick form to update us on your publication.

Quick Links:


Thanks for being part of the RIT Research Computing community! Stay tuned for next month’s updates, and let us know what you’d like to see in future editions.

  • Have suggestions or feedback? Fill out this form – we’d love to hear from you!
  • Log in to Subscribe or Unsubscribe from this Newsletter.

Happy Computing!

– The RIT Research Computing Team


April 2025

Greetings, RIT Research Community!

Welcome to the second edition of the RIT Research Computing Newsletter! Each month, we’ll bring you updates on system improvements, new features, upcoming events, and tips to help you make the most of our resources. Our goal is to keep you informed and engaged as we continue enhancing the research computing experience at RIT.

News

REDCap Users Meeting:

  • Do you use or plan to use REDCap (redcap.rit.edu) to collect survey responses for your research? Do you have questions to ask or tips to share about REDCap?
  • If you do, we’re hosting a REDCap Users Meeting on May 12 from 1:00-2:30 p.m. This will be an open forum for sharing tips, asking and answering questions, and presenting how you use REDCap. Please sign up if you plan to attend.

Summary of Recent Maintenance Windows:

  • Upgraded the Linux kernel from v6.1 -> v6.6 on cluster nodes.
  • Resolved storage issues that were causing some researchers’ home directories to become temporarily unavailable.
  • Routine updates to system-level software on cluster nodes.
  • Added a new login welcome message, shown when you log into sporcsubmit, with storage usage and availability (updated nightly) plus recent job stats.
  • Uninstalled unused software that was built for RHEL7.

New Hardware for Testing:

  • NVIDIA Grace/Grace and Grace/Hopper nodes are available. These newer machines promise greater efficiency and performance. If you would like your Spack environment rebuilt to run on these new nodes, please submit a request.
  • We would love to hear your feedback! Please fill out this form to share your experience using this new hardware.

Reminders

Upcoming Events (Calendar):

  • Thursday, Apr. 17, 3:30-4:30 p.m.: Cluster Training Session. Sign Up & Details.
  • Tuesday, Apr. 29, 10:00-11:00 a.m.: Cluster Training Session. Sign Up & Details.
  • Tuesday, May 13: RC Maintenance Day. Details.

Tips & Tricks:

  • Looking for software? You have options! In addition to software support from the RC team (with Spack), you can install your own software (e.g. conda, pip, containers). Check out our Software Tutorial to learn how!
  • Did you know? You can run Matlab, Jupyter Notebooks, VSCode, and more directly in your web browser from our web portal, OnDemand.
  • Are your batch jobs running efficiently? You can view CPU, RAM, and GPU utilization graphs for your jobs with Grafana!
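
As a minimal sketch of the self-install option above, here is what a personal Python environment might look like with venv and pip. The path is illustrative; see the Software Tutorial for cluster-specific guidance, including conda and containers:

```shell
# Create a personal Python environment in your home directory (path is illustrative).
python3 -m venv "$HOME/envs/myproject"

# Activate it for this shell session; after this, `pip install <pkg>` installs
# into the environment rather than system-wide.
. "$HOME/envs/myproject/bin/activate"
python -V
deactivate
```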

Help Research Computing Grow: Cite Us! (Publications):

  • Are you publishing research using our HPC cluster, software environments, Ceph storage, file shares, virtual machines, or other RC services? Your citations help us secure funding and improve our services.
  • Forgot to cite us? No worries! Fill out this quick form to update us on your publication.

Quick Links:


Thanks for being part of the RIT Research Computing community! Stay tuned for next month’s updates, and let us know what you’d like to see in future editions.

  • Have suggestions or feedback? Fill out this form – we’d love to hear from you!
  • Log in to Subscribe or Unsubscribe from this Newsletter.

Happy Computing!

– The RIT Research Computing Team


March 2025

Greetings, RIT Research Community!

We’re thrilled to introduce the first edition of the RIT Research Computing Newsletter! Each month, we’ll bring you updates on system improvements, new features, upcoming events, and tips to help you make the most of our resources. Our goal is to keep you informed and engaged as we continue enhancing the research computing experience at RIT. 

News

Welcome, Matangi Buch!

  • We’re excited to welcome Matangi Buch as the new Executive Director of Research Computing at RIT!
  • Since joining in Oct. 2024, Matangi has been meeting with faculty to better understand research computing needs. If you haven’t had a chance to connect yet, feel free to reach out: mcbits@rit.edu

Summary of Recent Maintenance Windows:

  • Upgraded the Linux kernel from v6.1 -> v6.6 on cluster nodes.
  • Resolved storage issues that were causing some researchers’ home directories to become temporarily unavailable.
  • Routine updates to system-level software on cluster nodes.
  • Added a new login welcome message, shown when you log into sporcsubmit, with storage usage and availability (updated nightly) plus recent job stats.

Slack Channel Changes:

  • We’ve heard feedback from researchers that the current Slack channel names are confusing. On Tuesday, Mar. 18, we will be making the following changes:
    • Renaming #general to #rc-general to make it easier to find for people in multiple Slack workspaces.
    • Archiving #help to avoid confusion – #general and #help have both been used for questions in the past; going forward #rc-general will serve that purpose.
    • Creating #rc-announcements as a read-only channel for announcements from the RC team.
  • Not on Slack? No worries! Join us here.

New Hardware for Testing:

  • NVIDIA Grace/Grace and Grace/Hopper nodes are available. These newer machines promise greater efficiency and performance. If you would like your Spack environment rebuilt to run on these new nodes, please submit a request.
  • We would love to hear your feedback! Please fill out this form to share your experience using this new hardware.

Reminders

Upcoming Events (Calendar):

Tips & Tricks:

  • You can receive Slack notifications for your batch jobs! Read how in Part 1 of our Slurm Tutorial. These notifications include details about your resource utilization to help you tune future resource requests.
  • VSCode Users: By default, VSCode is constantly scanning your home directory (recursively) for changes. This can cause slow connections and other intermittent issues. Please put the following in your VSCode settings to disable this behavior:
"files.watcherExclude": {
    "**": true,
    "**/**": true
}

Help Research Computing Grow: Cite Us! (Publications):

  • Are you publishing research using our HPC cluster, software environments, Ceph storage, file shares, virtual machines, or other RC services? Your citations help us secure funding and improve our services.
  • Forgot to cite us? No worries! Fill out this quick form to update us on your publication.

Quick Links:


Thanks for being part of the RIT Research Computing community! Stay tuned for next month’s updates, and let us know what you’d like to see in future editions.

  • Have suggestions or feedback? Fill out this form – we’d love to hear from you!
  • Log in to Subscribe or Unsubscribe from this Newsletter.

Happy Computing!

– The RIT Research Computing Team



Tags: slurm