Computing resources are no longer just pieces of tech: they're scientific instruments. Moffitt Cancer Center's Jarett DeAngelis, director of research IT and scientific computing, and Shane Corder, senior HPC engineer, join guest host Jessica StLouis, senior scientific consultant at BioTeam, to discuss new approaches that are changing access to HPC resources and how platforms like Open OnDemand are simplifying the HPC experience for those unfamiliar with the system. They also share their thoughts on the future of HPC, what Moffitt Cancer Center is planning, and what they expect to see in the field in the coming years.
GUEST BIOS
Jarett DeAngelis, Director of Research IT and Scientific Computing, Moffitt Cancer Center
Jarett DeAngelis is director of research IT and scientific computing at Moffitt Cancer Center, where he leads the design, deployment, and operation of the institution’s high-performance computing infrastructure. His team supports a range of advanced computing environments, including general-purpose HPC clusters, AI and machine learning systems, and collaborative computing platforms that enable modern data-driven research. With a background in IT infrastructure and scientific computing, Jarett brings extensive experience in building and managing systems that support complex research workloads. His work spans system architecture, cluster implementation, software development, and direct collaboration with researchers to ensure computing environments are optimized for scientific discovery. Prior to joining Moffitt, Jarett was part of BioTeam, where he focused on enabling scalable computing solutions for life sciences research.
Shane Corder, Senior HPC Engineer, Moffitt Cancer Center
Shane Corder is a senior HPC engineer at Moffitt Cancer Center, where he designs, builds, and supports high-performance computing infrastructure that enables cutting-edge scientific research. With nearly 25 years of experience in HPC, Shane has worked across the full lifecycle of computing systems, from hardware assembly and system architecture to implementation and operational support across diverse scientific domains. His work focuses on creating scalable, efficient environments that help researchers accelerate discovery and reduce time to insight. Shane is passionate about building practical solutions that empower science and expand the possibilities of computational research.
TRANSCRIPT
Jessica StLouis:
Hi everybody, welcome to the Trends from the Trenches podcast. I'm your guest host, Jessica StLouis, a senior scientific consultant at BioTeam. I'm so honored and excited to have Jarett DeAngelis and Shane Corder from Moffitt Cancer Center's research IT department with me today to talk about HPC. When we do assessments across research institutions, the same challenges come up again and again: collaboration is hard, data sharing is hard, and access to high-performance computing often takes days, deep command-line skills, or both. Today we're talking to the team at Moffitt Cancer Center, located in Tampa, Florida, about how they tackled these problems head-on by redesigning how researchers access compute and data. Jarett, before we dive in, I'd love for you to briefly introduce yourself, your role at Moffitt, and what you focus on day to day.
Jarett DeAngelis:
Sure. Thanks, Jess. It's great talking with you again, and I think this is going to be a good time. My background is generally in IT infrastructure, and of course we share a background at BioTeam before my position here. I've been in this role for what will be three years in May, and my title is Director of Scientific Computing. My group is responsible for all of the HPC infrastructure at Moffitt, everything from a large 70-node general-purpose cluster, large at our scale anyway, to a four-node ML- and AI-specific machine, and now a new machine we're really excited about, which is devoted specifically to collaborative use cases. Our group manages all of that. We do everything from software development to the design and implementation of new cluster configurations, along with a lot of direct assistance to researchers with their HPC needs and everything in between.
Jessica StLouis:
Awesome. Shane, same for you. Can you introduce yourself and tell us how you fit into Moffitt's research computing environment?
Shane Corder:
Yeah, sure. Hi, Jess. There's our shared background at BioTeam, of course, but I've been in HPC for going on 25 years, basically in the trenches, doing everything from design, build, implement, and test, the whole gamut, from a box of computer parts all the way up to supporting many different domains of science with HPC. I've been at Moffitt since July of 2024, and it's been a fantastic year and a half or so. I couldn't be happier with my role here. I basically get to build and tinker and chase all the squirrels I want to chase, building really cool things for science to help reduce the time to outcome and help researchers push the boundaries. So yeah, that's me.
Jessica StLouis:
I love hearing you say reduce the time to outcome. I know that will resonate really well with our audience. Shane, do you think you could give us a quick overview of Moffitt Cancer Center for the listeners?
Shane Corder:
Sure. Moffitt was established in 1986, I believe, in Tampa, Florida. It's the state's only National Cancer Institute-designated comprehensive cancer center, which really reflects Moffitt's national leadership in research, clinical care, and education. Moffitt has more than 10,000 employees, close to 500 physicians, and five major research programs that drive cancer innovation, plus population health efforts supported by unique interdisciplinary initiatives like the Integrated Mathematical Oncology department. We do a lot of great work, we're well versed in what we do, and we're really focused on making the biggest impact we can.
Jessica StLouis:
Thanks for that overview. I want to start by asking Jarett: when we work with institutions, collaboration and data sharing always come up as pain points. Was this true at Moffitt as well?
Jarett DeAngelis:
Definitely. In the course of working at BioTeam, we saw collaboration and data sharing come up as a challenge repeatedly, and Moffitt's definitely no exception. What it looks like when you try to collaborate with someone using, say, Moffitt resources right now is that you have to do things like onboard collaborators as non-employees, as contractor-type accounts, and that can be a really long, involved, drawn-out process. And that's just if you want people to be able to come in and use HPC resources. Even if you're only trying to share data, that can get complicated too, in terms of dealing with different cloud services. We've made various attempts to use a lot of the well-known ones that folks use today, and in various ways they've sometimes come up short: is there enough network bandwidth in and out of our network to wherever that cloud service is, or from our collaborator to wherever that cloud service is? Are there problems keeping data synchronized between wherever it's being generated, at Moffitt or at the collaborator, and the cloud service, and then on to the other party? So there are all kinds of difficulties we run into. And remember, while we're dealing with all of that, we're simultaneously trying to make sure that permissions are correctly managed and that concerns around management of PHI are properly addressed. All of that comes together to create massive amounts of friction when it comes to collaborating with people outside the organization.
Jessica StLouis:
Yeah, this is definitely something that will resonate with our audience, and you're describing a problem that we hear about everywhere. At some point, you moved from talking about this to actually doing something about it at Moffitt. Can you tell me a bit about that shift?
Jarett DeAngelis:
Yeah, we're really excited about this new facility, which is just about to go live in the next few months. We call it the Collaborative Computing Center, and it was funded with a $2 million S10 grant from NIH. That mechanism has frequently been used for things like buying a new instrument, maybe a microscope: when you apply for the grant, you say, here's the purpose, here's why we want to buy this thing, here's the kind of work we want to put it to, and so on. Increasingly, those grants have also been used for computing equipment, because I think there's a realization on the part of NIH, and the federal government writ large, that computing resources very much are a scientific instrument, especially in 2026, and that's only going to increase, with many different kinds of applications, everything from bioinformatics to microscope image analysis to ML and AI. So we wrote this grant in a somewhat unique way and described it as: we want dedicated hardware and software resources for collaborating with outside organizations. It's a fairly small cluster in the grand scheme of things, 30-something nodes, not very big, but it's enough to do plenty of meaningful work with outside collaborators. The way we've set it up, it's going to have its own dedicated internet connection, so the data plane of this system is not connected to Moffitt's at all. That means a couple of things. It starts with the fact that permissions, group memberships, and so on don't have to carry over from Moffitt to the Collaborative Computing Center and vice versa, so we can be really flexible in how we administer the system. That's powerful for us because it lets us get a lot of institutional cruft out of the way. It also has its own dedicated compute network and storage: about 1.3 petabytes raw of Hammerspace high-speed data storage. One of the great things about Hammerspace is its built-in data management facilities, which are incredibly powerful: you can have rules applied to data depending on its provenance or on other things happening to it as it moves through the system. Because this is going to be a multi-tenant system, with multiple collaborators working with us at the same time, we want to make sure the folks who come in have a lot of assurance about the security of their data, access control, data lifecycle management, and other things like that. All of that together is going to create a really powerful environment. It's essentially doing on premises what a lot of organizations might seek to do in the cloud for this kind of use case.
The main difference is that our operational expenditures for a system like this are going to be significantly lower than doing the same thing in the cloud, especially if utilization is close to maximum most of the time. You can really burn through your wallet trying to do HPC in the cloud that way, especially if you have to keep storage persistent and keep nodes running to service requests. On premises, you don't have to worry about that as much. So this is like building our own cloud environment that we run for the express purpose of doing research with outside organizations. We're really excited about it.
Jessica StLouis:
It's definitely very exciting. I liked hearing you say there's a realization that computing resources are a scientific instrument. What I'm also hearing is that this wasn't about building technology for its own sake; it was about removing friction so researchers could actually get to the science faster. Would you agree with that?
Jarett DeAngelis:
Yeah. The idea here is to be as low friction as possible, especially with respect to process. Because this machine doesn't have any connectivity to the rest of the Moffitt network, it is simultaneously secure from interaction with anything in Moffitt that's not supposed to be there, and Moffitt is secure from it. It's a Science DMZ-like architecture insofar as there's going to be one main point of connectivity between this system and the rest of our infrastructure, and physically that's just an internet connection sitting right next to the Moffitt internet connection. That connection isn't necessarily going to be really fast. Then, from a software and application standpoint, we're leveraging Globus both for authentication for the whole system and for data transfer. Globus started out as data transfer software, essentially an implementation of parallel GridFTP, and evolved from there into a framework for doing collaborative computing across many different organizations. We're excited about leveraging that to reduce friction, in addition to the architecture, which lets us bring folks in as research demands call for it without all of the baggage that goes along with, quote unquote, main institutional account management.
Jessica StLouis:
Nice. Could you talk a little bit more about Globus for those in the audience who may not be as familiar with it? For researchers listening, is this something that's broadly accessible? Do researchers have to have their own licensing for it?
Jarett DeAngelis:
Yeah. Globus is a project of the University of Chicago, and it's been around quite a while. As I was saying earlier, it started out as an implementation of GridFTP and has blossomed into several other roles at the same time. It is definitely accessible to folks who don't have institutional accounts. Anybody can create a Globus ID and use it with the personal endpoint software. So if I were to go download that, I could put it on my desktop or laptop, create an identity associated with my institutional email or a Gmail account or whatever, and start accessing data that's been shared as a Globus collection. When I do that, I'm presenting the credentials I created in Globus to the institutional Globus Connect server that the organization is using to share its data and do data transfer back and forth. If I have permission to access it, I can get in and do whatever I have permission for, and if I don't, my access gets shut down. So yes, anybody can use it. I think it's probably at its most powerful when you have a real Globus Connect server deployed in your organization and a subscription with Globus, because that gets you access to a lot of services you wouldn't otherwise have.
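[Editor's note: for readers who want a concrete picture of what driving Globus programmatically looks like, here is a minimal sketch using the Globus Python SDK (globus-sdk) to log in with a Globus identity and submit a transfer between two collections. The client ID, collection UUIDs, and paths are placeholders for illustration, not Moffitt's actual configuration.]

```python
# Minimal sketch: authenticate with a Globus identity and submit a transfer.
# Requires `pip install globus-sdk`. All IDs and paths below are placeholders.
import globus_sdk

CLIENT_ID = "YOUR-NATIVE-APP-CLIENT-ID"   # app registered at developers.globus.org
SRC_COLLECTION = "SOURCE-COLLECTION-UUID"  # e.g. an institutional share
DST_COLLECTION = "DEST-COLLECTION-UUID"    # e.g. a collaborator's endpoint

# Interactive login: the user visits a URL and pastes back a one-time code.
auth_client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
auth_client.oauth2_start_flow()
print("Log in at:", auth_client.oauth2_get_authorize_url())
tokens = auth_client.oauth2_exchange_code_for_tokens(input("Auth code: "))
transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

# Build and submit an asynchronous transfer between the two collections.
tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token)
)
tdata = globus_sdk.TransferData(tc, SRC_COLLECTION, DST_COLLECTION,
                                label="example collaboration share")
tdata.add_item("/shared/results/run01/", "/incoming/run01/", recursive=True)
task = tc.submit_transfer(tdata)
print("Task ID:", task["task_id"])  # progress is trackable in the Globus web app
```

Access control works the way Jarett describes: the transfer only succeeds if the authenticated identity has permission on both collections; otherwise the service denies it.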
Jessica StLouis:
Nice. Shane, I want to bring you in here, because this is where the experience really shows up. How does this new approach change how someone actually accesses HPC resources day to day?
Shane Corder:
Well, from the Collaborative Computing Center side of things, we're definitely going to have a new model of HPC access, mostly for the folks at Moffitt, since most are used to just using institutional resources. I think being able to collaborate with outside people and outside institutions is going to be very powerful. Just at SC this year, when we mentioned the Collaborative Computing Center to our colleagues and fellow HPC nerds at other cancer institutes, they were very excited to hear about this development, and they think it's truly what the space needs. We're in science; we should be collaborating, sharing data, and doing all the things that advance science, rather than keeping all of that data, all of the compute resources, and all the knowledge collapsed inside one institution. Obviously, that model has worked in many instances, but I think collaboration is the next realm. We're offering things like Globus, as Jarett said, to ease the authentication piece between institutions, but also Open OnDemand, which really breaks down a lot of barriers for users who are new to HPC and scientific computing, which by and large uses Linux as the back end. A lot of times you're dealing with schedulers, and different institutions use different schedulers, so there's a lot for researchers and users to learn. This Open OnDemand implementation, which we also have on our other two clusters, really breaks down the barriers and makes accessing HPC resources much less of a hassle, with much less of a learning curve. You have a web interface, you find the thing you want to run, you click on it, you hit go, and then you have access to whatever resources you requested on the system. Going the Open OnDemand route opens a lot of doors for a lot of people, or opens those doors quicker, I should say, for folks to get on HPC, start running, and make an impact with the science they're doing.
Announcement:
Are you enjoying the conversation? We'd love to hear from you. Please subscribe to the podcast and give us a rating. It helps other people find and join the conversation. If you've got speaker or topic ideas, we'd love to hear those too. You can send them in a podcast review.
Jessica StLouis:
This makes sense, and it truly is impactful, Shane, hearing you say breaking down the barriers, decreasing the learning curve, and opening the door quicker. Could you dive a little deeper into Open OnDemand, perhaps talking about what makes it more approachable or how it simplifies the HPC experience for those who aren't as familiar with HPC systems?
Shane Corder:
Absolutely. Open OnDemand is a web-based portal for accessing HPC systems. You log into a web portal that takes care of your authentication into the cluster, so it essentially gives you the same credentials, authentication, and access to HPC resources that you'd have via the command line, but through the Open OnDemand web interface. The web apps are mostly built on Passenger, with an NGINX process in the back end doing fancy things like proxying. To give you an example: prior to Open OnDemand, if a user wanted to run RStudio, they would need to open a command-line terminal and start a Slurm job. They would have to build their own Slurm submission script or their own srun command, submit it to the cluster, and wait for it to get queued. Then they would have to log in via another terminal to start an SSH tunnel so they could get RStudio visible on their local system; without that SSH tunnel, they wouldn't have any means of accessing that RStudio session through their browser. Open OnDemand does all of that on the back end for you. Users don't see any of it. They simply click the button for RStudio or Galaxy or MATLAB or whatever, enter a few simple parameters, how many cores, how much memory, how long a runtime they need, and hit go. It builds everything on the back end and serves up an RStudio instance for them in their web browser. A lot of our apps are actually built on Apptainer containers. Instead of using Lmod or modules to load a static system install of, say, RStudio or MATLAB, these Apptainer containers build the newest and greatest RStudio version and all of its requirements and dependencies into a container, and then serve that container directly through the Open OnDemand interface. So it's really powerful. You can offer many different options and capabilities and really customize it for the particular system you're running it on. I could go on and on about Open OnDemand; I don't want to take up the entire time with it.
Jarett DeAngelis:
But it's a great project. There's a lot to say.
Shane Corder:
There is, yeah, you're right. And the community is fantastic. There's been a ton of development on apps, which get shared out to the community, so we don't have to reinvent the wheel or collectively bang our heads too hard against any walls to get a particular app running through Open OnDemand. It's an amazing resource for users, and it's super handy for engineers and administrators like myself in breaking down those barriers, because that's really what I'm here to do: enable scientists to run on these systems. Sure, I care for, feed, and water the systems, but at the end of the day, it's about helping researchers get their work done and do the science they're at Moffitt for. Anything I can do to get them to that point and make it easier for them, I call a win. And Open OnDemand, absolutely, without a doubt: it shows up in our accounting database and historical data about jobs run on the system. Open OnDemand is powerful, it's useful, and it clearly makes an impact.
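[Editor's note: to make Shane's RStudio example concrete, here is a rough sketch, in Python for convenience, of the manual steps Open OnDemand automates: composing a Slurm batch script that launches RStudio Server from an Apptainer container, submitting it, and then tunneling to the compute node. The resource values, container path, and hostnames are hypothetical, not Moffitt's actual setup.]

```python
# Sketch of the manual workflow Open OnDemand hides from users.
# Assumes a Slurm cluster with `sbatch` on PATH; all names are illustrative.
import subprocess
import textwrap

job_script = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH --job-name=rstudio
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=16G
    #SBATCH --time=04:00:00
    # Launch RStudio Server from an Apptainer container (path is illustrative)
    apptainer exec /shared/containers/rstudio.sif rserver --www-port 8787
""")

# sbatch reads the script from stdin; --parsable prints just the job ID
result = subprocess.run(
    ["sbatch", "--parsable"],
    input=job_script, text=True, capture_output=True, check=True,
)
job_id = result.stdout.strip()
print(f"Submitted job {job_id}")

# Once Slurm reports the job's node (e.g. `squeue -j <id> -o %N`), the user
# would then open the SSH tunnel by hand -- the step OnDemand automates:
#   ssh -N -L 8787:<compute-node>:8787 user@login.cluster.example.org
# and finally browse to http://localhost:8787 to reach the RStudio session.
```

With Open OnDemand, all of the above collapses into a web form and a "Launch" button, which is exactly the friction reduction Shane and Jarett describe.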
Jessica StLouis:
Excellent. This is quite encouraging. Thank you for that example of the user story, and for helping researchers actually enable their science. Jarett, I was wondering, could you share another use case where this made collaboration or data sharing easier?
Jarett DeAngelis:
Just to clarify, by "this" do you mean Open OnDemand or Globus?
Jessica StLouis:
Either one, really; anything that came out of building this new HPC system.
Jarett DeAngelis:
Sure. Well, one of the things we're definitely trying to improve, and Shane's been at the leading edge of this, and we're very grateful for it, is the general accessibility of HPC to our researchers. That's a prime motivating factor behind implementing Open OnDemand on our clusters: to make HPC resources, software, and so on as accessible and inviting to use as possible. A lot of people may not be familiar with working on the command line, but they've got a lot of great ideas and really want to find a way to put them into practice. When you've got a web interface that presents all of those capabilities to the user in a really friendly way, that helps you get more science done faster, because it means less time figuring out what you're going to do, or how you're going to do it, and more time actually interacting with the system, submitting jobs, and getting work done. That's one of the reasons Open OnDemand is going to be a central feature of the Collaborative Computing Center: we can bring collaborators in from around the world, authenticate them with Globus, and have them immediately see a friendly list of applications they can use at a moment's notice, at the drop of a hat, whether that's RStudio or Visual Studio Code or any number of other things we've got set up and waiting for folks to use. That's enormously powerful, because it puts us in a position to offer the power of HPC to pretty much anybody who knows how to use a web browser, and that's a lot of people. So this is another facet of the project that we're really excited about.
Shane Corder:
I would also add, Jarett, that we have researchers literally building their own apps for Open OnDemand, which is crazy awesome. We've got this set of general science tools in there, MATLAB, Galaxy, RStudio, VS Code, a shell terminal, job submission, all of that good stuff. But we've had researchers build apps on the Open OnDemand framework that serve things up directly to physicians. What that means is, basically, a researcher builds a thing and gives access to a physician who is doing patient care, or at least working in research terms, and that gives physicians who have never touched HPC a day in their life a way to interact with HPC. That is massive. That's huge. That's not something a lot of folks are doing or have made happen. So yeah, there's a ton of amazing and exciting things happening with all of our implementations of Open OnDemand.
Jarett DeAngelis:
Yep. Giving physician-scientists access that fast to research computing resources is something we're really happy with, and it has enormous potential to, well, you used the phrase reducing friction earlier, to really reduce friction in the process of evaluating how a clinical trial is working, or any number of other things. We're really excited about that. And looking to the future, the reason our group is called Scientific Computing, as opposed to just Research Computing, is that when the group was originally founded, it was with the vision that we would eventually be able to serve clinical use cases in addition to research ones. The fact that we can trial that a little bit, by having physician-scientists who are doing clinical trials access these resources and give us feedback on how user-friendly and easy to use they are in the course of their working day, sets us up well to serve clinical use cases in the future with other HPC resources that are more clinically focused and set up specifically for clinical access. That's a whole other world we could talk about in a different podcast, but there's definitely a lot coming on the horizon in the clinical world that looks an awful lot like HPC, whether that's ML and AI applications, imaging applications, or any number of other things. Increasingly, we need more and more compute, more and more storage, and the ability to use it as effectively as possible. I think we're blazing some trails for that in Scientific Computing at Moffitt, and we're pretty stoked about it.
Jessica StLouis:
Yeah, that's great. It's truly inspiring to see how this technology is helping advance the fight against cancer and enabling physicians, plus dual-expertise physician-scientists, to improve patient care and outcomes. I loved hearing that you're getting user feedback; that's fantastic for shaping future HPC resources. And that brings me to the next thing I wanted to talk about: the future. This has been a great conversation, very helpful and insightful, and I wanted to end with: what are you both looking forward to most at Moffitt Cancer Center, and what do you think is coming in the future?
Shane Corder:
Oh boy. Well, new toys are always cool, right? New shiny HPC systems are always great. But I would say my biggest goal is to better utilize the things we already have within our grasp. I want to tap every possible bit of power out of the systems we have, make them as useful as humanly possible, and keep making them more accessible to more people, not just inside research. As Jarett said, as we go into the clinical side, we have other departments at Moffitt that, for lack of a better term, may be suffering in silence, maybe because they don't know what they don't know: that we have all of this HPC infrastructure at Moffitt that could make their lives a lot easier and their science a lot faster. So it's about putting out all the feelers and doing the internal marketing, I guess, to bring as many people as humanly possible onto the HPC systems at Moffitt so they can benefit from them. I want to do that with our current stuff, but we definitely want to keep an eye on the future, on what's coming down the pike technology-wise that's going to benefit our researchers. As a technology nerd and an HPC nerd, that's always in the back of your head: where are the new shiny toys, and how can they help me do X? It's an exciting time with all of the new technologies, especially GPUs and accelerators, and of course quantum and all of those things. All of that's very exciting and something we're always going to keep in mind. But my goal for myself is to keep pushing toward the best efficiency and the best experience for my users, and to build anything and everything they need to ultimately kick cancer in the rear end.
Jessica StLouis:
Excellent. That's a great call, Shane. How about you, Jarett?
Jarett DeAngelis:
Yeah, I have to agree with Shane about taking better advantage of the stuff we've already got, and, in addition, growing what we have now. One of the things I'm really excited about is expanding our capacity for local AI and ML inference on our existing hardware and systems. There's a huge amount of potential for LLMs, VLMs, and other models in that space, for all kinds of things, including automation. There are opportunities in bioinformatics, because genomic data is just text, and that's something LLMs are really good at working with, so you can get them to do all kinds of really interesting analysis of genetic data. Yes, you can do that with foundation models in the cloud, with all of the friction we talked about earlier with respect to PHI and so on. Or you can build that kind of capacity out yourself, in your own data centers, run it yourself, and get really great results in a really secure way. We're looking at expanding our capacity in that space now, both in the Collaborative Computing Center and in our internal compute resources: similar capacity in two different magisteria, I don't know exactly what to call them, but two different networks, basically. We also have a big digital pathology project currently underway at Moffitt, and whatever we can do to support that, we're of course going to be really excited about. There are all kinds of exciting things happening right now in cancer research that we get to participate in and help grow as fast as possible. So despite a number of industry challenges, and scientific-world challenges here and there, I still think the future looks incredibly bright for cancer research, and I'm really glad we get to be a part of it.
Jessica StLouis:
Thanks, Jarett. I just want to say how grateful I am for this time together. It's clear that the work you're both doing will have a profound impact on many people and on cancer research as a whole. Thank you so much for joining us on Trends from the Trenches, and I look forward to our next conversation, as I definitely think we could dive deeper into many of these topics. I'm your guest host, Jessica StLouis, from BioTeam, a life science IT consulting firm at the intersection of life science, data, and technology. To learn more about BioTeam and our work, please visit bioteam.net. If you have any questions or would like more information about Shane's work, Jarett's work, or the work we do at BioTeam, please send me an email at info@bioteam.net and we'll make sure the right person gets the info they need. Thanks again for listening.
Announcement:
Thank you for listening to BioIT World's Trends from the Trenches podcast.