00:00
All right, I guess we'll get started here. Uh, thanks for coming. Last breakout of the day. Good job, you did it. Congratulations. Give yourself a hand. Yes. Thank you, yes.
00:13
Uh, our session is called Feeding the Beast. Uh, I'm Ed Sue. You've got Kevin Parker here as well. We're gonna do a customer success story with Silicon Labs, and then we've got, uh, Intel to speak a little bit about the solution. Um, well, well, see, there's proof.
00:28
It's trust but verify, right? That's what it is, right. Um, so why did we call our session Feeding the Beast, right? What we find with a lot of customers is that they're so focused when they're building their AI infrastructures that they're concerned about, like, we gotta get those GPUs.
00:44
How many can we get, right? And how fast can we get them, right? And usually the answer is like as many as we can afford and as fast as possible, right? But then when they're actually deploying them, they end up being constrained in these real-world production deployments by storage, because storage ends up being a little bit more
01:04
complicated than just the bandwidth-per-second metric that they've specced out, right? Because if you look at these actual AI workflows, different parts of them have different characteristics, right? The needs of, you know, training versus fine-tuning versus inference, and even inference
01:26
itself has changed so much in the last 6 months. Right now with these reasoning models, these mixture-of-experts models, the demands on storage are drastically different, potentially even from when you actually specced out that system, right? And so this complexity is really hampering
01:42
your ability to really put your AI infrastructure to use, right? On top of that, like we all know, the amount of data that's needed and required to actually feed these pipelines is growing and growing, right? Not just multiple copies of data, but now different types of data, multimodal, right? Audio, video, right?
02:02
Documents, right? All of these are being ingested in that training pipeline and also those RAG pipelines, right? And so what's it causing? It's causing much more complexity, right? You thought you had sort of specced it all out, and now you've got more things to worry about that can cause delays in that pipeline.
02:20
And the thing is, like, it's gonna change, right? No matter what you've done, even if you made all the right decisions, in 6 months, in a year, in 2 years, it's gonna change drastically. I mean, just look at DeepSeek, you know, 6 months ago when everyone was freaking out, like, did we make all the wrong decisions?
02:39
Turns out, oh, we just discovered sort of a new way to do things, but those things are coming, right? So that's why when you're building an infrastructure to accelerate production, these are the things that you have to worry about, right? Having an infrastructure that can give you that performance regardless of what parts of the workflow you need,
03:00
or basically what parts of the workflow are now important to you. Having something that can handle all of these demands, right? Parallel data architectures, right? Regardless of whether it's small files or large files, regardless of whether it's write-heavy or read-heavy, or ideally even metadata-dependent, right?
03:21
On top of that, like that whole process of actually even just preparing all that data, you know, what can we do to accelerate just that data cleansing, that data ingestion, right? Making your data scientists more productive. And then finally, again, keeping it simple, like it's, we don't want it to be a science project, right? That's like time wasted,
03:42
energy wasted, resources wasted. If you can have something that can give you that performance, simply and easily, like you guys, your data scientists, your IT guys that are supporting the projects can do more useful things. And then finally, resiliency, right? Because it's all enterprise now,
04:01
right? It's no longer science projects, you're deploying AI for enterprise use and so therefore, you need that resiliency, you need it to be dependable, especially as you move these projects into production, they need to stay up all the time. And then on top of that, as technology improves, like we have the ability to improve that
04:20
infrastructure for you non-disruptively. Yesterday you heard about the new R2 blades in FlashBlade, right? You can now basically take your existing FlashBlade that's powering your AI infrastructure and non-disruptively upgrade it such that you get better performance,
04:37
but without taking down that AI infrastructure. Again, keeping that beast fed; that beast doesn't have to go to sleep while you're improving your infrastructure, right? That's what we bring to the table. Right? And so this is what FlashBlade was built for, right?
04:50
Handling all of these different workloads, all these different workload characteristics. Small files, large files, random or sequential, we kind of really don't care. We can handle all of them, giving you low latency, high performance. And especially, like, super great metadata performance. Right? And then on top of that, giving you that
05:12
predictability. As you need to expand either more capacity or more performance or even both, we can basically do that depending on where your AI workflow challenges take you. And on top of that, the gravy is that we're super energy-efficient, right? More performance with less energy consumption and less rack space,
05:35
right? So giving you more power and rack space for the GPUs you actually want to run, right? So let's look at the AI workflow. Starting with data ingesting and cleansing, right? Again, this is where sort of the dirty work happens, right? People don't really think about it,
05:56
but there's a lot of data prep, trying to get all your data ready so you can feed it into those models, right? And so what do you do? You usually take a data set and you have to cleanse it, change permissions, transform it such that it's in a format ready to be ingested by your workflow.
06:13
Um, and so often you'll take those data sets and you want to keep them around because they're in a state that's already prepared. So being able to keep all those available and ready at hand, such that if you need to go back to them, as you very well may, they're readily available, right? You don't have to wait to do them again or
06:32
basically move them from one tier to another, right? That's just gonna cause delays in your whole workflow. And so one of the things that we've developed is a tool called the RapidFile Toolkit, right? And essentially what we've done here is we've taken standard Unix utilities and basically rewritten them such that they're much more
06:55
multi-threaded, we've parallelized them. Right, so taking simple commands like find, that becomes pfind, tar becomes ptar, chown becomes pchown, right? And we've done this such that we can take advantage of the massive parallelism of FlashBlade, right? And what does this do for you?
07:13
It takes that data cleansing and data loading work, which is something like 40% of the time data scientists are spending, and basically accelerates it, right? These processes end up being 50 times, 100 times faster.
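To make the idea concrete, here's a minimal sketch of that kind of parallelization, not the actual RapidFile Toolkit: fan a per-file operation out across many worker threads so a parallel file system sees lots of concurrent requests instead of one serial stream. The mount path, permission mode, and thread count below are hypothetical.

# Minimal sketch of parallelized file utilities: list files serially (the
# "find" part), then run the per-file work across many threads.
# The mount point, mode, and worker count are placeholder assumptions.
import os
from concurrent.futures import ThreadPoolExecutor

DATASET_ROOT = "/mnt/flashblade/raw_dataset"   # hypothetical NFS mount

def list_files(root):
    """Yield every file path under root (the 'find' stage)."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            yield os.path.join(dirpath, name)

def fix_permissions(path, mode=0o644):
    """Per-file work item -- the part worth parallelizing."""
    os.chmod(path, mode)
    return path

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=64) as pool:
        results = pool.map(fix_permissions, list_files(DATASET_ROOT))
        print(f"updated {sum(1 for _ in results)} files")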
07:34
So they can concentrate on the stuff you actually want to pay your data scientists to do, which is this whole part down here, the cool, sexy data science part, right? Oh, there's your picture. Right? So, again, this is what you want them to do: the model training, the model fine-tuning.
07:54
Right. And so, again, all of these different parts of that workflow have different characteristics. Some of them we're very familiar with because they're really traditional analytics, you know, Spark, Starburst, Dremio, that sort of thing, the lakehouse. But at the end of the day, regardless of the different characteristics of these workloads,
08:14
to us, they're just essentially a different workload, right? There's not necessarily anything inherently special about it, right? Some of them are more heavy on metadata, some of them are more heavy on reads or writes or whatever, but we sort of break them down to their component parts and we try to basically just
08:32
give you the best performance for all of them. Right? And so one big example is, you know, the repeated reads for training. As you train on that data set, you basically read it and reread it, right, for however many epochs it takes to get to the level of accuracy you're looking for.
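Here's a minimal sketch of what that re-reading looks like from the storage side, assuming a PyTorch-style dataset over files on a shared mount; the path, file format, and epoch count are placeholders, not anything specific to a real deployment.

# Minimal sketch of why training re-reads the same data: every epoch
# streams the full data set back through the storage layer.
import glob
import torch
from torch.utils.data import Dataset, DataLoader

class FileDataset(Dataset):
    def __init__(self, root):
        self.paths = sorted(glob.glob(f"{root}/*.pt"))

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        # Each access is a read against shared storage, not a local cache.
        return torch.load(self.paths[idx])

loader = DataLoader(FileDataset("/mnt/flashblade/train"), batch_size=64,
                    num_workers=8, shuffle=True)

for epoch in range(20):             # every epoch re-reads the whole data set
    for batch in loader:
        pass                        # forward/backward pass would go here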
08:48
The thing is, as these AI infrastructures grow and grow, you've got more data scientists with more projects, right? And so maybe you designed it for a certain data set that you're gonna train on, and that's how well your data storage is supposed to perform,
09:10
but now you've got multiple things going at the same time. What you want to make sure is that you don't run into problems when all of a sudden you have a hot data set that's much bigger than you anticipated. Right? So if you've got a data storage architecture where you've got essentially a hot cache, and it performs well as long as you stay in that cache,
09:29
but if you go larger, then all of a sudden everything goes off the cliff, that's gonna impact your entire workflow, right? And so that's what we bring to the table: we don't have that concept or restriction of just a hot cache, right? Basically, the entire cluster is designed to give you all the performance you need.
09:48
We natively load-balance all of the requests across the entire cluster and use its resources to give you the best performance we can. And so again, looking at that AI workflow, right? You're taking that data from all of the sources, and then you are transforming it, preparing it, feeding it into that model for training and evaluation,
10:12
and eventually you get to a point where you can deploy it. And the point here really is that we at Pure Storage have solutions that can fit into every single part of this workflow. More often than not, it's FlashBlade, right? Because of the highly parallelized nature, and you get that performance,
10:32
right, and shared data natively. But depending on how you treat your data and what your workflow looks like, it could be FlashArray, it could be Portworx for more native containerization, it could be Cloud Block Store, right, to enable more hybrid approaches. Right? So we have solutions to fit into your workflow
10:54
and accelerate all parts of it. So with that, I'd like to welcome Mike Webb from Silicon Labs. He can talk about how he's using Pure Storage in his AI workflow. Great, thanks. OK, great. Yeah, I'm Mike Webb from Silicon Labs.
11:10
Um, I'm an application developer, previously more on the infrastructure side but recently more into AI and Gen AI development. I'm on a cloud team, which is a little bit different at a semiconductor company; you know, it's essentially just a software team. Everyone else is building chips. But, um, I wanna go to the next slide.
11:33
Oh yeah, thank you. So I wanna go a little bit more specifically into how we built an in-house RAG, and I know we've probably all heard about how someone built an in-house RAG about 40 times at this conference. But this is ours, and I wanted to go into some of the details of how FlashBlade helped us in this
11:56
process of taking data and building it. So basically the use case here is assisting our chip designers, getting them the information they need quicker to help them make design decisions or help them iterate on their chip designs. In terms of public PDFs, we have over 3,000 to 4,000 huge PDFs, and it's very hard to find
12:22
information from these, and we have much more information in other formats that we want to make easier to find. So we built an in-house RAG. Just to do a brief recap of how a RAG pipeline works: you have a lot of I/O operations, like loading from the file system,
12:48
you have conversion operations -- you know, sometimes you have an image that you want to summarize, sometimes you have a PDF with tables or graphs where you want to extract that data into text form for lookup, and potentially other conversions that need to be done. And then you chunk that information --
13:12
essentially that's just making it into a digestible size depending on your embedding model, because different embedding models perform differently for different sizes or types of data. And eventually you input it into the vector database, and you can use a lot of different vector databases for these kinds of RAG systems.
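To make those stages concrete, here's a minimal sketch of the flow he's describing -- load, convert, chunk, embed, insert into a vector store. The chunk size is an illustrative number, and the helpers (convert_to_text, embed_batch, vector_store.insert) are hypothetical stand-ins, since the specific converter, embedding model, and vector database Silicon Labs used aren't named here.

# Minimal sketch of the ingestion stages: load -> convert -> chunk ->
# embed -> insert into a vector store. Helper callables are placeholders.
from pathlib import Path

CHUNK_TOKENS = 512          # tuned per embedding model (one of the "knobs")

def chunk(text, size=CHUNK_TOKENS):
    """Split text into roughly fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def ingest(doc_dir, convert_to_text, embed_batch, vector_store):
    for path in Path(doc_dir).glob("*.pdf"):
        text = convert_to_text(path)          # PDF/image -> text (slow stage)
        chunks = chunk(text)                  # chunking stage
        vectors = embed_batch(chunks)         # embedding stage
        for c, v in zip(chunks, vectors):
            vector_store.insert(vector=v, text=c, source=str(path))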
13:35
But OK, so there's a lot of data manipulation and data processing here. Conversion takes a while, chunking takes a while, embedding takes a while, and each one of those knobs, if you turn them, dramatically changes the performance of the RAG system. So you want to try to optimize these different things.
13:56
You wanna optimize your conversions, you wanna optimize your chunking algorithm, you wanna optimize your embedding. Those are the bottlenecks I just went over. And we found it very convenient to use FlashBlade S3 to save the derived data, the artifacts from each of those stages. So if you tweak any of those knobs, if you
14:16
change your chunking scheme, your conversion scheme, or your embedding, you don't have to restart from the beginning of your pipeline. Um, I have one more slide. Um, so we chose FlashBlade S3 -- sorry, sure, go ahead. I'm just curious, uh, how do you
14:41
evaluate it, like, how are you assessing it? For a basic RAG, it's a great question -- and it's a very hard problem too. For a basic RAG, the gold standard is you have an evaluation data set with ground truth, so you need labeled ground-truth answers, and you need a certain amount of them,
15:03
and then you can get various metrics from that. As for how we did it -- yeah, we have a ground-truth data set, we have an evaluation data set. But yeah, it's a great question. So I just have one more slide here. I just wanted to say, so for the
15:23
checkpointing, originally our initial implementation was storing this data to S3, which, you know, is in the cloud -- it's the easiest approach, really -- but you have a lot of problems, right? You're moving data all the way to the cloud, you're having to deal with the bandwidth limitations of your connection, and network limitations.
15:46
And you're also having to deal with, you know, whatever speed you're getting out of AWS S3. So we've kind of redesigned our pipeline to run on-prem so we can take advantage of the local S3 endpoint that FlashBlade gave us, and because it's hosted right next to the application, we can write these checkpoints to it at 3 gigabytes per second.
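The nice part of an S3-compatible endpoint is that the client code barely changes: you just point a standard S3 SDK at the on-prem address. Here's a minimal sketch; the endpoint URL, bucket name, object keys, and credentials are hypothetical, not Silicon Labs' actual setup.

# Minimal sketch of writing pipeline artifacts to an on-prem, S3-compatible
# endpoint instead of AWS. Endpoint, bucket, and credentials are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://flashblade.example.internal",  # local S3 endpoint
    aws_access_key_id="...",
    aws_secret_access_key="...",
)

# Checkpoint a derived artifact (e.g., the chunked text for one document)
s3.upload_file("chunks/datasheet_001.json", "rag-artifacts",
               "chunks/datasheet_001.json")

# Later stages read it back instead of re-running conversion and chunking
s3.download_file("rag-artifacts", "chunks/datasheet_001.json",
                 "/tmp/datasheet_001.json")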
16:09
And it was just dramatic -- dramatically faster writes, dramatically faster reads from our checkpoints, so we can iterate on these pipelines faster. The other thing is, obviously it reduces the cloud spend. You don't have to pay for S3, you don't have to pay for moving that data in and out, and it keeps all of our data local, so whenever we're iterating on this RAG
16:30
pipeline, none of our sensitive data, our sensitive design data, has to leave our data center. Um, you can also use S3-backed Milvus for this too -- this is a little different -- if you want to actually do query-time improvements. And then the last point I had was
16:50
that, personally, I find S3 is a very nice, convenient interface for, you know, doing local development on your laptop trying to iterate on these pipelines, or sharing an S3 endpoint, or going into the console and making a key for someone and giving them permissions to the same S3 bucket that everyone else is working on, so you can iterate and improve and collaborate without having to,
17:15
you know, get SSH access to some VM or set up some more complicated permission scheme. Um, so that's it for my use case. Thank you. Great.
17:30
You can keep the mic. I'll take the clicker. Awesome. OK. So, we're kind of joking, RAG, you know, retrieval-augmented generative AI, that's a mouthful. We were going, how come "towel" didn't get accepted? But I guess we're going with RAG. It's really easy.
17:46
So we'll talk a little bit about inference and RAG. And first, we want to talk about some of the challenges that you see when you start looking at what you need in a storage platform for a RAG platform, right? So you're gonna take your LLM, and then you have to take your corporate data,
18:01
your proprietary data, and put it in the database and vectorize it, right, for your similarity search. And sometimes that can take your source data and expand it out to up to about 10 times the size, right?
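Some rough back-of-the-envelope arithmetic shows why vectorized data can balloon like that; the chunk size, embedding dimension, and overhead factor below are illustrative assumptions, not measurements from this deployment.

# Back-of-the-envelope for why vectorized data can be several times the
# source size. All numbers here are illustrative assumptions.
source_bytes   = 1e12          # 1 TB of raw text
bytes_per_tok  = 4             # rough average for English text
chunk_tokens   = 500
dims           = 1536          # embedding dimension
bytes_per_dim  = 4             # float32

tokens  = source_bytes / bytes_per_tok
chunks  = tokens / chunk_tokens
vectors = chunks * dims * bytes_per_dim          # raw embedding storage
total   = vectors + source_bytes                 # vectors + stored chunk text
total  *= 2                                      # index + metadata overhead (assumed)

print(f"{total / source_bytes:.1f}x the source data")   # roughly 8x here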
18:18
So as you continue to ingest your corporate data -- and of course people are like, "Oh, that's cool, here's some more data" -- you keep putting it in. You need a platform that can scale out with you on the capacity side, and also, as you take it from your dev/test environment out into production, you need to be able to scale on the performance side. So you need a platform that can handle both.
18:35
At the same time, you need a platform that doesn't take a lot of care and feeding, right? You don't want to have to continue to add different nodes to it and do upgrades, and then you have different generations of nodes in your cluster. You don't need that chaos in your life when you're dealing with the chaos of building your infrastructure and your pipeline above it,
18:54
right? You don't need your infrastructure adding to that chaos. You need something simple. Um, that's what FlashBlade brings to the table for us. And at the same time, you need performance. So let's talk about some of the performance gains that we can see by using FlashBlade in RAG.
19:11
Um, the first is the ingest. Often the vector database is stored on the local SSDs of the GPU server, right? And now you have multiple copies of those, adding the complexity of multiple databases you have to deal with. Put it out on shared storage and people go, well, it's gonna be slower if I have to use shared
19:30
storage. Actually, when we compared the local SSD to FlashBlade, we were 36% faster on FlashBlade writes -- ingestion writing out to the database -- than on the internal SSD. So it's like, cool, that's great. That's one node. When we actually moved out to two nodes, we continued to keep that same performance with
19:53
multiple nodes. And so we can actually scale out as you add GPUs, and you still have one database that you're having to manage, but multiple GPUs pulling from that single database. So again, performance is there, simplicity in management is there, both, while feeding that vector database.
20:12
So that's on the ingest side. Now, when we start talking about the query side, you know, the same is applicable on the reads, right? As we continue to scale out the number of nodes, the number of queries that we can satisfy and the reads are able to scale with the
20:30
number of GPUs you add -- we're able to continue to grow that performance. So again, all very simple: one single database you're dealing with, instead of copies on each of the separate nodes, right? So, phenomenal ease of use and performance, and again, you can scale it on the performance side, or you can scale it on capacity, or,
20:55
as I mentioned earlier, both at the same time. Um, next, we wanna hand it over to -- I almost got this right -- Xujin, and he's gonna talk about the innovations that Intel is doing. Thank you. OK, so I'm Xujin.
21:12
I'm an AI software solutions engineer. I cover mostly our western region. I'm in the field, so, similar to Ed, a field solutions architect, and I'm here to talk about the enterprise AI-in-a-box solution. So we just heard from Mark about how
21:31
flash storage can be really helpful in terms of feeding the beast. We talked about RAG pipelines, but a lot of times one of the biggest challenges that enterprises face is either cost or a lack of resources to put all of this together. This is the solution, and I'm gonna talk about why it matters and how it helps customers get started easily. It's a turnkey deployment.
21:55
Yep. Cool. So even before I get into the solution -- which runs on top of the Intel Gaudi AI accelerator and Pure Storage FlashArray or FlashBlade -- Iterate.ai is an ISV partner of Intel. They're a strategic partner to us, and basically they have a generative AI tooling solution called Interplay, so they can create different use cases.
22:20
It's a drag-and-drop AI platform. So now think of the RAG pipeline: you have the embedding, you have the vector DB. How can I bring all of that together? They provide an easy solution for that. You just pull the data pipes together, and then you can have an agentic workflow, a RAG workflow, a computer vision workflow. It doesn't only apply to generative AI --
22:40
they have solutions in computer vision, intelligent document processing, etc. This is what the solution looks like. Interplay is the main underlying technology -- that's their proprietary technology. The one solution that I'm gonna be particularly talking about is the Generate solution. It is a generative AI assistant; it allows
23:02
customers to get started on turnkey agentic workflows and RAG workflows, and I'll get a little bit into the overview and what the features are. So starting with RAG -- this on the left side that you see, this is Iterate's solution. Basically, instead of me having to put all of these pieces together, they already provide a RAG
23:25
pipeline for you to consume, and the best part is it runs on Intel Gaudi. It can run on Nvidia GPUs, but I'm here to talk about Intel, so I'm gonna be focusing on Intel. And the reason we like Gaudi is because of its 128 GB of HBM memory. So the system has 8 cards.
23:47
I can fit up to 8 models if I wanted to -- for up to a 30-to-40-billion-parameter model, I can fit it on one card easily. So now I can have multiple models serving different workflows.
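Some rough sizing math on why a model that size fits on a single 128 GB card; the precision and runtime-overhead figures are assumptions for illustration, not Gaudi-specific measurements.

# Rough sizing of why a ~40B-parameter model fits in 128 GB of HBM on one
# accelerator card. Precision and overhead figures are assumptions.
params          = 40e9
bytes_per_param = 2            # bf16/fp16 weights
weights_gb      = params * bytes_per_param / 1e9     # ~80 GB
kv_and_runtime  = 25           # assumed GB for KV cache, activations, runtime
print(weights_gb + kv_and_runtime, "GB of 128 GB HBM")  # ~105 GB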
24:08
The second workflow they support is the agentic AI workflow. So now it's not just RAG -- you're adding all the tooling mechanisms, plus the ability to use reasoning models to support that workflow. All of that tooling and those workflows are pre-built into this application, so customers can get started more easily; it provides a turnkey solution. OK, let's get into the actual meat and potatoes of this. So what is the solution?
24:29
So this is what the Generate application would look like. There are 3 main things here. The first is document search, which is the RAG-based workflow. So basically you have your corporate data sitting within your Pure Storage, you run the application on top of Intel Gaudi, and the document search
24:46
functionality is available to you. The second feature is the agentic workflow, which is the database search. So now you have your database running within your Pure Storage appliance, and what I can do is write English queries that get converted into SQL queries and start getting results out of it.
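Iterate's actual implementation isn't shown here, but the general pattern is roughly this: hand the schema plus the English question to an LLM, get SQL back, and run it against the database. The schema, prompt, and the ask_llm callable below are placeholders.

# Generic sketch of the English-to-SQL pattern; not Iterate's implementation.
# `ask_llm` is a placeholder for whatever model endpoint you call.
import sqlite3

SCHEMA = "claims(claim_id, patient_id, service_code, amount, paid_date)"

def english_to_sql(question, ask_llm):
    prompt = (f"Given the table {SCHEMA}, write one read-only SQL query "
              f"that answers: {question}\nReturn only the SQL.")
    return ask_llm(prompt)

def answer(question, ask_llm, db_path="claims.db"):
    sql = english_to_sql(question, ask_llm)
    assert sql.lstrip().lower().startswith("select"), "read-only guardrail"
    with sqlite3.connect(db_path) as conn:
        return conn.execute(sql).fetchall()

# e.g. answer("What were total claim payments per month last year?", ask_llm)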
25:07
That solves the problem I mentioned earlier about the lack of resources, because usually with a large corporate data set you need SQL engineers to write these queries for you; the Iterate solution running on top of Intel takes care of that for you. Cool. OK, so as I mentioned,
25:29
these are some of the horizontal use cases, right? Agentic workflow, RAG workflow -- but what does that mean in real-life use cases? So here I'm gonna give you three examples of use cases in particular vertical segments: healthcare and financial services. So starting with financial services and the database search,
25:47
right? So think of an insurance company where they have a lot of claims going on, and a lot of times they need to do a lot of auditing and things like that. So how can I use the database search to feed all that data into this Generate application and start using it to analyze
26:07
payment trends, devise billing strategies -- these are the different cards you can see, and all of these are customizable according to the customer's needs. The second one is a really popular one that we have been seeing a lot in the market, especially in the healthcare industry segment, where there is a big cost associated with a patient coming in multiple times within
26:33
a month -- that's usually not a sign of good care being provided to that patient. So a lot of times the federal institutions that regulate this will take a look and say, hey, this patient has been coming into your hospital over and over. Why is this happening? Are you not providing good care?
26:53
What is going on? So with all that data that sits within your databases, right, they have a card created called mitigating patient malpractice. Now I can quickly just run that card, which will give me a list of all the services provided by the hospital,
27:13
all the patients that used those services, and then it'll give you a count and an analysis -- this patient came this many times, this was the care -- all this information in a matter of minutes, because of what this whole solution provides. Usually it would take someone a week to pull all the data together and then visualize it in a nice way.
27:34
So it basically boosts productivity for you. This is a pretty cool one as well -- this is more on the operational side of things for healthcare. So think of identifying growing procedures. If there are thousands of services being offered by the hospital,
27:56
how can I figure out that this service is the most useful and this one is not, for example? So based on that data, I can run that card. It will basically give me a report of all the bottlenecks -- where I can dig into patient time, how fast this patient was triaged and treated, how
28:16
long it took for this patient to receive the service, and what the cost associated with it was. It gives the hospital a way to optimize their services, right? So these are some examples of the workflows that have been created using this application. Unfortunately, the demo expo is closed.
28:35
We had a demo running on the solution there. So if there is interest, please let me know -- I'm happy to connect with you and show you a live demo of this as well. But yeah, this is the overarching solution. This is what the enterprise AI-in-a-box solution looks like.
28:50
It comes with networking, compute, and storage. Compute is of course Intel Gaudi AI accelerators. Networking is a top-of-rack switch, with Cisco switches. Storage, of course, is the FlashBlade. And then on top of all of this is the Iterate Generate application, which allows you to run generative AI tooling and different use cases based on your
29:11
customers' needs. Overall, it's a complete packaged solution. Right now the plan is for it to be sold through channel partners, and the starting entry point -- this is wrong, it shouldn't say 250K; they probably had more margin in there -- is gonna be closer to around 350K,
29:30
but it still allows customers to get started easily, so it's low cost, easy to get started. Uh, yeah, that's all. I think that's my last slide. Yeah. Cool, thank you. Perfect. Thank you, Xujin. So we're gonna talk a
29:51
little bit about the Pure Storage in there. We're gonna kinda talk about what that looks like, where it fits in your environment, and how we help you go from there. So here's kind of the product family, right? And it's really about how you start out with your first GPU,
30:11
right? You got a server -- maybe it's not an NVIDIA DGX, right? Maybe it's a Dell or an HP or a Supermicro, whatever it may be -- and you got a GPU in it, right? So this is kind of showing you that we can start out there, and when you go up to the NVIDIA-Certified systems, you know, we work fine in those environments.
30:30
We're certified for Nvidia cloud partners, so we have offerings there in the cloud as well. And I'm not from Canada, I'm from Montana, just for the record, but it may sound like I'm from Canada. So, for Nvidia, we have the BasePOD and SuperPOD. So as you grow,
30:47
we continue to have products. If you want to go the Cisco route, we have FlashStack, a Cisco Validated Design. So if that's your flavor, we can help you there. We can deploy that very easily. You can use what you have and add our stuff in. So we make it easy. If you want even easier, we have our AIRI product,
31:09
which rolls in the door as a turnkey solution. So we have lots of different ways to help you. In fact, just recently we've started adding solutions -- not just storage, but solutions as well. So a generative AI stack, where we have -- what's the first one out, the financial model, right?
31:28
What are the next ones we're doing? Healthcare and -- is it drug exploration? Yeah, drug discovery. So those are the next two. These are models that are designed and tuned and ready to go for your environment. You don't have to go build it all yourself,
31:46
right? And so that's a complete stack for you to deploy and go, right? So we can walk with you, we can run with you, and then we can get a whole bunch of people running together. And that's what our EXA is. It's really the huge, terabytes-per-second scale. It's extremely large in
32:08
size -- exabytes in scale, terabytes per second in performance. Most people aren't gonna use it, but the AI factories may. At the end of the day, we have products that help you optimize your TCO, right? For your large data sets, your large model libraries, we have the cost-optimized FlashBlade//E,
32:29
really inexpensive to get petabytes of storage. Then you start getting into the more middle-of-the-road HPC environments, AI environments, where you're doing your training at enterprise size: the FlashBlade//S, especially the new S R2 version. That's your workhorse in that environment.
32:50
And then when you start getting into AI factories that have, you know, thousands of GPUs, we have a product there. The thing is, we've been doing this for 10, 12 years -- hundreds of customers running AI on top of Pure Storage. We know the space, we excel in the space. We have great customers and
33:08
partners that understand our value, and we're going to market together. So a lot of good things happening all together. Last thing: don't forget to get a pair of legendary socks. I'm not sure what legendary means, but these are cool -- the Control-Delete socks. Legendary socks -- scan that.
33:26
And also, you can join our uh customer community by scanning that barcode. Um. So, you gotta end on a barcode to scan, right? I mean, it's kind of mandatory. So, um, that's what we got.