
Deploying SQL Server in Kubernetes

Transcript
00:01
Welcome to Pure Accelerate TechFest 2022. I'm Anthony Nocentino, and this session is Deploying SQL Server in Kubernetes, or more specifically, deploying stateful applications in the modern cloud and data center. So we're going to pivot a little bit on topic, not just SQL Server; we're also going to introduce maybe some open
00:23
source databases too, if that topic pops up. Today we're going to have a panel discussion. I have some good friends with me here at Pure Storage, and we're going to talk about some fun stateful applications inside the Kubernetes space. So let's go ahead and introduce yourselves. First, Chris, please. I'm Chris Adkin, a solutions architect based in the EMEA
00:47
region. My interests are mostly databases on Kubernetes and automation. Very cool. Jon? Hi, I'm Jon Owings, the director for cloud native strategy. I kind of give care and feeding to anything that's going into Kubernetes, especially databases. And I'm Anthony Nocentino, a principal field solution
01:15
architect at Pure Storage and a Microsoft MVP. I've been working with Kubernetes as long as it's been around, and with SQL Server for a lot longer than that. Has anyone seen the joke about the job application asking for 20 years of experience with a technology that hasn't existed that long? Anyway, the way we're going to do it today is kind of a Q&A panel with the group.
01:35
I have a couple of questions lined up. We're going to open it up to the panel and kind of jam out about what we're thinking about running stateful applications and SQL Server inside of Kubernetes. I think this one's a big softball to the team here: what do you see as the primary benefit of deploying SQL Server, or excuse me, deploying stateful applications, in Kubernetes? That's open to
01:53
anybody that wants to go first and grab it. I'll go. So one of the big benefits I see, and again this isn't just SQL Server, it's any database, is the way databases are evolving. They're evolving into
02:15
scale-out data platforms. Historically, SQL Server hasn't been like this, but with the advent of availability groups it is absolutely moving in that direction. So you can scale out your database engine via availability groups, and this scale-out, as opposed to scale-up, approach
02:42
absolutely plays into the sweet spot of Kubernetes. I think one of the things I'm excited about is the agility to do it fast. Right? I came from the days when we would order a server from the server company, it would show up, I would build it and then install SQL Server on it, SQL 2001 on it.
03:10
Right. And long gone are those days. It's just a line of code that says deploy this, and that's what I see as one of the biggest benefits: Kubernetes has taken that from the deployment of a VM, which is fast too, to basically starting a process, which is way faster.
03:33
I totally agree with both of those statements, and Jon, your story about ordering the server, getting the ticket or whatever it is to get that process started, and turning it into one line of code: when I talk to folks about SQL Server on Kubernetes, that's the number one thing. But I also add taking that code and parking it in a repository so you have that code versioned over
03:53
time and can keep track of changes and things like that. So you literally have the description of your platform over time in a repository, which I think provides exceptional value. Just to add one thing to dovetail onto the end of what Jon was saying: one of the really cool things you can do with Kubernetes is, via a deployment, you can change the image tag,
04:19
and that provides a really, really fast way of upgrading the version of the database engine. Yeah, that is super nice. I've taught a lot of classes on how to run SQL Server on Kubernetes and supported a bunch of environments, and when I show folks that exact trick, for lack
04:43
of a better term, of just swapping the tag out and rolling out a cumulative update, I jokingly tell folks that you'd probably still be trying to RDP into your Windows box to download the update to apply it in that scenario. It's bananas fast. And I'm going to correct myself, because it was SQL Server 2000 and I said 2001 for some reason; I don't want to lose all my street cred. Totally, we'll let that slide.
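As a rough illustration of what the panel is describing, here's a minimal sketch of a SQL Server StatefulSet. The names, storage size, and Secret reference are assumptions for this example rather than anything shown in the session; patching to a new cumulative update is then just a matter of changing the image tag and re-applying the manifest, and Kubernetes rolls the pod onto the new image.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mssql
spec:
  serviceName: mssql
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      containers:
      - name: mssql
        # Upgrading the database engine is just changing this tag and
        # re-applying the manifest.
        image: mcr.microsoft.com/mssql/server:2022-latest
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: MSSQL_SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql-secrets     # hypothetical Secret, sketched later
              key: sa-password
        volumeMounts:
        - name: mssql-data
          mountPath: /var/opt/mssql   # system and user databases live here
  volumeClaimTemplates:
  - metadata:
      name: mssql-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 100Gi
```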
05:03
I was going to crack a joke: I had done 3.5-inch floppies. Yes, that was Access 97 when I did that one. Cool. All right, any final thoughts on this topic, team, before we move to the next? I think there are a lot more benefits too, but those are the ones I think all of
05:23
us would call the big ones. Yeah, definitely infrastructure as code, or now, you know, data platform as code. That's super important. So the next question for the panel is: can Kubernetes actually perform well?
05:36
Because databases usually have a requirement of low latency and high throughput, can we take an application that may be an intense OLTP application, lift it up, plop it on Kubernetes, and expect it to work? That's a totally loaded question. Completely. I mean, obviously there are ways to employ quotas and resource requests,
05:56
so you can get the same resources, compute, memory, disk, or flash underneath, to be able to deliver that. So it all just comes down to your design. The actual overhead of running it as a container is negligible from what I've tracked in the past.
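A sketch of the quotas-and-requests idea Jon mentions: CPU and memory requests and limits on the SQL Server container. This snippet slots under the container in the StatefulSet sketch above; the sizes are illustrative assumptions, not recommendations.

```yaml
resources:
  requests:
    cpu: "4"        # what the scheduler reserves for this pod
    memory: 16Gi
  limits:
    cpu: "8"        # hard ceilings enforced at runtime
    memory: 16Gi
```

Namespace-level ResourceQuota and LimitRange objects can then cap what a whole set of database pods is allowed to request.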
06:16
So if you have the right setup, if you have a FlashArray underneath your Kubernetes, you're going to get the same low latency that you would get if you were running it in a VM or anything like that, at least from my testing and what I've seen with different workloads. Well, you nailed the point a minute ago: it's just a process, right?
06:36
It's got some stuff wrapped around it to kind of isolate it from the other ones, but it's just a process running on a server. I agree 200% with Jon. This comes down to the infrastructure that you're running your Kubernetes cluster on, and again, to go back to the point I made for the first question,
07:03
a lot depends on the workload entirely. But if you've got an OLTP workload, because when people have conversations about latency it's invariably to do with OLTP rather than OLAP workloads, and part of that workload is read-only, what you can do is offload that to a read-only replica in an
07:31
availability group. And with something fancy like Portworx, I can actually make replicas of that data on a completely different node, right? The same data is copied, mirrored somewhere else. You can read it there, so I'm not even sharing the same node with the primary server.
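What Jon describes maps to the storage layer: with Portworx, the number of synchronous volume replicas is set on the StorageClass. A sketch, with illustrative names and parameter values (check the Portworx documentation for what your version supports):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-db-repl3              # hypothetical name
provisioner: pxd.portworx.com    # Portworx CSI provisioner
parameters:
  repl: "3"                      # keep three copies of the volume on different nodes
  io_profile: "db_remote"
allowVolumeExpansion: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: px-db-repl3
  resources:
    requests:
      storage: 100Gi
```

In the StatefulSet sketch above you would reference the class via storageClassName in the volumeClaimTemplates rather than a standalone PVC.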
07:47
So there are some cool options there. Obviously we're a little spoiled; probably all of our labs are on FlashArrays today, everything we do. But if you're running in the cloud, there are obviously some considerations to work through so
08:02
you can get the same performance. You just have to be smart about it. Sure, sure. I think the biggest thing in talking to customers, and I like to make this analogy, is that cluster design inside of Kubernetes is almost like building a
08:16
VMware cluster: you need enough memory and enough CPU, and then you also need enough memory and CPU in the event of a node failure to make sure your workload is consistent over time. So take that into account when you build your platforms. And then, Chris touched on it, we can only scale up so much with SQL Server before we have to scale out, so you also have to take into account
08:35
how big your nodes are in terms of the compute and memory associated with them to drive that kind of throughput into something fancy underneath, like FlashArray. Cool. Any thoughts or extra comments on this one before we jump to the next one, team? I think the other thing to consider is, well, Chris might have had something to say, I don't mean to interrupt.
08:57
Chris, you go ahead. No, I was going to say Jon's got something to add; I think we've kind of nailed the big-ticket items on this. Cool. All right, this is again another loaded question, in the sense that I think we all have experience with this
09:21
challenge inside of Kubernetes: what about failover? I just talked about building clusters and making sure you have enough capacity in the event that you have to fail over a process or a pod running on one node to another. What are the recovery objectives I can expect to get out of that, and can I get a fast failover between nodes inside of a Kubernetes cluster like I would get out of,
09:41
for example, a failover cluster instance for SQL Server or any other data platform? So I'll jump in. In my opinion, this comes down entirely to the database engine. Again, I don't want to come across as if a scale-out database engine is the answer to everything, but it really plays well in this space. If
10:11
you've got something that scales out, and that could be SQL Server via availability groups, it could be Postgres via replication, or Cassandra, which by virtue of the way it's designed from the ground up is scale-out, such database engines work well in this space.
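For the scale-out engines Chris lists, you generally also want the replicas landing on different worker nodes so a single node failure only takes out one of them. A minimal sketch using pod anti-affinity in the pod template; the app label is an assumption matching the earlier sketch.

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: mssql                          # keep pods with this label apart
      topologyKey: kubernetes.io/hostname     # at most one per node
```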
10:35
The challenge is, if you've got something that you can't scale out, then you're into the realm of pod eviction timeouts, which, off the top of my head, if you don't change them, are something as high as five minutes. Yeah, it's five minutes by default. And that's what I was going to say: barring
11:01
your knowledge of the timeouts of everything, whether it's the cloud EC2 instance or any of those kinds of things, there are timeouts everywhere before anything recognizes that something's gone, and that's part of it. That's why we have stuff built in that kind of helps short-circuit that,
11:23
but you've got to be knowledgeable about it. Obviously, what Chris was saying around scale-out, where the application can detect it, that's the best. Then there are ways to go down from there. If you have a single-node Postgres, there are ways we can fail over and make that very quick, 25 seconds or so for the whole process to restart.
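The roughly five-minute delay mentioned here comes from the default tolerations Kubernetes adds to every pod for not-ready and unreachable nodes (300 seconds each). Tightening them on a database pod is one way to shorten the wait before it gets rescheduled; 30 seconds is just an illustrative value, and this snippet goes in the pod spec.

```yaml
tolerations:
- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 30    # default added by Kubernetes is 300
- key: node.kubernetes.io/unreachable
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 30    # default added by Kubernetes is 300
```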
11:42
That's usually good enough for almost everything, and if it's not, there are ways to make it faster. Right. Yeah, I think one of the things, especially for folks coming from a Windows background who have supported traditional failover
11:58
cluster instances, is that Windows Server failover clusters are pretty awesome, because out of the box you get pretty responsive failover times between nodes in the cluster. That's because the entire stack was built by one place, Microsoft, and they built SQL Server to function on top of it, so you have all the coordination of events that can drive a quick failover. And similarly, like Chris describes with AGs,
12:23
the application can detect when things are wonky and quickly move a process, or not even the process, because the processes will still be running, but move the read-write replica to another node in the cluster. So are there any pain points inside of Kubernetes, though, that could cause you specifically to have slow failover times?
12:41
And I think you guys touched on that a little bit: it's kind of timers all the way down. So if you are working with this tech, I think the biggest thing is to test. Go pull a plug and do those things so you can find the thresholds you need to tune in your environment, because again, in comparison, Windows Server failover clusters are the easy button, because
13:01
they've done all the engineering for that stuff to be a smooth, consistent failover. In this era of, I don't want to call it bleeding-edge technology anymore, but maturing technology, we're going to have to discover what knobs and buttons have to be pushed to get good recovery times and failover. Can you believe we've been doing this for like five
13:20
years now with Kubernetes? It doesn't seem like that time has gone by that fast. The other thing I would say is those pain points are one part, but Kubernetes also doesn't have the concept of failing one cluster over to another, so you can't have cluster A and cluster B. And so, I mean, that's something we've built Metro DR for,
13:42
but Kubernetes itself still doesn't have the concept of, hey, this cluster disappeared, start all my apps over here now. They're just independent entities that kind of run all by themselves. So it's nice to have something to bridge those gaps. Did I miss anything? So one of the important things to do, and I
14:07
don't know how many people actually do this, is that you've got to design some extra capacity into your cluster for fault tolerance. So if you've got, say, three worker nodes that are 100% maxed out on CPU and memory without any headroom for anything to fail over, and for whatever reason a node gets itself into a state where it's unschedulable, where do your
14:35
pods go? That's something you need to think about. And think about this too: some of these Kubernetes distros, not naming any names, have a lot of stuff that comes along with them to monitor, so you've got to plan for that as well. Yeah, definitely capacity planning.
14:54
And drawing that correlation back to the VMware model: if you built a VMware cluster, you'd make sure you can deal with a node failure, and potentially a node failure plus maintenance, so usually N+2 if you're building a critical system. You touched on a good point though, Jon, with regard to Kubernetes not really knowing
15:14
about cluster failover, inter-cluster failover. If you look at the design patterns from the major cloud vendors, that's why they have services like Azure Traffic Manager or Route 53 to help you move traffic quickly between clusters. But although that particular type of failover is really good for stateless
15:32
applications that can have HTTPS probes, how do you get an intelligent failover for stateful applications between clusters? Like Chris said, you're going to have to push that into the application tier when you're designing your platforms, with AGs or something like that. Or have something like Metro DR, you know,
15:50
in Portworx. Do you want to tell us a little bit more about that? Small plug, everyone: go Google or Bing Portworx Metro DR. All right, let's go ahead and jump into the next one; I think this is the last question. So what are the DR options?
16:07
I think we kind of touched on that a little bit a second ago; maybe we got a little ahead of ourselves in the deck here. But what are our DR options for things like clusters, the Metro DR that you've talked about? What planning options do we have to take into account?
16:22
I might take a back seat on this one and let Jon off the leash; he's chomping at the bit to say something, like a rabid dog after a raw steak, and he wants to talk about Portworx Metro DR. So, Jon, the stage is yours. Well, there you go, you pointed it out. So there's obviously Metro DR, which is a synchronous option, right?
16:45
It's built into Portworx, and it allows you to synchronously mirror: if you're under about 10 milliseconds of latency, basically synchronous distance, you can have a stretched cluster, so I can have the data in both places. But beyond that, I think we think a lot about the data because, you know, we work for a storage company.
17:05
The other part of that is actually replicating the objects and keeping them in a state on the target cluster, cluster B, where they're there, just ready for failover. So the data is in sync, synchronously, and all you have to do is hit the button that says scale up
17:25
when the other one is gone, and you're good to go. There are solutions out there that will move the data around or back it up and things like that, but I think the thing we really do well is having that standby, a shadow copy of the stateful app, just ready: the deployment is already deployed, it's just scaled to zero, and then you
17:47
just turn it up and it's ready to go. I think that's actually pretty key.
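A sketch of the standby "shadow copy" pattern Jon describes: the same StatefulSet is applied on the DR cluster ahead of time but pinned to zero replicas, with the volumes kept in sync underneath (for example by Metro DR). Everything here mirrors the earlier sketch and is illustrative only.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mssql
spec:
  replicas: 0          # standby: the definition is deployed, no pod is running
  serviceName: mssql
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      containers:
      - name: mssql
        image: mcr.microsoft.com/mssql/server:2022-latest
        # env, volume mounts, and claims as in the earlier sketch
```

Failing over is then essentially `kubectl scale statefulset mssql --replicas=1` on the target cluster, plus whatever is needed to repoint clients.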
18:10
Obviously, there's also an asynchronous option if you're at a long distance and you're okay with 15-minute intervals for that disaster recovery. And what else can I mention, Chris? PX-Backup. I think there are a couple of things on that, because it'll actually give you that point in time. Replication is good, but if you're replicating something corrupt, your replicas are bad too. So it's nice to have PX-Backup, which writes to an object store, Azure Blob or S3 or a FlashBlade, and allows you to go, hey, I need the one from last week, and restore it.
18:33
Hopefully you never have to do that, but it happens, right? And that actually comes in two flavors now: there's an as-a-service version, so you don't even need to install the server for it anymore. You just sign up, point it at your Kubernetes clusters, whether they're running in Azure, in Amazon, or on-prem, and do it all from there. It's pretty sweet stuff.
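PX-Backup itself is driven from its own console and API rather than raw YAML, but as a rough illustration of the point-in-time idea using native Kubernetes primitives, a CSI volume snapshot request looks like this; the snapshot class name and PVC name are assumptions tied to the earlier sketches.

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mssql-data-snap
spec:
  volumeSnapshotClassName: px-csi-snapclass         # hypothetical class name
  source:
    persistentVolumeClaimName: mssql-data-mssql-0   # PVC created by the StatefulSet
```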
18:56
Those are three of the options. And then I'd like to ask Chris, because I think there are some application-level options too that are pertinent, and people should know those. Yeah, well, again, not to labor the point too much, but it's more than
19:19
just the data. One of the things I'll touch on, to follow on from what Jon was saying, is that when it comes to DR, yes, absolutely back your data up, you absolutely need to do that, but in the world of Kubernetes there's a whole bunch of other things you need to think about as well. Things such as secrets.
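For example, the SA password Secret that the earlier StatefulSet sketch references is state that lives entirely outside the database files, and it has to exist on the DR cluster too. A sketch; the name, key, and value are placeholders.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mssql-secrets
type: Opaque
stringData:
  sa-password: "ChangeMe-NotForProduction1!"   # placeholder only
```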
19:42
So whatever you do, don't just get hung up on the fact that the data is there; ask whether there's anything else you need to worry about, because there is. Saying "I have a snapshot of it" covers just a small portion of everything you need. I want to lean real hard on this one: don't forget access to your container images, and take that into account, because if you
20:08
can't pull the container image, you can't get to the data. Right, that's also important. Yes. Whatever your image repository is, you probably should make sure that's protected, or have two of them.
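If the SQL Server image lives in a private registry, the DR cluster also needs the pull credentials, or the pod can't start even with the data in place. A sketch of the relevant pod-spec fragment; the secret name and registry are assumptions, and the secret itself would be a kubernetes.io/dockerconfigjson Secret created in the same namespace.

```yaml
spec:
  imagePullSecrets:
  - name: registry-creds                                      # hypothetical pull-secret name
  containers:
  - name: mssql
    image: registry.example.com/mssql/server:2022-latest      # hypothetical private mirror
```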
20:27
On the subject of SQL Server, there are things you can do at the application level. Thinking out loud, and I don't know, it's a bit old school, I'm just wondering if you could log ship from one database in one cluster to another, and off the top of my head I don't see why you couldn't do that. No, I'm pretty sure that's the supported DR option for SQL Server in Kubernetes. And the maturity of AGs outside of Azure Arc-
20:50
enabled data services is such that they lack an operator to coordinate failover, so it's on you, literally the operator, to coordinate failover today between clusters, unless you want to use a third-party tool to integrate, which is available; Microsoft has a partnership with another organization that can coordinate failover of AGs. And if we have time, when you said
21:14
operators, you can also look at Portworx Data Services, because some of these operators that are out there are open source and they're not made for failover. If you try to make a copy, it'll just override it, right? That's what the operator does: it makes sure
21:33
everything is set to a certain state. So a thing to look into is Portworx Data Services, which is an operator that's made for this kind of stuff: one API, one UI for this. Obviously it's only just out, you know,
21:53
depending on when you watch this, so there will be more and more features like that coming, because it's integrated with Portworx and it's built to do that. Cool. All right, we're going to jump to the last topic here, which really is just closing thoughts. Is there anything else swirling around your
22:10
brain that you want to communicate to the community about running stateful applications or SQL Server on Kubernetes, team? I'll go first. My closing comment is that there may be some people out there who are kind of shy about running stateful applications, but if you look at what actual enterprises are doing,
22:34
they're doing it, right? So just think about those benefits that we're talking about. There are people that have been doing it for a few years now and they're all in on it. So I guess my main call to action is:
22:53
don't be shy about this just because someone from a cloud provider said you don't do it, it's hard. We're here to make it easy for you. Yeah, Jon kind of stole my thunder there, so I'll just double down on that point.
23:14
There's a certain amount of information floating around to suggest that stateful apps on Kubernetes aren't a thing. They absolutely are, not least because people are doing this in anger in the real world. Yeah, I think the most important thing, kind of like when I saw this happening for the first time, is this idea of decoupling configuration and state, because I
23:42
can have the config, like we talked about at the beginning, in code, and I have the data, and that data is going to live somewhere in my data center. Really, I just need something to get access to that data, and that's the job of Kubernetes: to put a container in the right spot to get access to that data. Once you start decoupling those things and you see it in action and in operations, it's kind of a revolutionary thing,
24:05
for lack of a better way to put it, in that I have my application state described in code and all I have to do is make sure I have a cluster available to get access to my data. Once you do that, your operational burden of maintaining stateful applications shifts dramatically; you're able to do things so much more quickly. Yeah, I agree. Awesome.
24:28
Well, I want to thank the panel so much. Jon, Chris, I appreciate y'all jamming out with me to talk about stateful apps and SQL Server running on Kubernetes. Thanks for having me. Yeah, same, thanks for having me also, Anthony, it's been a blast. Cool, cool.
24:45
And I want to thank the viewers and everyone at Accelerate this year. Happy conferencing!

Are you thinking about running SQL Server in Kubernetes and don’t know where to start? Are you wondering about what you need to know? If so, then this is the session for you!

When deploying SQL Server in Kubernetes, key considerations include data persistency, Pod configuration, resource management, and high availability/disaster recovery scenarios. This session will look closely at each of these elements to help you successfully run SQL Server in Kubernetes.
