speaker 1: This is me. I'm Carlton Gibson. I'm @carltongibson on Twitter and GitHub; you can find me there, and I get a notification if you mention me, even if I don't know you. I'm one of the Django Fellows. Together with my colleague Mariusz, who's sat back there, we do the day-to-day maintenance on the framework. We're contracted by the Django Software Foundation, which is the governing body of Django, to do things like ticket triage, pull request review, handling security issues, doing releases, et cetera. We don't do any of that without an absolutely awesome band of contributors, which will include lots of you, but Django is a project that's so big that there's too much of it to be done just on volunteer work, so it's necessary that there are paid people doing it too. I always describe us as the janitors: we do the stuff that, on a day-to-day basis, keeps the framework going. Before I move on, I'm going to remind you that you can sponsor the DSF, either individually or, ideally, your company takes up a corporate sponsorship of the DSF. You make a small investment to help secure the continued development of the web framework that you built your business on. It's like having public liability insurance, but for your tech dependencies. When I'm not doing that, I help maintain various packages in the Django ecosystem. The relevant ones for today's talk are the Channels trio, which Andrew originally started but which I took over a little while ago: channels, Daphne, and channels_redis. We've got some new releases just pending; they're waiting for me to write the release announcement, so there's a tail that needs to wag a dog to get that done, and then I'll get them out. I'd hoped to get that out before the talk. Obviously that didn't happen, but it will in the next few weeks. And when I'm not doing that sort of thing, I've got a podcast, Django Chat.
We get guests from the community and we chat about Django. If you haven't listened to that, please do. Anyway, today's talk is about async Django. It's one of those buzzwords: an exciting topic, but I think there's lots of confusion over what it's really about. And there are questions: what's all this async? What's it good for? How do I use it? And so on. So I've called this the practical guide you've been awaiting. Well, I thought the pun in the title was irresistible. Now, async is a totally massive topic, and there's much more than I can talk about today. But it's also a massive topic that, as Katie mentioned in her keynote, we don't have to think about most of the time as Djangonauts. So what I want to do today is focus on just a few examples of using async in Django that bring in the bits of asyncio that are relevant to us, and we can leave the deeper stuff to the group of very clever people who spend a lot of their time thinking about it and pushing it forward. It's nice that we know it's there. Katie said: be aware of the layer below. Well, we need to have an overview, but we don't need to get right in there as Djangonauts. We can do a lot of async in our Django applications without getting into the lowest-level stuff, where it gets complicated, frankly. So my hope is that you leave with some ideas about using async in your Django application without making it too complicated. So let's go. We're going to start with just a simple asyncio example, to bring in a few concepts and set the scene. First, we can import the asyncio module. asyncio is the Python standard library's implementation of an async runtime: it gives you an event loop and lets you schedule concurrent tasks that will run on it. Right, there are other implementations of async runtimes available for Python, notably the Trio project.
But as Django people doing async Django, we don't need to think about those. asyncio is the main async implementation in Python: it's the one that's in the standard library, and it's what Django's async support is based upon. Okay, at least for now. So with asyncio in place, we can define a couple of helper functions. The first one here just prints a dot every second. The second one takes a task name and a time to hang around for, and then it just prints when it starts, waits for a bit, and prints when it finishes. They're not very exciting, but they let us demonstrate some things. There are a couple of things I want to highlight in the syntax. The first is the async and await keywords, which are used to mark async functions and to signal to asyncio that we're ready to pause the function, normally while we wait for IO. Here we're just sleeping, and we give control back to the event loop at that point so that it can do something else while whatever our awaitable is doing is in progress. An async def function is called a coroutine function, and when you call it, it doesn't actually run the code: it returns a helpfully named coroutine object. Coroutine function, coroutine object; it gets confusing, right? But the coroutine object is the thing that you can await. You can say: go and run this, event loop, please. And when you await that coroutine object, your code is paused, the event loop handles running the coroutine, and then sometime later you get the actual return value of your function and your code continues from where it left off. Let's look at that. So here's our main function, and this is going to be the end of this first introductory example. First, we use the asyncio.create_task function to schedule our print-every-second coroutine to run. Okay. Then, in a loop, we create five tasks, name them a to e, and give each a time to wait. And then, a really important step at the bottom, we gather those tasks, to wait for them to complete.
That's a bit like calling join on a thread, if you've ever used that API. Then at the bottom there, we just use asyncio.run to create an event loop, start it, and run it until our main function completes. So that's the whole script, right? When we run it, this is the output: our tasks a to e start, then our printer's dot gets a go, then a couple of them finish, then the printer's dot gets another go. It's interesting that the dot there appears after the beat instead of before the beat, and that's because the event loop has a kind of queue of tasks and it just goes through them in the order that they come up and runs them. Which order they end up in that list determines which order they get executed, and so on. Let me just show you one more example. If I comment out that gather call, the one where I said it was important (look, let's just go back: it says important, wait for all the tasks to complete), this is the output. Okay? What happens is all our tasks start, but then the main function exits, the event loop shuts down, and all our other tasks never get the opportunity to run. So our poor little print-every-second printer doesn't get to do anything. The key point here is that we must not exit our main function, because we need a running event loop to execute our concurrent tasks. So, thinking about Django, the first question that comes up whenever the async word pops up is: can I use this for background tasks? A user signs up, I want to send them an email to confirm, something like that, but I don't want the browser to sit there doing nothing for five to ten seconds before I send the response. So could we use asyncio.create_task to send the email in the background? Well, here comes my old man answer: well, kind of, and it depends. So let's go through those in turn.
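Before moving on, here's the whole introductory script as a runnable sketch, reconstructed from the description above. The helper names and the exact wait times are my guesses, not the slides' code.

```python
import asyncio

async def print_every_second():
    """Print a dot each second, forever, yielding to the loop in between."""
    while True:
        print(".", end="", flush=True)
        await asyncio.sleep(1)

async def wait_around(name: str, seconds: float):
    """Print when we start, pause without blocking the loop, print when done."""
    print(f"{name}: started")
    await asyncio.sleep(seconds)
    print(f"{name}: finished")

async def main():
    # Schedule the printer; it runs whenever the loop is otherwise idle.
    asyncio.create_task(print_every_second())
    # Create five named tasks, a to e, each with a time to wait.
    tasks = [
        asyncio.create_task(wait_around(name, i + 1))
        for i, name in enumerate("abcde")
    ]
    # Important: wait for all the tasks to complete. Comment this out and
    # main() returns immediately, the loop shuts down, and nothing runs.
    await asyncio.gather(*tasks)

asyncio.run(main())
```

Note that `asyncio.run` cancels the still-running printer task when `main()` returns, which is exactly the shutdown behaviour the talk describes.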
So, can I use it for background tasks? Well, kind of. The question is: is your background task going to fail? What happens if it goes wrong? Because there's no error handling built for you into asyncio.create_task. There are no statuses, there are no retries, there's none of that stuff. So if you just fire off a coroutine with create_task and it goes wrong, you've got an awful lot of error handling that you need to write for that to be robust, where you could just use something like django-db-queue, or Django Q, or Celery, or whatever's your cup of tea. Now, you're adults, you can do what you like, and maybe in your code you don't need much error handling, and maybe it doesn't really matter if this task just blows up. So, kind of: you could use it, but not really. Maybe someone's going to build a task queue, with retries and statuses and scheduling and all these things, on top of asyncio, but it would have to handle all that stuff, and that's not code you want to write yourself. Okay, so: kind of. What about the 'it depends' part of 'can I use it for background tasks'? Well, it depends on how you're running Django, and there's one more thing we need to talk about, which is WSGI and ASGI. WSGI, as maybe you all know, is the old standard for running synchronous web applications in Python. Flask is a WSGI-compliant framework, Django is a WSGI-compliant framework, et cetera. ASGI, created by Andrew over there, is the asynchronous, sort of modern, well, not modern, the asynchronous version of that, for asynchronous frameworks. So FastAPI, Starlette, and Django are all ASGI-compatible frameworks. Simplifying: a WSGI app gets the whole request at once, and it returns the whole response at once. It's kind of one-shot. An ASGI app is event-based: it has a pipe for incoming events and one for outgoing events, so communication can be long-lived, it can be bit by bit, and it can be two-way.
The point is that, in general, under WSGI we don't have a running event loop. Django allows you to write async def views, which will be run asynchronously on an event loop. But what it does is this: when the WSGI handler finds an async def view, it spins up an event loop just to run that view. It runs your view function, and when the view returns its response, it shuts the event loop down. So it's like the example where we didn't call gather: we could start a background task, sure, but as soon as the event loop exits, that task gets shut down. So it kind of depends. You can run background tasks using create_task, but only if you've got a running event loop. Okay, so that's the first example, and there are a few concepts there that we wanted to bring in: the event loop, the async/await syntax, create_task, and gather. Those are the kind of basics. Now, as Djangonauts, I want to give us one example that I think is very handy. This is one of my favourite patterns, and it's called aggregating views. What I actually call it is 'what we did before GraphQL was ever a thing'. So let me set that up. You've got an app, and it's an app for hotels. You've got a couple of models: you've got hotels, and you've got rooms with prices, right? Then you have some basic DRF serializers for those. You've got a hotel serializer and a room serializer, fine. And you've got a couple of DRF views: a hotel detail view and a room list view. They all work great and they've got URLs. That's fine. Except your mobile team says to you: we can't make two requests. We've got a view that needs a hotel and the list of matching rooms, and we can't make two requests for that. Quite rightly: mobile connections are slow, data is expensive, connections aren't reliable. They don't want the page to take all day to load. What they want is a single view that fetches all the required data in one go.
Now, this is exactly the use case for which GraphQL was invented, but you don't necessarily need the whole GraphQL setup, because you can have an aggregating view. This is it. Can we see that? I'm going to go through it bit by bit. At the top, first of all, we just import httpx, which is an async-capable HTTP client library. It's like requests, but it's async-capable. Then we import some other bits that we need, for a JsonResponse. Then we define our async def request handler, and from Django 4.1 we can define async def handlers on class-based views, right? So we get a nice namespace and can decompose our logic into separate methods if we want to. If you did this with a function-based view, you could de-nest it a level: you could get rid of the class, get rid of the self argument, and just remove the indentation. But then you wouldn't have a namespace of your own; you'd just have your module namespace to work in. So it's up to you, whichever is more useful for you. Then we get the URLs for the views that we want to aggregate: the hotel detail view, and the filtered room list for that hotel. Okay. And then we can use httpx's AsyncClient to fetch the URLs concurrently. The client's get method returns a coroutine when you call it, and we can use that with asyncio.gather again to wait for all the tasks. So essentially we wait as long as the slowest of these requests takes, wait for them all to complete, and then we move on. And then finally, we compose the response structure we need for our front end and we return the response, okay? And that's what the front-end team was looking for. Now, this is a pattern I've used for many years. Before asyncio was around, before async Python, we used to use Node.js for this. We could have done it with Python threads, probably, but Node.js was the thing that you did.
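The shape of that aggregating view, reduced to a stdlib-only sketch: the `fetch_json` stub below stands in for `await client.get(url)` with `httpx.AsyncClient`, and all the names and URL paths are mine, not the slide's.

```python
import asyncio

async def fetch_json(url: str) -> dict:
    # Stand-in for an httpx.AsyncClient.get call: pretend each
    # upstream view takes a little while to respond.
    await asyncio.sleep(0.05)
    return {"url": url, "ok": True}

async def aggregate(hotel_id: int) -> dict:
    # Issue both "requests" concurrently: total time is roughly the
    # slowest one, not the sum of both.
    hotel, rooms = await asyncio.gather(
        fetch_json(f"/api/hotels/{hotel_id}/"),
        fetch_json(f"/api/rooms/?hotel={hotel_id}"),
    )
    # Compose the single payload the mobile team asked for.
    return {"hotel": hotel, "rooms": rooms}

result = asyncio.run(aggregate(1))
print(result["hotel"]["url"])  # → /api/hotels/1/
```

In the real view, the composed dict would go straight into a `JsonResponse`.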
So we did it in Node.js: you'd have a little Node.js service next to your Django app that would provide the aggregating endpoints. And now we can just do it in Django, which is better, because when you've got two tech stacks, you've got two sets of documentation, two things to learn. Focus your energies. The best bit about this pattern, I think, is that you can do it with your existing WSGI application right now, because Django is capable, under WSGI, under synchronous running, with uWSGI or Gunicorn or whatever it is you're using, of serving these async def views. And you don't need to change a thing. It will spin up the event loop just for these aggregating views, and the rest of your application remains the same. So I think this is a really good pattern. And then I want to look at another example, which is a chat app. We're going to look at it four ways, and the goal here is to see how the need for async comes in, and some different considerations as you go through. So here's the setup. We're going to build a simple app that lets you post messages to a single list, like the old guestbooks you might have seen in the late nineties, where you've got a wall where people post their messages. The Channels tutorial has a version which is multi-room, where you can have different rooms, and it's worth checking that out in contrast. But what I want to focus on here is how we start with a very simple synchronous thing and then add various options on async to change it in various ways. So, let's have a model. We've got a message, with a text field, who it's posted by (we're not going to bother with users and all the rest of that), and when it was created, so we can order by created. Let's have a form and a filter. We have a django-filter FilterSet there, so we can filter by 'since'.
There's a little trick here with django-filter: nobody likes 'created_at' as a URL parameter, and django-filter lets you call the filter, the URL parameter, 'since', and then just say which field name it points to. People like that. And then we just have a message form, which is a simple Django ModelForm that takes the fields for posted_by and the text people want to post. Let's have a view. This is just a Django class-based ListView. Let's go through it bit by bit. At the top, you know: template name, context object name, ordering, model, et cetera. Okay. For get_queryset, we call the super method, we run the queryset through the django-filter FilterSet in case the 'since' parameter was passed, and then I'm just limiting it to the first 30 records, because if there are 400 messages, I don't want to fetch the whole lot. Okay. For the context data, the only interesting thing is that we give the message form some initial data, if posted_by is in the session, because you don't want to have to type in your name every time. You want to type it in once and forget about it. And then for get_template_names, we're just going to use ht... hang on, not httpx: htmx. That's what we're going to use. Sorry, I tripped over that too. We can use htmx if we want there, and why not? It's great. That's it. That's quite a basic list view. And then we're going to add two bits. We're going to add a post-message view to create the messages, which is, again, a standard CreateView based on Django's generic class-based views. There are two bits I want to call out there. One is the success URL: we just redirect back, not to an object detail, but to our list view. We've got this message board; from the post view we don't go to a detail page for individual messages, we just see the list each time.
And then the second bit is that if the user did put a posted_by in, then we save that in the session. Okay, so we're going to look at four ways you could deal with the next bit. I've got my message list open in the browser, and I want to know when it's updated: if someone else posts a message, I want to see it. So how do I do that? The first way I could do it is polling. I could make a request every five seconds, every ten seconds, and check whether there are new messages, and if there are, update the HTML in the browser. And htmx, yes, thank you, does this for us. I just put a checkbox here, input type checkbox, and I hit the list view, and because it's an htmx request it sends just the fragment that I want to swap in. I give it a target: look, go and replace the message-list DOM element, by id. And a trigger: when I check the checkbox, start, and then every five seconds send another request. Now, that might well be all you ever need, and that might be perfectly good. And if it is, stop there, right? But what about scaling? What if I've got lots of quiet clients that are repeatedly making requests? Maybe your capacity is big enough, but likely, at some point, it's going to start being a problem. What about responsiveness? You might say five seconds is quite a long time, ten seconds might be a long time. I want my app to feel responsive: someone posts, I want to reply straight away. Now, I could poll more frequently. I could say: well, let's poll every second. But then that scaling problem becomes a lot more serious. There's a sort of ratio between how long your requests take, how often you're making them, and how many clients you've got, and if that reaches saturation, you're in big trouble. Okay. So at that point, you're essentially DoSing your own application.
So if it's an internal-facing thing, where there's only one user and it's only one browser session, yeah, fine: polling. But if it's hundreds of users concurrently, that's not going to scale. And that's when we have the second strategy I want to introduce, which is long polling. Here we get beyond what we can do with Django itself, because we're kind of changing the scheme of what we want to do. We want event-based, or real-time, updates, and that's not what Django was originally set up to do. We go from the traditional request-response model, where the view gets the request and then sends the response, to a second model, which is event-based: the view gets the request, it somehow waits for an update, and then it sends the response. And we can't do that with traditional Django views. There's no real option for waiting. I mean, you could synchronously block on a queue, which is... no. So this waiting bit, this need to be event-based, is why we need async. We need to respond to events, and we need to be able to communicate between requests, between open browser windows. Basically, for this, we need the Channels package, which, again, Andrew created. This has a few parts. The first one I want to talk about today is the channel layer, and this is a way of sending messages between different connections. So you post a message, I want to hear about that, and the channel layer is how we do it. Essentially, you have a Redis instance, or something else, and a message goes via that to anyone who's subscribed: you send out to a group of all the open windows, saying, hey, a new chat message was posted. And then the second component that we need is consumers. Consumers are essentially user-friendly abstractions on top of lower-level ASGI. Okay? We as Django developers don't want to spend our time worrying about the details of ASGI. It's a bit gnarly, and there are gotchas that we don't need to know about.
So, on top of those, consumers make writing your ASGI apps a bit more human, a bit more Django-like. Now, all of this is in the Channels docs, and if you follow the Channels tutorial, you can get a feel for it all. I want to go through an example and show you how it comes up. The first thing we need, if we're going to update our message list (I've got my browser open, I want it to update), is a way of notifying that a new message was posted. So we create a little helper function to do that, and for this, we use the channel layer. Basically, we need to send a message to the channel layer saying: hey, there was a new message. So first of all, we instantiate the channel layer, or rather we use this get_channel_layer function, which gives us an instance of the channel layer. Okay? Now, this is a synchronous function we're writing here; you'll see why in just a second. So we use the wrapper from the asgiref package (I should have put an import statement in there): the async_to_sync helper. That will turn an asynchronous function into one that we can call synchronously. Okay. Then, the channel layer has the concept of groups. I want every browser window that's open to be updated, so we have a group for that, and it's just called 'chat' in this example. And then we send it this dictionary here, which is the message itself. Now, the key field there is the type. Every event that we send to the channel layer has to have this type, and (I'll come back to why) the bottom line is that consumers know how to map the type of a message to the handler that's going to deal with it. So consumers will have a handler, which is chat_message, and they know, when they get an incoming message, to look for a handler that matches the message type and dispatch to it. That's why the type is important. The rest of the message here doesn't matter.
speaker 1: You can have key-value pairs, but the values have to be essentially basic: they need to be serializable. They can't be model instances. You can send model IDs, but not model instances, because you're going to pass them into a different thread, and you can't pass model instances across threads; that will all go wrong. So, with the notify function in place, we need to make sure it's called whenever we create a new message, and we do that in our post view, in the form_valid method: we call notify when the transaction ends. We use the transaction.on_commit callback for that, because we might have ATOMIC_REQUESTS on, so we might be inside a transaction. If we called it just on form save, well, the object wouldn't be in the database yet. We'd send the message, all our consumers, all our browser windows, would go 'ooh, refresh, can I have some new data?', and there wouldn't be any new data. What's gone wrong is that we didn't wait for the transaction to finish, right? That's a little Django gotcha. So we just use transaction.on_commit, which will fire the notification when the transaction is committed, and that's it. We don't need anything else. That's the notification side; that's the 'hey, an update's ready to go'. Okay. So then we need to listen for it, and here's where we need the consumers. This is going to be a long-poll consumer, and it's an asynchronous HTTP consumer. Long polling is just like polling, except that what you do is wait around: you get a request and you hold on. You don't reply. You send some headers to say 'I'm going to start replying', but you don't reply. After a timeout, the client will normally disconnect (or you might time out yourself), but if an update comes, then you send the response at that point. Okay.
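To make the group-and-type mechanics concrete, here is a toy, stdlib-only stand-in for the channel layer. The real thing is Channels' (typically Redis-backed) layer with `group_add`, `group_discard`, and `group_send`; every name below is my own sketch, not the Channels API.

```python
import asyncio

class ToyChannelLayer:
    """In-memory stand-in for Channels' channel layer: named groups,
    where each subscriber gets its own queue of event dicts."""

    def __init__(self):
        self.groups = {}

    def group_add(self, group, queue):
        self.groups.setdefault(group, []).append(queue)

    def group_discard(self, group, queue):
        self.groups.get(group, []).remove(queue)

    async def group_send(self, group, message):
        # Fan the message out to every subscriber in the group.
        for queue in self.groups.get(group, []):
            await queue.put(message)

async def consume_one(layer, group):
    """Subscribe, wait for one event, dispatch on its 'type' like a consumer."""
    queue = asyncio.Queue()
    layer.group_add(group, queue)
    try:
        event = await queue.get()
        # Channels maps a "chat.message" type to a chat_message() handler;
        # here we just branch on the type by hand.
        if event["type"] == "chat.message":
            return f"new message id={event['id']}"
        return "ignored"
    finally:
        layer.group_discard(group, queue)

async def demo():
    layer = ToyChannelLayer()
    waiter = asyncio.create_task(consume_one(layer, "chat"))
    await asyncio.sleep(0)  # let the consumer subscribe first
    # Send model ids, not model instances: only plain serialisable values.
    await layer.group_send("chat", {"type": "chat.message", "id": 42})
    return await waiter

print(asyncio.run(demo()))  # → new message id=42
```

The point of the toy is the shape: a notify helper does a `group_send` with a typed dict, and every subscribed consumer is woken with that dict and dispatches on its type.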
So the first thing we do is set a 'since' date, which we'll use later to fetch any messages created after the request started. We could get more clever, but we don't need to for the example. Then we add ourselves to the channel layer, to the same 'chat' group that we're sending the notifications to. What that says is: hey, when a notification arrives at that group on the channel layer, please forward it to me, so that I can be notified. And then when we disconnect, we need to remove ourselves from that group. Okay? Then we just need a helper method to fetch new messages and render them. Now, this is a sync method. This is the important bit, okay? This is a sync method. We could use the ORM's new asynchronous query interface, but we're going to take the model instances that we get back and render them in a template. Now, template rendering is a CPU-intensive operation. It takes time. And if we were to do that on the event loop, in our async def function, we would block the event loop and it wouldn't be able to handle any other requests. So template rendering is something you want to push off to a thread, so the event loop can carry on serving requests while the template is busy rendering over here. But we can't pass ORM objects into another thread in order to render them in the template. So what we'd have to do, if we used the asynchronous query interface, is serialize those model objects into dictionaries and then pass those dictionaries into our threaded function, which would render the template. But that's a lot of work. It's much easier to just put the whole lot in one synchronous function, fetch the objects and render the template together, and then call that once from the asynchronous code. So let me show you the calling side. Here's our chat_message handler. This is where we listen for the events.
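As an aside, that 'do the blocking work in a thread so the loop stays free' idea can be shown with stdlib asyncio alone. `render_messages` here is a made-up stand-in for the synchronous ORM-query-plus-template-render helper, and `asyncio.to_thread` plays the role that `sync_to_async` plays in Channels code.

```python
import asyncio
import time

def render_messages(since: float) -> str:
    # Stand-in for the sync helper: ORM query + template render.
    # time.sleep represents the blocking work.
    time.sleep(0.1)
    return f"<ul><li>messages since {since}</li></ul>"

async def heartbeat(ticks: list):
    # If the event loop were blocked, these ticks could not fire.
    for _ in range(5):
        ticks.append(time.perf_counter())
        await asyncio.sleep(0.02)

async def handler() -> str:
    ticks: list = []
    beat = asyncio.create_task(heartbeat(ticks))
    # Run the sync function in a worker thread and await the result,
    # releasing the event loop in the meantime.
    html = await asyncio.to_thread(render_messages, 0.0)
    await beat
    assert len(ticks) == 5  # the event loop kept running meanwhile
    return html

print(asyncio.run(handler()))
```

Had `handler` called `render_messages` directly, the heartbeat would have been starved until the blocking call returned.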
So if you recall, we sent a message of type 'chat.message', and here the method name is chat_message, with an underscore. The point is that Channels knows to take the message type and munge it: you can't have 'chat.message' as a Python identifier, because that's not valid, so it knows how to map the message type to a handler name and call the right handler. So we get passed that event, with just the message in it, which we don't actually use. Then we use sync_to_async. Did we use that before? No, we used the other one: async_to_sync. Now we're using sync_to_async, which enables us to take a synchronous function and await it: execute it in the thread pool, release control back to the event loop, and resume when the result is available. So we get the HTML from our message-list helper, and then we send the body down to the client. We use the send_body method, which enables us to output the response, and the more_body=False bit there says: that's it, stop the response, close it. This is long polling. A request comes in, we wait for an update to arrive, and when the update arrives, we fetch the data, render the template, and send the response. Job done. The upside here is that we get the new message just as soon as it's posted, so it's responsive. Rather than having the polling interval, we get the response instantly. So that leads us on to the third option, which is server-sent events, which is the same idea with a slightly different spin on it. With long polling, when we get the chat-message event, we finalize the response, and that's the end of it. That's close-up, and it's like the polling example: a traditional request-response.
We send the request, we get the response, and there's just a little bit of waiting in the middle. But most clients will then go and immediately reconnect again. So they're going to ask you: are there any more updates now? What are you doing now? What are you doing now? Right? There's overhead to that. Even if our web server handled connections at zero cost, which it doesn't, we've still got a cost, because we're still adding ourselves to the channel layer and removing ourselves on every disconnect, and all the rest. So server-sent events are for that: well, can't we just keep using the same connection? And yes, we can. So here we have the server-sent events version of the consumer, and it's almost exactly the same. In handle, we connect to the channel layer group, exactly the same. In disconnect, we do exactly the same. And we have exactly the same message-list HTML helper that goes off to the ORM, fetches the new objects, and renders the template, which we call via sync_to_async. The only difference is in the chat_message handler, the one that gets called in response to the update when we send the notification from the form-creation view. We have the same sync_to_async call to do the query and render the template. And then we have to format the event to send, because server-sent events can't contain raw newlines: events are delimited by newlines, so you'd get a kind of event injection if you didn't deal with them. Each line of an event body is a 'data:' line: 'data: line of stuff', 'data: line of stuff'. So we take all the lines and prefix each with 'data: ', add a couple of newlines at the end, and then we send it. It's almost exactly the same as before, except at the end we pass more_body=True, and that says: hey, I'm going to follow up with more here, please don't close the connection. There's going to be another one next time.
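The SSE framing described here is simple string work. A sketch, where the helper name is mine:

```python
def format_sse_event(html: str) -> bytes:
    """Frame an HTML fragment as a server-sent event.

    Each line of the payload gets a 'data: ' prefix, so embedded
    newlines can't terminate the event early (the 'event injection'
    problem), and a blank line marks the end of the event.
    """
    lines = html.splitlines() or [""]
    body = "\n".join(f"data: {line}" for line in lines)
    return (body + "\n\n").encode("utf-8")

print(format_sse_event("<li>hi</li>\n<li>there</li>"))
# → b'data: <li>hi</li>\ndata: <li>there</li>\n\n'
```

In the consumer, the framed bytes are what get passed to send_body with more_body=True.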
And so, instead of disconnecting, reconnecting, and all of that, the client just keeps waiting for more and more. htmx, or whatever you're using, can take the HTML, put it into the DOM, and then just wait for the next bit. And htmx has an extension which handles all this for you. It's the same attribute-based targeting, that selector, all done, okay? If the connection fails, htmx will take care of reconnecting. But in general, we just keep the same connection going for the lifetime of the browser session, which is nice. So that was polling, long polling, server-sent events. And then the fourth option is WebSockets, which might be the one you jump to straight away, because it's very popular and there's library support and things like that. It's what you're going to want to use if you're building SPAs, or you want to keep sending data back from the client, because the difference between the examples we've talked about thus far and WebSockets is that WebSockets allow two-directional data. For this example, we're not going to rewrite our post view to send the new message via the socket. We're just going to keep using the form we've already written. We're Djangonauts: we're not going to rewrite that as something more complex, we're just going to use it. But in principle, you could, and there are JavaScript libraries that do that. htmx, again, has a WebSockets extension, which you can just wheel out and use, bam. Okay, let's look at it. So here, it's exactly the same again. And here's the cue: as Django developers, this is the genius of Channels and the Channels consumers, that they have this nice Django-friendly pattern, and it applies to all these different methods. All we've done is change the import and change the superclass.
And then I've got this chat_message implementation, which is, oh look, I call the self.message_list_html helper wrapped in sync_to_async so that it can go and query the new models, render the HTML, and give me back the HTML. And then on the WebSocket consumer I can just use self.send and say, hey, we're sending text_data here, because you can also send binary data and other things. And that's the four different ways. Again, htmx has got that extension. Which should I use? Polling is simplest, right? If you've not got a lot of clients, not got a lot of requests relative to your response time, and a small delay doesn't really matter. You know, ten seconds is fine: I'm not in a conversation here, I just want to know if someone's scored, and if I find out ten seconds later, that's fine. My view is that the best software is the software nobody wrote. You've already got a Django app that just works, and you can add polling with those few htmx attributes, and it's job done. Stop. But if you've got a lot of clients, lots of connections, then that's when you want one of the other three. For Django 4.2, if it's feasible, we'd like a story for streaming async responses in Django itself. At the moment, you can write the aggregating view, the example I gave, the async def, and that returns a response. But what you can't do is stream a response, like the long polling and server-sent events examples required. So for those you need channels. But maybe we can get that in for 4.2, and then you'd just write it with an async def. Otherwise, you can use channels. If you want real-time responses, then you've got to use one of the async options: long polling, server-sent events, or WebSockets. If you want two-way, well, then you've got to use WebSockets. If you just want to get something from the server, well, one of the other options is available.
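That type-to-handler dispatch is the heart of the "nice Django-friendly pattern" mentioned above. Here is a toy illustration of my own, not channels' real implementation: an event dict's "type" key, with dots replaced by underscores, names the method that receives it.

```python
import asyncio

class ToyConsumer:
    """Toy version of the channels dispatch convention (illustration
    only; real channels consumers do considerably more)."""

    async def dispatch(self, event: dict) -> None:
        # "chat.message" -> self.chat_message(event)
        handler = getattr(self, event["type"].replace(".", "_"))
        await handler(event)

class ChatConsumer(ToyConsumer):
    def __init__(self) -> None:
        self.sent: list[str] = []

    async def chat_message(self, event: dict) -> None:
        # A real WebsocketConsumer would call self.send(text_data=...) here.
        self.sent.append(event["html"])

consumer = ChatConsumer()
asyncio.run(consumer.dispatch({"type": "chat.message", "html": "<li>hi</li>"}))
```

This is why sending {"type": "chat.message", ...} over the channel layer from the form view ends up calling the chat_message method on every consumer in the group.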
But if you want a conversation on the same connection, rather than, you know, the side path, you need WebSockets. You might find that your chosen libraries support WebSockets already. It's quite popular, it's very established. And it might just be: you know what, we're using this library, it's just a change of import with channels, let's use WebSockets rather than a streaming HTTP response. So the answer is: hey, it depends, right? I just want to finish off with a few thoughts about getting it online. Okay, so at the beginning we said WSGI or ASGI. Well, hang on: what about WSGI and ASGI, at least for now? Async is something that we're still working on, and I don't mean we in Django, I mean everybody is still working on it. The patterns aren't 100% clear, there are bugs and gotchas that you sort of have to learn the hard way, and it's still a bit like the Wild West, okay? Compare that to WSGI, where we've got 15 years of rock-solid experience to build on. Every problem that could come up kind of already has, and it's already been fixed. The scaling patterns are known. And frankly, why wouldn't you use it? You can deploy totally on ASGI, it's not a problem. Six Feet Up do, right? They built LoudSwarm, the product which we used for the Django conferences over the last few years. It's brilliant, it works great. But Six Feet Up are a very experienced team, and they've got a lot of ops capacity to make sure that it's working correctly. Maybe you haven't got that. Maybe you don't want to spend the time it takes to make sure everything's robust. Maybe the best thing for you to do is to deploy your core app with WSGI, exactly as you are now, and then have your async code in an ASGI app on the side, and nginx can route the relevant requests to the ASGI app. Keep it simple, kitten.
Go to bed at night, when you're on pager call, knowing that at least the core of your app isn't going to wake you up, and maybe it's only one or two experimental endpoints that might. So that's one thought. And then the second thought is: double-check everything. Most of the issue reports that I see on channels are problems with the setup. Generally it isn't channels, and it isn't Daphne; it's whatever you've got in front of that. Maybe your load balancer isn't configured for long-lived connections. Maybe your web server isn't using the right HTTP version, so the WebSocket upgrade never quite works. The trouble with issues like that is that I can't help with them, so they sit there forever unanswered. Okay, so build it out in stages. Check it works with just your app. Check it works going via just the web server. Then check it works going via the load balancer. Check it with an active connection, one that's noisy. Then check it with an idle connection, because lots of people go "oh, it's all working fine", and then an idle connection just disconnects all the time. Why? Because of a request timeout, maybe at the load balancer level, wherever. How long do you want connections to be open for? Ask that, and then test that they can stay open that long. "I want them to be open for an hour." Okay, so have you actually got tests that they can stay open for an hour with your setup? Do you need to be sending periodic lifetime events? A lot of this can be fixed just by sending a heartbeat. Your load balancer will kill your connection after 30 seconds? Then every 20 seconds, send a heartbeat, bam, and that keeps it alive. It fixes lots of problems. And then the third point, in the same vein, is to have fun, right? Async is really interesting. There's lots to learn, lots to experiment with. I think of it as programmer catnip. So have fun. What are you waiting for?
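A heartbeat like the one described above is simple to sketch with plain asyncio. This is my own sketch, not code from the talk; the send callable stands in for whatever your framework gives you to write to the response:

```python
import asyncio

async def heartbeat(send, interval: float = 20.0) -> None:
    """Periodically send an SSE comment frame so intermediaries don't
    reap the connection as idle. Keep the interval comfortably below
    the shortest idle timeout in front of the app, e.g. 20s for a
    load balancer that kills connections after 30s."""
    while True:
        await asyncio.sleep(interval)
        # Lines starting with ":" are comments in SSE; clients ignore them.
        await send(b": keep-alive\n\n")
```

You would run this as a background task alongside the real event stream and cancel it on disconnect.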
I'm Carlton Gibson. I'm your friendly Django Fellow. I'm @carltongibson on Twitter and GitHub; you can find me there. If you haven't listened to the podcast, do check it out: djangochat.com. I hope you enjoyed the talk, and I hope you've got a few ideas about whether, when and how you can add async to your Django application. If you've got any questions, I'm really happy to talk through the rest of the conference. Thank you.