speaker 1: Hi everyone. Welcome to "Working with PostgreSQL When Your DBA Is Not Around." My name is Richard, and I'm very excited that you could join DjangoCon 2023 and attend this presentation. Just a little bit about myself: I'm a software developer and support engineer at EnterpriseDB (EDB), where we help customers get the most out of Postgres. Prior to working at EDB, I was a Perl web developer, and I eventually became a Postgres DBA because our DBA moved on to another organization. I worked on my first Django project in 2020, during the pandemic, and because of the great experience I had, I am here today. I've been using Postgres since the early 2000s, and I know it can be a bit intimidating for people, so that's why I'm sharing this presentation with you. So who is this talk for? It's for Django developers. Postgres is one of the default database engines supported by Django, and if you want to get the most performance out of it, this talk will definitely help you get started. Or maybe someone else manages your database in different environments and you'd like to get a little more involved, to collaborate and to see how your Django app can be improved by tuning the database. This talk assumes that you're not intimidated by command line interfaces and that you know a little bit about Linux, just enough to be dangerous. Or maybe you're like me: someone else manages your database and that person is on vacation, or that person quit, or maybe you never even had a DBA, and now you need to learn how to use Postgres. If any of these things describe you, this talk is for you. So where do we begin? Well, with Postgres there's a lot to cover. These are just some of the topics that could each fill a 30 or 40 minute talk of their own, and I don't think we can squeeze all of them into a 45 minute talk, so we're just going to go over a few of them today. Here's an idea of what we want to achieve in the next 40 minutes. First, we want to be able to log in to the database and to start and stop it. There might be situations where the database is down, and if you can get in, maybe you can start it back up. Maybe you want to take a backup of the database before some catastrophic damage occurs: your DBA is not available, some kind of performance issue is coming, or you're seeing a lot of errors in your app, and you want a backup just in case something worse happens. Then we also want to be able to diagnose performance or stability issues by reading the logs, and to identify changes that can improve performance in the database. Finally, we want to understand Postgres's directory and file structure, so you know where to look for things, where not to look, and what not to delete. So just a quick roadmap: we're going to get into the database, look around, understand how it's all set up, figure out how to do some maintenance on it, look for ways to improve performance, and then go over things that you don't want to do and where to find help. So without further ado, let's get started. We want to be able to start and stop the database, so first of all, you want to have SSH access to the database host.
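Once you know which host to reach (finding the host name is covered next), a first look after logging in might be something like this; the host and user names here are hypothetical:

    ssh deploy@db.example.com        # hypothetical user and database host
    systemctl status postgresql      # is the service running? (name may include a version)
    ps aux | grep postgres           # are any postgres processes up at all?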
You'll need to either talk with the DBA or look at your application config to see what the host name is for your development, staging, or production database, whichever one you need to work with at the moment. Just a side note: if you're using a managed service like RDS or Azure, you will not be able to get into the database host itself using SSH, so you'll have to use the provider's console to do the starting and stopping. Once you've SSHed into the machine and you discover that the database is down, before starting it back up you'll want to do some sanity checks to make sure it's safe to bring it up. First, make sure there's enough disk space and that there are no full partitions, using the df command. Then look at the database logs to figure out why it was shut down to begin with. We don't want to bring up a database just because it's down; we want to figure out what caused it to go down, because if you bring it back up, it might just shut down again by itself. Assuming all of that is cleared, the way we typically start Postgres is with the systemctl command, just like any other Unix service: systemctl start postgresql. For some older versions and some distributions, you'll need to tack a version string onto the end, so look at your listing of systemctl services to choose the right service to start. Under the hood, systemctl calls pg_ctl, which is the actual command that starts the database. So if you don't want to use systemctl, or it feels uncomfortable or a little too confusing, pg_ctl is the command you actually want to use. In order to use it, you'll need to know where the database lives. Oftentimes that will be stored in an environment variable called PGDATA, and if you don't know what that is, you'll need to look for a directory containing a postgresql.conf file; that directory is usually where the database lives. So, assuming you know where the PGDATA directory is, you can use pg_ctl, pass in -D for the data directory, and then start, and that will start the database. Now let's say you need to stop it for some reason. Same idea: you use pg_ctl, pass in the data directory, and run stop. A plain stop in smart mode will wait for all sessions to end before actually stopping the database (on older versions that was the default). So if you have a coworker who is actually using the database, it will not stop until that person is out of the database. A lot of times my customers run into situations where the person using the database is gone for the day, and a simple stop will not work. In those situations, you'll need to use pg_ctl with -m fast: the -m stands for mode and the f is for fast. Fast mode will cancel all the queries, terminate all the connections, and then stop the database. Sometimes even that's not enough; maybe some kind of background process is running and the database is wedged, stuck. In those situations you would use -m immediate, and an immediate stop actually, under the hood, causes the database to crash. So you're basically crashing the database on purpose.
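Putting those commands together, a minimal sketch, assuming PGDATA points at your data directory:

    pg_ctl -D "$PGDATA" start
    pg_ctl -D "$PGDATA" stop -m smart       # waits for all sessions to end
    pg_ctl -D "$PGDATA" stop -m fast        # cancels queries, terminates connections, then stops
    pg_ctl -D "$PGDATA" stop -m immediate   # crashes on purpose; use very sparingly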
Crashing it that way is okay in most cases, but you need to know that once you start the database again, it will come up in recovery mode, which can take a very, very long time, because Postgres will do consistency checks and WAL replay against all the data, and that could be several gigabytes, hundreds of gigabytes, or even terabytes. So use the immediate stop very sparingly. Okay, so let's say the database is up and now you want to connect to it. What you'll need to know is the host name for the database, plus the port, username, and password. If you don't know those things, ask your DBA. If you can't talk to the DBA, look in your application config, your password manager, or AWS Secrets Manager, wherever your application gets the database connection information. Once you have that information, you'll use a program called psql, which is the database command line interface that Postgres ships with. Some people prefer a GUI; options include pgAdmin and DBeaver, and you can Google those to find more information. So you pass psql your host, port, and username, and then you type in the password, and this is the kind of output you get. At the top you see psql, the -h is for the host name, the -U is for the user, and the second edb_admin there is the name of the database you want to use. In Postgres you can have multiple databases, which are different workspaces for people to use; the way it's organized is a bit like a namespace or schema in Oracle, and if you've used MySQL or SQLite before, each database is its own separate thing. Notice that I don't pass in the port here, because it's using the default, which is 5432. If you need a non-default port, you pass in a lowercase -p. Once I'm in, I get a prompt: edb_admin, an equals sign, and a greater-than sign. That's the typical prompt, the database name followed by =>, and then you can start typing queries. In this example I use \d or \dn. \d will list all the tables, sequences, and views that can be accessed by the user at this very moment. Sometimes there are other schemas, or namespaces, that are not listed by default, so if you want to see what namespaces there are, you use \dn, for namespace. As you can see, there is a namespace, or schema, called myschema, and there's also the results schema. Okay, so now we're in; let's explore a little bit more. What else is going on? Maybe there's some kind of performance issue, or maybe you just want to know how many connections there are and what people are doing. You can run a query: SELECT * FROM pg_stat_activity. What that does is show what's going on at that very instant. The caveat here is that, depending on the username you provided, if you're not a superuser you will not be able to see everything, and on Azure and RDS you will have limited visibility as well. Something else that's important is to know where the logs are. If you run SHOW log_directory and press enter, it will give you the directory where all the logs are being written, and from there you can see historical data: what queries ran, how long they took, things like that.
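As a quick recap of that session, using the talk's example user and database names (the host is hypothetical):

    psql -h db.example.com -U edb_admin edb_admin    # add -p for a non-default port
    edb_admin=> \d                          -- tables, views, sequences visible to you
    edb_admin=> \dn                         -- list schemas (namespaces)
    edb_admin=> SELECT * FROM pg_stat_activity;
    edb_admin=> SHOW log_directory;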
If you're using RDS, I think the logs will live in the console and you'll have to get at them from there. Now, from within the database you can also do things like cancel a query or terminate a session. These are pretty powerful tools. You'll need superuser access to do that to arbitrary sessions, but it also works within a shared user: say I have an edb_admin user, or a django user; if I log in as the django user, I can cancel other django user sessions' queries, so you can control the queries and activity of the things you have access to. This is done with two functions, pg_cancel_backend and pg_terminate_backend. pg_cancel_backend takes a process ID and cancels whatever query that backend is running; pg_terminate_backend takes the process ID and ends that session altogether. These are useful if you have the access and you're trying to break some kind of infinite loop or resolve a problem in how your application is interacting with the database. Here's a snapshot of the pg_stat_activity output. As you can see, it's in a table format, and because the rows are so wide it wraps around, which is a little hard to read. What you can do is use the \x command, which turns on the extended display. That lists each row out with its columns stacked vertically, as you can see here on the left: for the first row, record 1, you see all the columns, datid, datname, pid, leader_pid, and so on, and the second row is listed as record 2 with all its columns shown the same way. That can be very useful, especially if you have large, wide tables to look at. Okay, in the interest of time I won't be able to go too deep into this, but let's move on to configuration. All the configuration lives in the postgresql.conf file, and postgresql.conf, once again, usually lives in the PGDATA folder. In some installations, like a Debian or Ubuntu package install, you'll probably find it under /etc/postgresql instead. If you want to look at the current state of the configuration, you can use a command called SHOW ALL, which prints out all the configuration parameters and their values. Some of these parameters can be changed without a restart, so you can change them on the fly, either just for your particular session or for all users, depending on whether you have superuser access. Another way to look at the configuration is with this query: SELECT name, setting FROM pg_settings WHERE context IN ('sighup', 'user'). I share this with you because the user context marks parameters that you can change for yourself, and you can actually tell Django to make those changes for its own connections as well. For sighup, if you have superuser access, you can make changes for all users and reload the configuration, and then have different values for those parameters, which might be useful in certain situations; sighup is the context where values can be reloaded without actually restarting the database. If you're using the database manually through psql, you can use SET param TO value to set a parameter to a certain value for your particular session.
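Two small sketches from this stretch; the process ID and the parameter value are just examples:

    -- cancel a running query, or end the whole session
    -- (12345 is a hypothetical pid taken from pg_stat_activity)
    SELECT pg_cancel_backend(12345);
    SELECT pg_terminate_backend(12345);

    -- which parameters can change per session ('user') or with a reload ('sighup')
    SELECT name, setting, context FROM pg_settings WHERE context IN ('sighup', 'user');
    SET work_mem = '64MB';     -- affects only this session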
And then if you need to make a change for the entire system, you would use ALTER SYSTEM SET param TO value. I'll show you an example of how you can use this a little later. When you're done making these changes, whether a SET or an ALTER SYSTEM SET, you'll want to run SELECT pg_reload_conf() to reload the configuration. Or, if you have root access to the OS, you can go into the shell and do systemctl reload, or find the process ID of the postmaster process and send it a kill -HUP. Okay, some things that you might want to look at. We find a lot of customers needing to change these because of how Postgres works out of the box. The first one is search_path. Like I said earlier, the namespaces are not all shown by default. When I first ran \d, you only saw the two views in the public schema, and when I ran \dn you saw there were tables in other schemas, like the myschema schema and the results schema. To make those all show up by default, you can change the search path: you can say SET search_path TO results, public, and then when you run \d it will look in both the public schema and the results schema to find out what tables are available. So for search_path, you may want to make that change to include certain namespaces by default. Some customers prefer not to do that and instead use fully qualified table names: when you do SELECT * FROM table, it looks in public by default, and when you want a different schema you say SELECT * FROM results.table, and then it looks in the results schema instead of the public schema. work_mem is a pretty important parameter. It defines the memory allocated for things like sorting and hashing. If you join many tables together, filter them, and need to sort them, all of that happens in an allocated chunk of memory called work_mem. Sometimes the default is not enough, and what ends up happening is the query is so big that it spills to disk, and because disk is slower, the query becomes slower. By increasing work_mem accordingly, you can get faster performance from the database. You can do SET work_mem for that particular session, or you can do ALTER SYSTEM SET work_mem, like I mentioned earlier. Now, if you do an ALTER SYSTEM and set work_mem globally, that can be dangerous, because every session can allocate that much memory to work with. If you have, say, 100 users on your Django app and you allocate 1 GB of work_mem, you can quickly end up allocating 100 GB, which you may not have available on that machine. So you only want to set the global work_mem to something that fits the average query, and then raise work_mem on a per-session basis to meet whatever performance requirements you have at that moment. Finally, I think you may want to change, or at least look at, the log parameters. They control what gets logged in the database logs: things like when a process began, when a user logged in, which user ran a query, and where the connection came from. So if you have a whole cluster of web servers and you want to know which web server issued a particular query, that IP address or host name will get logged there.
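A hedged sketch of that workflow for system-wide and per-role changes; django_user is a hypothetical role name:

    -- a system-wide change; be careful with work_mem, every session can use this much
    ALTER SYSTEM SET work_mem = '16MB';
    SELECT pg_reload_conf();                -- reload without a restart
    -- a middle ground: change the default for one role instead of the whole system
    ALTER ROLE django_user SET search_path = results, public;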
I want to take this moment for an aside: database logs are not the same as WAL logs. So what are WAL logs? WAL stands for write-ahead log, and it's basically a journaling system that Postgres uses as a means of crash and disaster recovery. The WAL files live in PGDATA, in the pg_wal directory. I share this with you because some customers go in there expecting to see logs, because they think, oh, a write-ahead log must be a database log, and that is not the case. When you look in pg_wal you will not find anything useful to read, and in fact you should not touch anything in there, because if you do, you could corrupt your database or prevent disaster recovery. The way it works is that whenever an insert, update, or delete occurs, that change is recorded in memory, and when it's committed it gets flushed to a WAL file. At checkpoints a lot of different things happen, but basically the changes recorded in pg_wal get merged into the actual data files of the database. The reason Postgres does this is that it helps maintain performance: WAL files are only 16 MB each, while the database files themselves can be tens of gigabytes and really large, and writing directly to those for every change would slow the database down quite a bit. So once again: do not look in pg_wal for anything useful, and do not delete those files, ever. This screenshot gives you an idea of what things look like. The top line, postgresql/15/main, is the PGDATA folder. Within the PGDATA folder you'll see base, global, pg_notify, pg_wal, pg_xact, and so on; all of these are things Postgres uses to make the database run. And as you can see, the pg_wal folder is just a bunch of strangely named files, 16 MB each. Those are not useful to you; they're binary data that you as a user or a developer would not be able to make sense of. Okay, all right. So we've talked about configuration, a little bit about logs, and about WAL. Now, how do we control who gets access to the database? That's controlled by a file called pg_hba.conf. The pg_hba.conf file basically allows connections to specific databases by specific users from specific IP addresses, and that's nice because you can control which users get to connect to which databases; the django user, for example, would not need to look at some secret database that's for HR or something like that. Any change you make to pg_hba.conf is something you can reload without restarting the database, using pg_reload_conf() or kill -HUP. By the way, the hba in pg_hba stands for host-based authentication: based on the host, you can control who gets to access the database. In this example screenshot, you can see it says host, all, all, and then 127.0.0.1/32. That means any user connecting from localhost will be able to connect to any database. You can change that CIDR mask: you can open it to all the IPs in your data center, or all the IPs in a particular subnet. By adjusting those entries, you control which connection requests can connect to which database. In this particular example, you can see it's wide open locally.
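For illustration, a couple of pg_hba.conf lines; the authentication method, the app_db and django_app names, and the subnet are assumptions:

    # TYPE  DATABASE  USER        ADDRESS         METHOD
    host    all       all         127.0.0.1/32    scram-sha-256
    host    app_db    django_app  10.0.1.0/24     scram-sha-256
    # after editing, reload: SELECT pg_reload_conf();  or  systemctl reload postgresql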
If you want to clamp down on who gets access, you make changes like that and then reload. Okay, now I'm going to go into some maintenance topics. We're going to talk about vacuuming. As a developer, you might get into the database, see that something's running slow, not know what's going on, and then notice that vacuum is taking up a lot of disk I/O and running kind of slowly, and you'll be tempted to terminate those vacuum processes. I'm going to explain why vacuum is important and why you should not necessarily kill those processes right away. Vacuuming helps maintain performance by preventing bloat. The reason the database can bloat is that deletes and updates don't actually remove data from the database; they simply flag a row as deleted, because that row might still be visible to someone who is inside a transaction. It's the mechanism that provides visibility across multiple sessions, called MVCC, multi-version concurrency control. So these updates and deletes don't modify the existing data in place. Once rows are flagged as deleted, they're just there, and if you keep updating and deleting, you end up with a lot of rows that are flagged and invisible to most users, and they take up space. Wouldn't it be nice to reuse that space once the rows are deleted, no one can see them, and all the sessions that could see them have ended? It can be reused. That's what vacuum does: it scans through the database tables and flags that space as reusable, so a future update or insert can reuse those rows. So as you can see, vacuuming is a very, very important part of keeping your database trimmed and making sure things continue to run at a good performance level. There's a program built into Postgres called autovacuum. What autovacuum does is, at periodic intervals, which is every minute by default, it looks to see whether it's time to vacuum a particular table; if not, it looks at the next table, and the next one, and the next one, finding out whether any tables need to be vacuumed. Sometimes it finds a table and says, hey, let's vacuum this one, it's about time. That's all controlled by the autovacuum parameters, which are in postgresql.conf. Now, sometimes those tables are really big and they need to be vacuumed. You can kill the vacuum process, but a minute later, when autovacuum wakes up again, it will say, oh, I need to vacuum this table, and it will start vacuuming it again, and you'll have a perpetual situation of slow performance because it keeps trying to vacuum this really large table. So usually I recommend that customers just wait for the vacuuming to finish, but if they absolutely cannot wait, because it's preventing users from working or using their application, you can terminate that backend, basically kill the query, and then vacuum the table manually. Oftentimes the vacuum is slow because a cost delay is set, so you want to run it manually with the delay set to zero, and you'll probably also want to raise maintenance_work_mem to give the vacuum process more memory to work with, and hopefully it will go a little faster.
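A sketch of that manual-vacuum fallback; the pid and the table name are hypothetical:

    -- only if you really cannot wait: end the autovacuum worker on that table
    SELECT pg_terminate_backend(23456);        -- pid found via pg_stat_activity
    -- then vacuum it by hand with no throttling and more memory for this session
    SET vacuum_cost_delay = 0;
    SET maintenance_work_mem = '1GB';
    VACUUM (VERBOSE) big_table;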
So that's a little bit about maintenance related to vacuuming. Another maintenance task is taking backups. Maybe you're working in a development environment and you want to take a snapshot of your database, to save that data and be able to restore it again later. You'd use a program called pg_dump. What it does is produce a plain-text dump of the database. It's a bit like psql in how you invoke it: it takes the host name, user, and database, and out comes a human-readable snapshot of the database with all the commands, CREATE TABLE, INSERT, and so on. You can limit which namespaces or which tables get dumped by passing the related flags, and you can also tell it to compress the dump and produce a binary-format version of it. The thing about pg_dump is that it translates all the binary data in the base folder, all the database files, into plain SQL, so what you end up with is something that will, 99% of the time, load into a database without any errors, and it will not copy any corruption that might exist in your database. So taking a pg_dump is pretty important, especially in a production environment, as a safeguard against corruption. The alternative is to take a pg_basebackup. pg_basebackup is a bit different from pg_dump: it takes a snapshot of the entire PGDATA directory, so it copies the database files as they are, in their binary format. It doesn't try to convert anything into something human readable, but it includes everything, indexes, foreign key constraints, things like that, so it's good for preserving the state of a database really quickly, because it doesn't have to spend any extra effort translating things into SQL. In order to take a pg_basebackup, you'll need to set max_wal_senders, because it's treated as if you were creating a replica database, which requires a WAL sender to stream the WAL and binary data, and you'll need to invoke pg_basebackup as a user that has the replication privilege. Once again, it's faster because it doesn't do any translation, but if the database was corrupted to begin with, it will copy that corruption along with it. Okay, all right, let's see. We're going to talk about monitoring now; I think we're getting pretty close to the end. For monitoring, you'll want to look at the logs. At the very beginning I mentioned the log_directory setting, and within that directory you'll see entries recording what happened in the database. Depending on how things are set up, you may want to change these two parameters: log_line_prefix and log_min_duration_statement. log_line_prefix prefixes every entry in the log with the things you define: a timestamp, an IP address, the user, and the database. By default, Postgres logs little more than the timestamp, so if your DBA hasn't done this already, you may want to recommend that it be changed so you can do a better job of tracking down where queries came from. I have a separate talk on this, and it takes a while to go into the details, so I'm just going to mention that log_line_prefix is something you want to change. The other one is log_min_duration_statement.
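Two quick sketches from this stretch, the backup commands and the logging parameters; the host, the replication user, and the exact prefix format are assumptions:

    # plain-text logical dump of one database
    pg_dump -h db.example.com -U edb_admin -d edb_admin -f edb_admin.sql
    # binary copy of the whole data directory; needs a user with the replication privilege
    pg_basebackup -h db.example.com -U replicator -D /backups/base -Ft -z -P

    # in postgresql.conf (takes effect on reload)
    log_line_prefix = '%m [%p] %u@%d %h '      # timestamp, pid, user@database, client host
    log_min_duration_statement = '1min'        # log any statement slower than one minute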
log_min_duration_statement works like this: if a query takes at least that amount of time, the statement gets logged, SELECT * FROM whatever that query was asking for; if it doesn't take that much time, it won't get logged. So let's say you want to find all the slow queries, queries that take more than one minute. You set log_min_duration_statement to one minute, and then anything taking less than a minute won't get logged, you'll never know it was called, and anything that takes more than a minute will get logged along with the amount of time it took and the query that ran. Other parameters you might want to look into: the log_statement parameter logs all statements before executing them. That won't help you identify slow queries, but it can help you spot patterns, like, oh, this query is getting called a lot, which might lead you to investigate deeper into other performance problems with that query. log_min_error_statement only logs statements when a certain error threshold is reached. By default that's ERROR, so if a query has bad syntax or the table doesn't exist, it prints an error message saying this query failed. In some situations you may want to log things at the WARNING level as well; warnings are pending issues, like maybe a transaction ID wraparound approaching, something that could cause a problem in the future, and the query gets logged with it. FATAL and PANIC are things that should cause concern and investigation: PANIC is when the database crashes altogether, and FATAL is when a session gets ended for some reason or another. So by default, ERROR, FATAL, and PANIC will all print the statements, and you'll be able to see what caused those messages to come up. log_duration is not that useful: it just prints out a duration, without the query, so in my experience it hasn't been that helpful, but some people may just want to track durations and do some kind of graphing of those values. log_min_duration_statement is definitely more useful, because as a developer you want to know which query caused the slowness. If you want to use pg_stat_statements, that's an extension Postgres provides that collects historical data in one view: which queries were called, how long they took, things like that. Okay, performance. So we've identified some queries that are slow, and we want to know why they're slow, and our DBA can't help us with that right now. So we go in and look: how do you tell how fast a query is running? You use something called EXPLAIN. EXPLAIN has two flavors: plain EXPLAIN and EXPLAIN ANALYZE. Plain EXPLAIN just tells you what the query plans to do; EXPLAIN ANALYZE tells you what it plans to do, how it actually executed, and all the statistics that came out of it. If you're able to, there's an extension called auto_explain which, if you set it up correctly, will print EXPLAIN ANALYZE output for queries that cross a certain slowness threshold. As a support engineer, I've found this very useful many times, because our customers use an ORM, and sometimes what the ORM does is a little bit unpredictable or hard to know.
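auto_explain is a contrib module; a minimal way to try it for one session, assuming you have the privileges to load it, might look like this:

    LOAD 'auto_explain';                           -- or have it set in session_preload_libraries
    SET auto_explain.log_min_duration = '500ms';   -- only log plans for queries slower than this
    SET auto_explain.log_analyze = on;             -- include actual times and row counts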
EXPLAIN ANALYZE will basically tell you what the ORM sent to the query planner and how the query planner executed it. So here's an example of EXPLAIN and EXPLAIN ANALYZE. At the top, we see EXPLAIN with a SELECT on pgbench_accounts, and what you see is just that it scans the two tables and then joins them together with a nested loop; that's what it plans to do. In the bottom one you see EXPLAIN ANALYZE with the same query, and it shows you the same query plan, but it also shows the actual time and the number of rows it actually found during the scans. As you can see here, the query took 25 milliseconds to do the sequential scan on pgbench_accounts, then 0.025 milliseconds for the scan on pgbench_branches, and after joining it all together and sending it back to the user, it took 61 milliseconds. So this is very useful for identifying bottlenecks in your queries. Now let's say you identify that there's a problem. What you'll want to do first is make sure you're using the correct data types. Some customers have the habit of using text, or int, for all of their columns, and that isn't the best way to use Postgres. Postgres has the ability to build indexes on specific data types, and if you set your data types up correctly and index correctly, you'll actually get better performance. I know a lot of customers use JSON, because the whole world is using JSON, but JSON is a text format and it's hard to build indexes on. Use JSON only when you have to; try to extract the data in your application and insert it into proper columns, so that indexing and queries can run faster. Having indexes is very important, because an index is basically a shortcut to the data your query wants. Here's an example. I've updated the database, setting bid equal to aid. In the real world you may never want to do that, but let's say in this situation we do. Now, if I run EXPLAIN ANALYZE on pgbench_accounts, I notice I'm doing a sequential scan and it's taking 45 milliseconds. In the real world 45 milliseconds may be pretty fast, but in the application world it can be kind of slow. We're doing a sequential scan for this value, WHERE bid = 1: we scan the entire table, only to discover that just one row has bid equal to 1. Why do we need to scan the entire table? That's a waste of time. If you have an index, it basically tells you which rows, which parts of the table, have bid = 1 and points you only to the particular row you need. So if I create an index like you see here, CREATE INDEX pgbench_accounts_bid_idx ON pgbench_accounts (bid), and then run the SELECT again, you can see that this time I do an index scan and it finds my row in 0.77 milliseconds, and once it's found, it grabs the data I need and presents it back to the user in less than 12.12 milliseconds. So using EXPLAIN ANALYZE is actually very, very useful when it comes to improving the performance of your Django app. Now, things to avoid, what not to do in the database. Really quickly: don't ever call kill -9 on any Postgres process. What that does is cause Postgres to crash and enter recovery mode, and then it has to scan the data files and replay the WAL, which can take a very long time.
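If you ever do need to signal the server by hand, a safer sketch, with a hypothetical process ID:

    # the postmaster pid is the first line of postmaster.pid in PGDATA
    head -1 "$PGDATA/postmaster.pid"
    kill -HUP 4242          # safe: reload configuration (same effect as pg_ctl reload)
    # kill -9 4242          # never: forces a crash and WAL replay on the next start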
That crash recovery can be very slow and cause an outage for your application. Another thing: pay very careful attention to idle transactions. As a developer, always commit or roll back any queries you run; when you're using psql manually, make sure you commit or roll back and get out of psql so there are no transactions left idle. We've seen many customers hit this: some coworker started a transaction, got distracted, got up to get a cup of coffee, and never actually rolled the transaction back, and all the other sessions piled up because they needed a lock on a table and couldn't get it, which slowed down the database and basically caused an outage. So use pg_stat_activity to look for the phrase "idle in transaction" and deal with those sessions appropriately, and cross-reference with a view called pg_locks, which will help you see what other sessions are being affected by an idle transaction. Okay. When it comes to making schema changes, try not to drop anything: don't drop indexes, don't drop schemas, don't drop columns, don't drop tables. Rename them instead, so that if you ever need to roll back, you just rename the table or the index back to what it was before; that way there's a record of the old data you might need to access. Or at least dump them to a file with pg_dump, so you can come back and use them again if you need to. Once again, do not delete anything from the PGDATA folder, especially the pg_wal folder. We've had many customers who thought they were deleting log files, but in reality they were deleting WAL files, and as a result they had database corruption and usability issues. Finally, if you need help, there are actually a lot of places where you can get help with Postgres: there's a Slack channel, there are very active mailing lists, there's IRC, there's a wiki, and the Postgres documentation is really, really good; I recommend you take a look at it. And finally, if you need person-to-person support, EDB is available to provide support for you and your organization. So thank you very much for attending this talk and hearing this presentation, and I hope you enjoy all the other presentations you'll be viewing at DjangoCon 2023. Bye bye.