Welcome. I don't know anything about anything other than my own show, so I'll just get into it. I've put a couple of questions up here to think about. This is a small enough group that I think we can be reasonably interactive, so even the people at the far end there, which I see is mainly the Enlitic team — you guys are sociable — if you have any questions, comments, or thoughts, yell out, because I don't have any slides. Well, I have one slide, and my one slide is my answer to these questions: what am I doing with deep learning?

I don't know how many of you saw the news, but we had a good week at Enlitic. We're using deep learning to transform how medicine is practiced, and this week we announced our Series B — we've now raised 15 million dollars for that mission — and, more importantly, announced our first major site-wide integration, which is at a company called Capitol Health, the largest, or actually the fastest-growing, radiology service provider in Australia. So it's been an exciting week, because we now have access to tens of millions of patient images, and we now have access to hundreds of radiologists who are ready to start using our deep learning algorithms to help them with their work. This is the first step for us towards our goal, which is that every medical decision will hopefully one day be powered by insights from Enlitic algorithms or Enlitic software. We're starting in radiology because radiology has already been digitized for 20 years — there's a lot of low-hanging fruit there — and also, of course, computer vision is an area where deep learning already has superhuman performance in at least some tasks, so it's a good place for us to start.

I mention this partly to show off, because I'm proud that we've got here, but partly also because it's my sample answer to these questions. For those of you who don't know, I was previously president of Kaggle. After I left Kaggle I spent a year doing nothing but trying to answer these questions for myself: I spent a year speaking to hundreds of industry leaders, scientists, politicians — everybody, basically. What I would do is give a standard deep learning presentation, about twelve slides, a bit like the TED talk I've got which some of you have seen, where I basically said: here's what deep learning can do right now — what would it mean if you had access to that technology? And almost always they would say: that would transform my entire industry. So I spent a year trying to figure out, OK, which industry should we try to transform, and in the end we decided on medicine. First, because there are four billion people in the world who don't have access to modern medical diagnostics, which is not OK, particularly in China and India. And secondly, it's the world's largest industry, eight trillion dollars a year, and those places that don't have access to medicine, like China and India, are where about half of new health IT spending will be coming from over the next ten years — so it's also a huge financial opportunity. That's what it looked like for me when I tried to answer these questions.

I wanted to start here because the rest of the day is going to be tutorials about how to do deep learning, so I wanted to begin with: OK, what should you do with it — or at least a way of thinking about what you should do with it.
So my starting point today is to ask: what is deep learning, really? My answer — and this is way too big a simplification — is that it's basically linear algebra plus optimization. Those are two topics which are not rocket science, and you can demonstrate all the pieces of deep learning in Excel. Not only can you, but I will do it right now.

So here is Excel, the world's most popular functional programming language. Seriously — it's a totally pure functional programming language and it's used by 500 million people around the world, so if you are interested in helping people use and understand deep learning, this is the best place to start, in my opinion. Here's some data: I put together some random numbers and then an a·x + b, and in this case the a·x + b is going to be 2 times x plus 30. So here's an independent variable, here's a dependent variable — how do we go about reverse-engineering the 2 and the 30?

Here's the whole thing. In Excel I said: all right, here are our x's and here are our y's, just copied and pasted. And let's start by assuming that the intercept is 1 and the slope is 1 — let's just assume that. So here we are: row one, data point number one, the input is 14, the output is 58. If the intercept were 1 and the slope were 1, we would be predicting 14 times 1 plus 1 equals 15. Can everyone see that? Want me to zoom in a bit? If you can't see it, just raise your hand.

So how bad was that prediction? Well, it should have been 58 and we predicted 15, so the error squared is 1849. That's not very good. How do we make it a little bit better? We make it a little bit better by changing the intercept and the slope to be a little bit closer to what they should be, and we know which direction that is because we can take the derivative with respect to those two things. We can do it two ways. The first is to use the chain rule — and this is backpropagation. Step number one: the error is basically (a·x + b − y) squared, and the derivative of that with respect to (a·x + b − y) is just 2(a·x + b − y). So that's the first piece of the chain rule. For the derivative with respect to b: what's the derivative of a·x + b with respect to b? It's one, so the derivative of the error with respect to b is just one times that term. And for the derivative with respect to a: the derivative of a·x with respect to a is just x, so we've got x times 2(a·x + b − y). That's backpropagation — the chain rule. That is all backpropagation is: the chain rule. So we can use Excel to calculate our derivatives: here's our derivative with respect to b, here's our derivative with respect to a, and now we're done.

Now that we've got the derivatives, we can change the slope and the intercept to be a little bit better by simply subtracting the derivative from the original value. So here it is: the original slope minus the derivative times something. What's this something? This is the learning rate. In other words, this is just one example — it's not the truth of the matter in every case — so we're only going to go a small step each time, one ten-thousandth of the way there. That gives us a new value of a, and here's a new value of b. And then for the next row in Excel, our new intercept is simply equal to whatever we just calculated, our new slope is equal to whatever we just calculated, and thanks to the magic of Excel we can just fill down. So by the end of it we have our new value for a and our new value for b at the end of one mini-batch. So that's it, really.
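If you'd rather see that loop outside a spreadsheet, here is a minimal Python sketch of the same idea — made-up data from the same y = 2x + 30 rule, the same starting guess of a = b = 1, the same one-ten-thousandth learning rate; the variable names are mine, not the spreadsheet's.

```python
import numpy as np

# Made-up data from the same "truth" as the spreadsheet: y = 2*x + 30
rng = np.random.default_rng(0)
x = rng.integers(1, 50, size=30).astype(float)
y = 2 * x + 30

a, b = 1.0, 1.0          # start by assuming slope = 1, intercept = 1
lr = 1e-4                # the "one ten-thousandth" learning rate

for step in range(5):
    err = a * x + b - y                 # prediction minus target
    # Chain rule (backpropagation for this one-layer "network"):
    #   d(err^2)/db = 2*err * 1,   d(err^2)/da = 2*err * x
    grad_a = (2 * err * x).mean()
    grad_b = (2 * err).mean()
    a -= lr * grad_a                    # take a small step downhill
    b -= lr * grad_b
    print(f"step {step}: a={a:.3f}  b={b:.3f}  rmse={np.sqrt((err**2).mean()):.1f}")
```

Run it and a heads towards 2 fairly quickly, while b crawls towards 30 much more slowly — which is exactly the imbalance that AdaGrad will fix in a minute.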
And I guess the other thing that good people do when they're doing deep learning is check their derivatives, so we can do that in Excel too: finite differencing. Finite differencing simply means asking how much our error would change if we changed our intercept by a tiny bit. So here it is: take our intercept, add 0.01 to it, and see how much the error changes; do the same for a — change it by 0.01 and see how much that changes — and that gives us an estimate of each of those two derivatives. You can see here my finite-differencing estimate is 1202 and my actual calculated derivative was 1204 — OK, that looks good. So this is everything you would do in your Theano or Caffe or whatever: we've got the derivative calculations, we've got the backprop using the chain rule, we've got the finite-differencing check that our derivatives are correct — and it's a line of Excel each. The purpose of this, really, is to say — whether you like Excel or not — that we can do an entire epoch in one line of Excel, with no coding and nothing but plus, minus, divide and times. That's it. So it's not rocket science.
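That gradient check is just as easy to reproduce outside the spreadsheet — a small sketch, using the same 0.01 nudge he uses; the function names and sample numbers are mine.

```python
import numpy as np

def sq_error(a, b, x, y):
    return ((a * x + b - y) ** 2).sum()

def chain_rule_grads(a, b, x, y):
    err = a * x + b - y
    return (2 * err * x).sum(), (2 * err).sum()        # d/da, d/db

def finite_diff_grads(a, b, x, y, eps=0.01):
    # Nudge each parameter by a tiny amount and see how much the error moves
    base = sq_error(a, b, x, y)
    da = (sq_error(a + eps, b, x, y) - base) / eps
    db = (sq_error(a, b + eps, x, y) - base) / eps
    return da, db

x = np.array([14.0, 3.0, 27.0])
y = 2 * x + 30
print(chain_rule_grads(1.0, 1.0, x, y))     # the backprop answer
print(finite_diff_grads(1.0, 1.0, x, y))    # should be very close to the line above
```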
So what would we do next? Well, we'd say: here are the intercept and the slope at the end of one mini-batch, so to do another mini-batch let's take them, copy them, and paste them in as the new starting intercept and slope. So I've just replaced the starting intercept and slope with what we just calculated, and the sheet has now automatically calculated another mini-batch. We could keep copying and pasting, and of course we don't have to do that by hand — we can record a macro, which I have done. Here is my macro: copy the last slope and paste it into the new slope; copy the last constant and paste it into the new constant. And of course what we should also do, as good deep learning people, is visualize our learning, so we also have a graph: each step it's going to take the RMSE and plot the RMSE as well. So let's do five steps: for i = 1 to 5, do one of these steps — copy, paste, copy, paste — and run. OK, done. Here are my first five points, and here is my error by epoch, and interestingly you can see that it's decreasing at a pretty much linear rate. We've got the RMSE down: in step one it went from 150 to 104; in step two it's gone from 104 to slightly less than 104, then slightly less, slightly less — it's going very, very slowly.

So what would we do to fix that? You would maybe increase the learning rate. Well, let's try increasing the learning rate. Uh-oh — what just happened? One nice thing about using Excel for this is that we can fiddle around and see what's going on. What just happened is this: here are the data points we're trying to hit, and we've basically found this function where, if we increase or decrease the value of, say, a — this is a, and this is our error — there's some kind of bowl shape here, and we'd been moving closer and closer to the middle. But when we made the learning rate too high, we actually jumped over to the other side, which caused us to jump back across again, which caused the whole thing to go crazy. So you can see why there's a balance between setting the learning rate high versus low.

So let's put it back where it was. At this rate the error is going to take a very, very long time to get to zero — it's going down by about 0.2 each time. How do we fix that? Well, if we look at these derivatives — this cell is just the sum of the squared derivatives, on average — you can see that the derivative with respect to b is pretty low compared to the derivative with respect to a, so maybe we should do something to make those two more aligned with each other, so that the learning is different for those two different variables. Who's heard of AdaGrad? AdaGrad is one of the most popular tweaks to stochastic gradient descent of the last year or two, and basically all it is — I've done it in Excel; this is exactly the same sheet as the last sheet — is to say: here's our sum of squared gradients, and rather than having a single learning rate, each of a and b is going to have its own learning rate, equal to the overall learning rate divided by the root of that sum of squared derivatives. In other words, each of my two variables' learning rates can now go at a different speed. So this sheet is exactly the same as the last sheet except for that one difference — you can see I've got the grand total of one division being added in here. The cool thing about that is that we can increase our learning rate a lot: I've increased it from one ten-thousandth to 2, so we would expect to see things go much faster. So let's see what happens if we run AdaGrad — and as you can see it's exactly the same, the only difference being that I'm also copying the gradient across at the end of each mini-batch, so each mini-batch will use my new per-variable learning rates. Let's try that: reset, and run five steps again. Last time I think we got to 104.3 with our RMSE, and this time it's got to 71. So that one tweak of having per-variable learning rates, which is just the learning rate divided by the average size of the derivative, has made a huge difference.

So maybe if we run this for another five steps — let's see what happens. OK, we have a problem again. As we got closer and closer to the bottom, things started getting flatter and flatter, and it got to the point where it's very easy to accidentally overstep; and as soon as that happens once, you can see what happens to the derivatives — from that point it just makes things worse, and worse, and worse. So this shows us why being very careful with learning rates is important. In this case what I would probably do is say: all right, we know we can run for five steps without a problem, and then maybe we need to decrease our learning rate, maybe by half; now maybe we can run another three steps — looking good; try another two steps — looking good. So we've got our RMSE down to 39 now, and that's not bad.
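Here is what that AdaGrad-style update looks like written out — a sketch of the standard recipe (accumulate squared gradients, divide the step by their square root), which is the idea in his sheet even if the exact cell formulas differ; the data and the small epsilon term are my additions.

```python
import numpy as np

x = np.arange(1.0, 31.0)
y = 2 * x + 30

a, b = 1.0, 1.0
lr = 2.0                      # AdaGrad lets us get away with a much bigger rate
hist_a, hist_b = 0.0, 0.0     # running sums of squared gradients, one per parameter
eps = 1e-8                    # just to avoid dividing by zero on the first step

for step in range(10):
    err = a * x + b - y
    grad_a = (2 * err * x).mean()
    grad_b = (2 * err).mean()
    hist_a += grad_a ** 2
    hist_b += grad_b ** 2
    # Per-variable learning rate: the overall rate scaled down by each
    # parameter's own gradient history, so a and b can move at different speeds
    a -= lr / (np.sqrt(hist_a) + eps) * grad_a
    b -= lr / (np.sqrt(hist_b) + eps) * grad_b
    print(f"step {step}: rmse={np.sqrt((err**2).mean()):.1f}")
```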
But how many people here actually implement deep learning, or train deep learning algorithms, day to day? Not very many — that's interesting, because you'll find that the vast majority of people who do this do it exactly this way: they run things for a while, their learning rate goes crazy, they go back, they tweak it. At some point somebody even gave this a name — it's called a learning rate schedule — and to me, as we can see from doing it in Excel, that's kind of insane.

So I'm going to propose something which — what did we call it again, time warping or something? — I just built into my Excel sheet. I had to give it a name, because in deep learning one of the rules is that every stupid little tweak has to have a name that makes it sound really impressive. What I thought was: let's copy the same worksheet again, but each time we notice something going wrong, we'll automatically decrease the learning rate. And what does "something going wrong" mean? Well, as I watched the previous sheet, I noticed that when the numbers went crazy, it happened when the average derivative increased by a lot — it more than doubled. So I took my highly sophisticated VB macro and added a little if statement that says: if the average derivative has more than doubled in this mini-batch, then halve the learning rate. This is exactly the same as the previous sheet, with that one additional change.

Let's see what that does. We reset, and we try running it. We start with an RMSE of 308, and this time I'm going to run ten steps without changing anything, and hopefully it's going to automatically fix itself. Let's see what happens... not bad. We've now got down to an RMSE of, what, 11 — and this worksheet took me, you know, half an hour to write. So far we've kind of reinvented AdaGrad and invented a new approach to automatic learning rate annealing, and you can see that in Excel we've got to a point where our optimization, running on top of our linear algebra, is getting our error nearly perfect. In fact, let's run it for another three cycles and see what happens — my goodness, there we go, down to 2.

I won't run you through the next three, but I've got three more sheets — maybe I'll put this on the internet so you can have a look. I tried adding momentum, which is another very popular tweak; I guess AdaGrad and momentum are the two most popular tweaks. Momentum simply says: hey, last time I changed my weight by this amount, so maybe next time I should keep going in the same direction. My entire change to add momentum was to add that one term, which basically says: take the last change, multiply it by something — in this case I said 0.5 — and add it to the next change. So I tried adding momentum, and then I tried adding both momentum and AdaGrad together, and just for fun I'll show you how well that works. So here's my "both" sheet; let's reset. I also tried something else interesting here, which is that for the first five steps we don't use AdaGrad, the idea being that we take five steps to get to a reasonable area of the function, and then for the next fifteen steps we use AdaGrad. It's just something I thought probably makes sense, because when you start training a deep learning model, your first few epochs are qualitatively different from the rest of the optimization: you're in a really crappy part of the function space, and the first few epochs basically just get you into a vaguely sane part of it.
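A sketch of those two tweaks together — the halve-the-rate-when-the-average-gradient-doubles rule from his macro, plus the 0.5 momentum he describes — written as plain Python rather than VB; the threshold check and loop details are my reading of what he says, not his actual macro.

```python
import numpy as np

x = np.arange(1.0, 31.0)
y = 2 * x + 30
a, b, lr = 1.0, 1.0, 1e-4
prev_da, prev_db = 0.0, 0.0          # last step's updates, for momentum
prev_avg_grad = None

for step in range(20):
    err = a * x + b - y
    grad_a = (2 * err * x).mean()
    grad_b = (2 * err).mean()

    # Automatic annealing: if the average derivative more than doubles,
    # assume we overstepped and halve the learning rate
    avg_grad = (abs(grad_a) + abs(grad_b)) / 2
    if prev_avg_grad is not None and avg_grad > 2 * prev_avg_grad:
        lr /= 2
    prev_avg_grad = avg_grad

    # Momentum: carry along half of the previous change
    da = -lr * grad_a + 0.5 * prev_da
    db = -lr * grad_b + 0.5 * prev_db
    a, b = a + da, b + db
    prev_da, prev_db = da, db
    print(f"step {step}: lr={lr:.6f}  rmse={np.sqrt((err**2).mean()):.1f}")
```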
So again, this is now my entire Visual Basic macro. Let's see how well that goes. Run — OK, you can see it zipping along, here's my slope and y-intercept changing, and notice I'm not even using learning rate annealing here: just by making sure my initial learning rate was stable, I didn't need it, and without even worrying about annealing we've got down to an RMSE of 1 in 28 epochs.

And then finally, let's do it all. This one has AdaGrad, learning rate annealing — whatever the hell we called it, time-warp something — and momentum. Here is my whole macro for it, and as you can see it's not exactly complex; we're now up to something like ten lines of code. So let's try running just eight epochs. And in just eight epochs, which is half the number we had before, we've got an RMSE of 6.

So I think there are a few messages here. One is: if you're playing with this stuff, keep it simple, and make sure you can visualize what's going on. And recognize that little tweaks to your algorithm change how fast you can learn things by many orders of magnitude. If you went back to our first sheet, it would have taken something like 6,000 epochs to get to this point; with some of these tweaks I'm down to eight epochs. And this is actually what we find at Enlitic: when we're training these algorithms, these minor tweaks mean that something that used to take weeks suddenly takes seconds. When we started doing CT scans it took, I think, about 10 days to train on a single CT, and we're now down to 0.02 seconds. A lot of people talk about how many GPUs they've got and their distributed deep learning and whatever, but we're still at a point where the level of academic knowledge about how to do deep learning is such that people are finding order-of-magnitude improvements every few months. So compare the idea of spending a lot of time distributing something across 100 GPUs with spending a few hours figuring out a slightly better algorithm: at Enlitic we very much focus on the algorithm improvements. In fact, if you have 100 GPUs, all of your engineering work is going to take a lot longer, because you're thinking about all this distributed computation, and furthermore you're going to get lazy — you're not going to think about how to do it in 0.02 seconds, you're going to think, hey, I've got 100 GPUs, I'd better use them.

So that's my little starting point: deep learning is actually pretty simple. The only step I missed was this — I had my linear algebra piece, which was the y = ax + b thing. The only difference when we do deep learning is that we stack these things together, and between each stack we add one additional line which basically says: if this is less than zero, set it to zero; if it's greater than or equal to zero, leave it as it is. That adds a non-linearity, which looks like that, and you stack these non-linearities on top of each other. This thing here — I told you everything has to have a fancy name — is called a rectified linear unit, or ReLU. So don't get distracted; I can see a lot of you haven't done much deep learning before, so don't get distracted by all these new words people invent. When you see "rectified linear unit", you can think: I know what that is — it's one more line of Excel.
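Stacking the pieces looks something like this — a tiny sketch of two of those a·x + b layers with the if-less-than-zero-make-it-zero line in between; the layer sizes and random weights are just placeholders.

```python
import numpy as np

def relu(z):
    # "If it's less than zero, set it to zero; otherwise leave it alone"
    return np.maximum(0, z)

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))                    # 5 examples, 3 inputs each

# Two stacked linear (a*x + b) layers with the non-linearity in between
W1, b1 = rng.normal(size=(3, 4)) * 0.1, np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)) * 0.1, np.zeros(1)

hidden = relu(x @ W1 + b1)                     # the rectified linear unit
output = hidden @ W2 + b2
print(output.shape)                            # (5, 1): one prediction per example
```

Training it is the same story as the spreadsheet: take derivatives through the stack with the chain rule and nudge every weight a small step downhill.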
So then the next question I posed at the start was: what is it for? What can we do that we couldn't do before? It turns out that stacking these things on top of each other and running an optimizer through them does something pretty neat. For example, if the thing you're running the optimizer over is one and a half million images — and this is some great work from a couple of years ago by Matt Zeiler, who's now at Clarifai and at that point was at NYU — it turns out that the first set of weights, that first stack, automatically discovers simple geometric shapes like these; the second layer of weights automatically discovers slightly more complex geometric shapes like these; by the time you get to the third layer it has discovered these kinds of patterns, which automatically match these kinds of images; and by the time you get to the fifth layer it has automatically discovered things that uniquely identify unicycle wheels, or things that uniquely identify bird and lizard eyeballs, for example. So this incredibly simple Excel spreadsheet, when you run it on a GPU that can handle 16 trillion operations per second — which is more than Excel — this is what it turns into. Very, very simple basics turn into very, very complex things.

What this lets us do, which we couldn't do before, is learn from unstructured data. I've been doing machine learning for 20 years, and during nearly all of that time we had to work with things that looked like CSV files or spreadsheets — things with rows and columns — and that was what we ran our random forests on, or our regularized logistic regression, or whatever. With deep learning, though, we can take any kind of data and automatically build these incredibly rich features, just using the training approach we constructed in Excel. Which means we can now do the same things we used to do with structured data with things like images; natural language; at Enlitic, 3D MRI and CT; audio — some of you may have seen that blog post from a year or so ago by Sander Dieleman on taking songs and automatically categorizing them using deep learning; any kind of signal or time series. These are all things that, until a couple of years ago, there really weren't good ways to compute with. With deep learning, we now can.

So if you're thinking about doing a PhD, starting a company, or joining a company, the question you should be asking is: what are the areas in the world where these things would make a difference? For example, doctors look at all of these things to decide how to treat you, to try to keep you alive — in other words, if you give doctors access to these tools, they could save millions of lives and avoid millions of unnecessary treatments. There's one example. Another example: anybody in intelligence searching for troop movements on the ground is basically looking at that and that — so why not automate it for them? Anybody trying to figure out when their drill in fracking or whatever is about to fail is basically looking at a time-series signal and doing detection on it, so you can replace that with deep learning. Et cetera, et cetera. Sometimes I hear from people who tell me they've created a deep learning company, and you know what that reminds me of?
In the early 90s, everybody told me they were creating an internet company. What does that even mean, an internet company? An internet company is the same kind of thing as a deep learning company: it's mistaking the tool for the thing you use it for. Amazon is internet for books, and increasingly after that internet for shopping; Google is internet for search; and so on and so forth — internet for X. Anybody using the internet in the early 90s could tell that everything was going to get onto the internet, and I think everybody here can tell that everything is going to use deep learning. So what are you going to do with it? Deep learning for what? If the answer is "I'm building a deep learning company", just recognize that you're building the same kind of commoditizable foundation that an internet service provider was in the early 90s — you're building something that's basically going to look the same for everybody. Somebody will win, and the person who wins will become the AOL, or whatever, of its time. In fact, you're going to hear today from possibly the one that will be that AOL, which is Nervana, because they're literally building hardware specialized for this purpose. So unless you're like Nervana — you have neuroscientists and electrical engineers and you're going to build a whole new hardware platform — don't try to be a deep learning company. Think about all these things that no one has ever been able to do before. What are you going to do — deep learning for what? If you're looking at doing a PhD: deep learning for what? Deep learning for medicine, deep learning for oil and gas, deep learning for finding terrorists — I don't know.

The final thing I'll mention before I wrap up: as you solve your hopefully genuinely important, impactful problem using deep learning, think about the current state of the tools. As you all know, they all require code. What percentage of the world can code? The percentage of the developed world that can code is under 1%. So if you build a deep learning tool to help you solve your problem, and it requires code, you've just basically said to 99% of the world: I am not interested in helping you. Almost every deep learning tool today requires manual tuning — like that learning rate schedule I showed you, or setting all kinds of parameters, and so forth. Why spend a lot of time trying to get an extra 0.1% of performance rather than spending the same amount of time removing all of that manual tuning for your users and doing it automatically? So what have we got so far: parameter tuning, and code. Another one: none of the current generation of tools have error bars. What's the point of giving somebody a prediction if you don't tell them how confident you are in that prediction — what are they meant to do with it? And again, these are all things we already have solutions to: Michael Jordan at Berkeley has a paper called the Bag of Little Bootstraps, which can take any black-box algorithm and add error bars to it. How come nobody's done that for deep learning yet? Everybody who wants to use deep learning in practice needs this functionality.
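To make the error-bars point concrete, here is a deliberately simplified sketch of bootstrapped error bars around a black-box predictor — plain resampling rather than the actual Bag of Little Bootstraps procedure, with a least-squares line standing in for whatever model you like; every name and number here is illustrative.

```python
import numpy as np

def fit_black_box(x, y):
    # Stand-in for any black-box learner; here, just a least-squares line
    slope, intercept = np.polyfit(x, y, 1)
    return lambda xq: slope * xq + intercept

def bootstrap_interval(x, y, xq, n_boot=200, alpha=0.05):
    rng = np.random.default_rng(0)
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x), size=len(x))     # resample with replacement
        model = fit_black_box(x[idx], y[idx])
        preds.append(model(xq))
    # The spread of predictions across resamples gives a rough confidence band
    return np.percentile(preds, [100 * alpha / 2, 100 * (1 - alpha / 2)])

x = np.arange(1.0, 31.0)
y = 2 * x + 30 + np.random.default_rng(1).normal(0, 5, size=x.shape)
print(bootstrap_interval(x, y, xq=40.0))   # rough 95% interval for the prediction at x = 40
```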
And in practice, this parameter tuning: there are plenty of algorithms that can automatically tune parameters, so why are they not used automatically every time you use one of these tools? As far as I know, NVIDIA is the only company that has even tried to build a good code-free deep learning toolkit. I think a lot of us have got distracted by things like the ImageNet competition — it gets far, far too much press when somebody beats the benchmark by 0.1% — but if somebody comes up with a way to let normal people use deep learning to solve important and impactful problems that previously were unsolvable, no one hears about it. Don't worry about that, though, because if that's you and no one's hearing about it, you're still going to make billions of dollars. Like the internet: the companies that did internet-for-X were the ones that made billions of dollars. They weren't necessarily getting a whole lot of press for inventing a faster TCP stack, but they were the ones that actually impacted people's lives, and therefore they were the ones that made a difference and made a lot of money.

My final suggestion, if you're building your own deep learning tools or working on others', is that it's not enough just to show a prediction — you also have to show why. For example, when we say to a radiologist "we think this is a malignant tumor", that's not enough. Why is it a malignant tumor? If it were another radiologist telling them that, they would ask that question, so naturally they ask our algorithms the same question. And the great thing about deep learning is that we can show them. We can say: here's this tumor I'm looking at for you; in deep learning feature space, here are the five most similar ones; and here's what the biopsies done on those turned out to be. A radiologist can look at that evidence and say: ah, I see why these are similar — this actually has some of the same particular texture details as those tumors, and in all of those cases it turned out to be malignant. That's great evidence. So you don't need to go in and show them the 600 million weights; you can use deep learning to show people evidence and examples. Again, if you can't do this, then I think the application of your tool is going to be very limited, because you're asking people to just trust it.
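That "show me the five most similar cases" idea is, mechanically, just a nearest-neighbour lookup in the network's feature space. Here is a sketch under the assumption that you already have one feature vector per case — say, the activations of a late layer — with invented arrays and labels standing in for real cases.

```python
import numpy as np

def most_similar(query_vec, case_vecs, case_labels, k=5):
    # Cosine similarity between the query case and every case with a known outcome
    q = query_vec / np.linalg.norm(query_vec)
    c = case_vecs / np.linalg.norm(case_vecs, axis=1, keepdims=True)
    sims = c @ q
    top = np.argsort(-sims)[:k]
    return [(case_labels[i], float(sims[i])) for i in top]

rng = np.random.default_rng(0)
features = rng.normal(size=(100, 64))                   # pretend: one deep feature vector per case
biopsy = rng.choice(["benign", "malignant"], size=100)  # pretend: the known outcome for each case

# "Here are the five most similar cases, and what their biopsies showed"
print(most_similar(features[0], features[1:], biopsy[1:]))
```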
OK, so that's my starting point. For the rest of today you're going to learn about how to do deep learning; what I hope I've shown you is that deep learning is basically pretty simple, and it's also incredibly powerful — it lets us solve things that were previously unsolvable — so thinking about what you want to solve with it is a really good first step. I've gone five minutes over time, so maybe I'll spend two minutes on questions if anybody has any; otherwise come and see me, I'll be here all day. And of course, if you want to use deep learning to save lives and also make a lot of money, come and see me about working at Enlitic, because it's really fun, and there are quite a few Enlitic people here who can tell you the same thing. Yes, ma'am?

So — there is no unsupervised learning, really. People who try to convince you otherwise are tricking you, because what they actually do is say "look over here at this unsupervised model", which they're in fact training with a supervised objective. It's a great question, though, and this is actually the stuff that makes a difference — when people compute with deep learning the same way that today we compute with code, these are the things that will matter. What kinds of things do we spend time on at Enlitic? Things like: I have a whole bunch of data, say MRIs, and only for this little group do we know which ones had malignant tumors — so what do we do with all the rest? Traditionally this is what people would call unsupervised learning, but if you look at every example of unsupervised learning, what you actually do is invent a supervised problem to solve. The classic case is the so-called autoencoder. With an autoencoder, the goal of the supervised learning is to recreate the original image, but with a bottleneck layer — in other words, a layer with fewer neurons than the size of the original image. So you've turned it into a supervised problem, and the supervised problem is: recreate your input. There are other cases, like Siamese networks, which again create kind of arbitrary supervised problems, such as: figure out whether these two images are images of the same thing or not. Another example is the classic word2vec stuff that Google did, where the problem they solved was to take a hundred million eleven-word-long strings from books, make a copy of them, and in the copy replace the middle word of the eleven with a random word; the supervised problem was then to predict which were the original sentences and which were the randomized sentences. So the rule of thumb is: come up with as useful, or as semantically relevant, a fake problem to solve as you can. If you use the autoencoder one, you're going to end up with a neural network that has all kinds of features you don't care about — it's going to figure out how to exactly replicate the background, when you don't care about the background. So basically, for unsupervised learning, come up with an arbitrary problem to solve that has as much semantic similarity as possible to what you're actually going to use it for.

And yes, that's the second area we spend a lot of time on: transfer learning. With transfer learning you're basically saying: this is an MRI of the brain, but we've already done a hundred thousand MRIs of the lung — how do we take what we learnt about the lung and use it for the brain? Or you can do similar things with: we've already done English speech recognition, how do we do Chinese? This whole area of transfer learning is super powerful, because if you can do it effectively, you can basically turn all these unlabeled problems into partially labeled problems. It means that every time we look at a disease for which we have very little data, we can use all the data we have about similar diseases to help us.

That one is a largely solved problem now — largely solved. It's very similar to the way random forests solved the overfitting problem for structured learning: random forests solve it by creating lots and lots of decision trees, all of which are slightly, randomly bad. The main thing that's happened in deep learning is that each time you train your network, you randomly remove half the neurons, and that basically ensures that none of them can overfit — that's called dropout. And it turns out there are strong connections between that and simply taking your original input data and adding noise to it. So that's another thing you can do — add noise to your inputs. Things like that let you avoid overfitting: you look at your validation set, and if you are overfitting, you just increase the amount of dropout, or the amount of noise, until your validation set is working well.
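For what it's worth, dropout is only a couple of lines when you write it down. Here is a sketch of the usual "inverted dropout" variant — zero out a random half of a layer's activations and rescale the survivors; the rescaling and the example sizes are standard practice rather than anything specific from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p=0.5, training=True):
    if not training:
        return activations               # at test time, keep every neuron
    # Randomly "remove" a fraction p of the neurons on this pass;
    # rescale the survivors so the expected activation stays the same
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1 - p)

hidden = rng.normal(size=(4, 10))        # pretend: one layer's activations for a mini-batch
print(dropout(hidden))                   # a different random half is zeroed on every call

# The closely related trick for inputs: jitter the raw data instead of the activations
batch = rng.normal(size=(4, 10))
noisy_batch = batch + rng.normal(0, 0.1, size=batch.shape)
```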
Last one? So — the thing that's going to make deep learning useful for you is being able to use unstructured data that previously wasn't really amenable to analysis. All the experience from Kaggle competitions has shown us that the winners consistently found that adding metadata, or doing hand-engineering of the features, is not all that useful, and we've found the same thing at Enlitic with our medical data. So generally speaking, just take the raw data — take the images, or take the audio signals in their raw form; Baidu, for example, with their Chinese voice recognition now, the input data is just the waveform. The important thing is what you correlate it against — the dependent variable, or in medical speak, the ground truth. If we try to use the radiology report as the thing we match against, then we're going to make all the same mistakes a radiologist would make; if, on the other hand, we manage to get hold of the actual biopsy results, then we can make sure we train it to be better than a radiologist, because it's finding the truth. So generally I find it's not so much the input data where you should spend a lot of your thinking — keep that as raw and simple as possible — it's the ground-truth data that you need to be careful about.

All right — well, thank you so much everybody, thanks for your time, and I hope to see you around for the rest of the day.