Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we will try to create a copy of ourselves and place it into a virtual world. Now, this will be quite a challenge. Normally, to do this, we have to ask an artist to create a digital copy of us, which takes a lot of time and effort. But there may be a way out. Look, with this earlier AI technique, we can take a piece of target geometry and have an algorithm try to rebuild it to be used within a virtual world. The process is truly a sight to behold. Look at how beautifully it sculpts this piece of geometry until it looks like our target shape. This is wonderful. But wait a second, if we wish to create a copy of ourselves, we probably want it to move around too. This, however, is a stationary piece of geometry. No movement is allowed here. So, what do we do? What about movement? Well, have a look at this new technique. Getting a piece of geometry with movement cannot get any simpler than this: just do your thing, record it with a camera, and give it to the AI. And I have to say, I am a little skeptical. Look, this is what a previous technique could get us. This is not too close to what we are looking for. So, let's see what the new method can do with this data. And, uh oh, this is not great. So, is this it? Is the geometry cloning dream dead? Well, don't despair quite yet. This issue happens because our starting position and orientation are not known to the algorithm, but it can be remedied. How? Well, by adding additional data for the AI to learn from. And now, hold on to your papers and let's see what it can do now. And, oh my goodness, are you seeing what I am seeing? Our movement is now replicated in a virtual world almost perfectly. Look at that beautiful animation. Absolutely incredible. And, if even this is not good enough, look at this result too. So good. Loving it. And believe it or not, it has even more coolness up its sleeve. If you have been holding on to your papers so far, now squeeze that paper, because here comes my favorite part of this work. And that is a step that the authors call scene fitting. What is that? Essentially, what happens is that the AI re-imagines us as a video game character and sees our movement, but does not have an idea as to what our surroundings look like. What it does is that from this video data, it tries to reconstruct our environment, essentially recreating it as a video game level. And that is quite a challenge. Look, at first, it is not close at all. But, over time, it learns what the first obstacle should look like, but still, the rest of the level, not so much. Can this still be improved? Let's have a look together: as we give it some more time, and our character a few more concussions, it starts to get a better feel for the level. And it really works for a variety of difficult dynamic motion types. Cartwheels, backflips, parkour jumps, dance moves, you name it. It is a robust technique that can do it all. So cool. And note that the authors of the paper gave us not just the blueprints for the technique in the form of a research paper, but they also provide the source code of this technique to all of us, free of charge. Thank you so much. I am sure this will be a huge help in democratizing the creation of video games and all kinds of virtual characters. And if we add up all of these together, we get this. This truly is a sight to behold. Look, so much improvement, just one more paper down the line. 
And just imagine what we will be able to do a couple more papers down the line. Well, what do you think? Let me know in the comments below. This video has been supported by Weights & Biases. Check out their recent offering, Fully Connected, a place where they bring machine learning practitioners together to share and discuss their ideas, learn from industry leaders, and even collaborate on projects together. You see, I get messages from you fellow scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is. Fully Connected is a great way to learn about the fundamentals, how to reproduce experiments, get your papers accepted to a conference, and more. Make sure to visit them through wandb.me/papers or just click the link in the video description. Our thanks to Weights & Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to see what DeepMind's AI is able to do after being unleashed on the Internet and reading no less than two trillion words. And amusingly, it also thinks that it is a genius. So is it? Well, we are going to find out together today. I am really curious about that, especially given how powerful these recent AI language models are. For instance, OpenAI's GPT-3 language model can now write poems and even continue your stories. And even better, these stories can change direction and the AI can still pick them up and finish them. Recipes work too. So while OpenAI is writing these outstanding papers, I wonder what scientists at DeepMind are up to these days. Well, check this out. They have unleashed their AI, which they call Gopher, on the Internet and asked it to read as much as it can. That is two trillion words. My goodness, that is a ton of text. What did it learn from it? Oh boy, a great deal. But mostly, this one can answer questions. Hmm, questions? There are plenty of AIs around that can answer questions. Some can even solve a math exam straight from MIT. So why is this so interesting? Well, while humans are typically experts at one thing or very few things, this AI is nearly an expert at almost everything. Let's see what it can do together. For instance, we can ask a bunch of questions about biology, and it will not only be quite insightful, but it also remembers what we were discussing a few questions ago. That is not trivial at all. So cool. Now note that not all of its answers are completely correct. We will have a more detailed look at that in a moment. Also, what I absolutely loved seeing when reading the paper is that we can even ask what it is thinking. And look, it expresses that it wishes to play on its smartphone. Very human-like. Now, make no mistake, this does not mean that the AI is thinking like a human is thinking. At the risk of oversimplifying it, this is more like a statistical combination of things that it had learned that people say on the internet when asked what they are thinking. Now note that many of these new works are so difficult to evaluate because they typically do better on some topics than previous ones and worse on others. The comparison of these techniques can easily become a bit subjective, depending on what we are looking for. However, not here. Hold on to your papers and have a look at this. Oh wow, my goodness. Are you seeing what I am seeing? This is OpenAI's GPT-3, and this is Gopher. As you see, it is a great leap forward, not just here and there, but in many categories at the same time. Also, GPT-3 used 175 billion parameters to train its neural network. Gopher uses 280 billion parameters, and as you see, we get plenty of value for these additional parameters. So, what does all this mean? This means that as these neural networks get bigger and bigger, they are still getting better. We are steadily closing in on the human-level experts in many areas at the same time, and progress is still not plateauing. It still has more left in the tank. How much more? We don't know yet, but as you see, the pace of improvement in AI research is absolutely incredible. However, we are still not there yet. Its knowledge in the areas of humanities, social sciences, and medicine is fantastic, but at mathematics, of all things, not so much. You will see about that in a moment. 
And if you have been holding onto your papers so far, now squeeze that paper, because would you look at that? What is it that I am seeing here? Oh boy, it thinks that it is a genius. Well, is it? Let's ask some challenging questions about Einstein's field equations, black holes, and more, and find out. Hmm, well, it has a few things going for it. For instance, it has a great deal of factual knowledge. However, it can also get quite confused by very simple questions. Do geniuses mess up this multiplication? I should hope not. Also, have a look at this. We noted that it is not much of a math wizard. When asked these questions, it gives us an answer, and when we ask, are you sure about that? It says it is very confident. But it is confidently incorrect, I am afraid, because none of these answers are correct. So, a genius AI? Well, not quite yet. Human-level intelligence? Also not yet. But this is an incredible step forward, just one more paper down the line. And just imagine what we will be able to do just a couple more papers down the line. What do you think? Does this get your mind going? Let me know your ideas in the comments below. This episode has been supported by Lambda GPU Cloud. If you are looking for inexpensive cloud GPUs for AI, check out Lambda GPU Cloud. And get this, they've recently launched an NVIDIA RTX 8000 with 48GB of memory. And hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com/papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to see how Nvidia's new AI transfers real objects into a virtual world. So, what is going on here? Simple: the input is just one image or a set of images of an object. And the result is that an AI really transfers this real object into a virtual world almost immediately. Now that sounds like science fiction. How is that even possible? Well, with this earlier work, it was possible to take a target geometry from somewhere and obtain a digital version of it by growing it out of nothing. This work reconstructed the geometry really well. But the geometry only. This other work tried to reconstruct not just the geometry, but everything. For instance, the material models too. Now, incredible as this work is, it is still baby steps in this area. As you see, both the geometry and the materials are still quite coarse. So, is that it? Is the dream of transferring real objects into virtual worlds dead? It seems so. Why? Because we either have to throw out the materials to get a really high-quality result, or, if we wish to get everything, we have to be okay with a coarse result. But, I wonder, can this be improved somehow? Well, let's find out together. And here it is. Nvidia's new work tries to take the best of both worlds. What does that mean? Well, they promise to reconstruct absolutely everything. Geometry, materials, and even the lighting setup, and all of this with high fidelity. Well, that sounds absolutely amazing, but I will believe it when I see it. Let's see together. Well, that's not quite what we are looking for, is it? This isn't great, but this is just the start. Now, hold on to your papers and marvel at how the AI improves this result over time. Oh yes, this is getting better and... My goodness! After as little as two minutes, we already have a usable model. That is so cool. I love it. We go on a quick bathroom break, and the AI does all the hard work for us. Absolutely amazing. And it gets even better. Well, if we are okay with not a quick bathroom break, but with taking a nap, we get this. Just an hour later. And if that is at all possible, it gets even better than that. How is it possible? Well, imagine that we have a bunch of photos of a historical artifact, and you know what's coming. Of course, creating a virtual version of it and dropping it into a physics simulation engine, where we can even edit its material or embed it into a cloth simulation. How cool is that? And I can't believe it, it still doesn't stop there. We can even change the lighting around it and see what it would look like in all its glory. That is absolutely beautiful. Loving it. And if we have a hot dog somewhere and we already created a virtual version of it, now what do we do with it? Of course, we engage in the favorite pastime of the computer graphics researcher, that is, throwing jelly boxes at it. And with this new technique, you can do that too. And even better, we can take an already existing solid object and reimagine it as if it were made of jelly. No problem at all. And you know what? For one final pastime, let's not just reconstruct an object, why not throw an entire scene at the AI? See if it buckles. Can it deal with that? Let's see. And I cannot believe what I am seeing here. It resembles the original reference scene so well, even when animated, that it is almost impossible to find any differences. Have you found any? I have to say I doubt that, because I have swapped the labels. 
Oh yes, this one is not the reconstruction; this one is the real reconstruction. This will be an absolutely incredible tool in democratizing the creation of virtual worlds and putting it into the hands of everyone. Bravo, Nvidia. So, what do you think? Does this get your mind going? What else could this be useful for? What do you expect to happen a couple more papers down the line? Please let me know in the comments below. I'd love to hear your thoughts. What you see here is a report on this exact paper we have talked about, which was made by Weights & Biases. I put a link to it in the description. Make sure to have a look. I think it helps you understand this paper better. Weights & Biases provides tools to track your experiments in your deep learning projects. Using their system, you can create beautiful reports like this one to explain your findings to your colleagues better. It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub, and more. And the best part is that Weights & Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com/papers or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we will see how and what DeepMind's AI is able to learn from a human, but with a twist. And the twist is that we are going to remove the human teacher from the game and see what happens. Is it just memorizing what the human does, or is this AI capable of independent thought? Their previous technique did something like that, like guiding us to the bathroom, starting a band together, or even finding out the limitations of the physics within this virtual world. However, there is a big difference. That previous technique had time to study. This new one, not so much. This new one has to do the learning on the job. Oh yes, in this project we will see an AI that has no pre-collected human data. It really has to learn everything on the job. So can it? Well, let's see together. Phase 1: pandemonium. Here the AI just, well, does things. I am not so sure it knows what. Occasionally it gets a point. But now, look. Uh-oh, the red teacher is gone. And oh boy, it gets very confused. It does not have much of an idea what is going on. But a bit of learning happens, and later comes phase 2: following. It is still not sure what is going on, but when it follows this red chap, it realizes that it is getting a much higher score. Before, it only got two points, and now look at it. It is learning something new here. And look, he is gone again, but it knows what to do. Well, kind of. It still probably wonders why its score is decreasing. So, a bit later, phase 3: memorization. It does the usual dance with the demonstrator, but now, when he is gone, it knows exactly what to do and just keeps improving its score. And then comes the crown jewel. Phase 4: independence. No demonstrator anywhere to be seen. This is the final exam. It has to be able to independently solve the problem. But here comes the twist. For the previous phase, we said memorization, and for this one we say independence. Why is this different? Well, look, do you see the difference? Hold on to your papers, because we have switched up the colors. So the previous strategy is suddenly useless. Oh yes, if it walks the same path as before, it will not get a good score, and initially that's exactly what it is trying to do. But over time, it is now able to learn independently and indeed find the correct order by itself. And what I absolutely loved here is, look, over time the charts verify that indeed, as soon as we take away the teacher, it starts using different neurons, right after becoming an independent entity. I love it. What an incredible chart. And all this is excellent news. So, if it really has an intelligence of sorts, it has to be able to deal with previously unseen conditions and problems. That sounds super fun. Let's explore that some more. For instance, let's give it a horizontal obstacle. Good. Not a problem. Vertical? That's also fine. Now let's make the world larger, and it is still doing well. Awesome. So, I absolutely love this paper. DeepMind demonstrated that they can build an AI that learns on the job, one that is even capable of independent thought, and, even better, one that can deal with unforeseen situations too. What a time to be alive. So what do you think? What else could this be useful for? What do you expect to happen a couple more papers down the line? Please let me know in the comments below. I'd love to hear your thoughts. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive cloud GPUs for AI, check out Lambda GPU Cloud. 
And get this, they've recently launched an NVIDIA RTX 8000 with 48GB of memory. And hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com/papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to see what DeepMind's amazing AI can learn from humans doing silly things, like playing the drums with a comb. Really, we have to do that? Well, okay, if you insist, there you go. Well, if it has seen enough of these instruction-video pairs, hopefully it will be able to learn from them. And here comes the fundamental question we are here to answer today. If this AI does something, is this imitation, or is this learning? Imitation is not that impressive, but if it can learn general concepts from examples like this, now that would perhaps be a little spark of intelligence. That is what we are looking for. So, how much did it learn? Well, let's see it together. Nice! Look at that! If we ask it, it can now show us where a specific room is. And if we get a little lost, it even helps us with additional instructions to make sure we find our way to the bathroom. Now note that the layout of this virtual playhouse is randomized each time we start a new interaction, therefore it cannot just imitate the direction people go when they say bathroom. That doesn't work, because next time it will be elsewhere. It actually has to understand what a bathroom is. Good job, little AI. It also understands that we are holding a bucket and that it should put the grapes in it. Okay, so it learned from instructions and it can deal with instructions. But if it had learned general concepts, it should be able to do other things too. You know what? Let's ask some questions. First, how many grapes are there in this scene? Two, yes, that is correct. Good job! Now, are they the same color or different? It says different. I like it. Because to answer this question, it has to remember what we just talked about. Very good. And now, hold on to your papers and check this out. We ask what color the grapes are on the floor. And what does it do? It understands what it needs to look at to gather the required information. Yes, it has a look around and answers the question. A little virtual assistant AI. I love it. But it isn't just an assistant. Here come two of my favorite tasks from the paper. One: we dreamed up a catchy tune and we just need a bandmate to make it happen. Well, say no more. This little AI is right there to help. So cool. And it gets better. Two: get this, it even knows about its limitations. For instance, it knows that it can't flip the green box over. This can only arise from knowledge and experimentation with the physics of this virtual world. This will be fantastic for testing video games. So, yes, the learning is indeed thorough. Now, let's answer two more super important questions. One, how quickly does the AI learn? And two, is it as good as a human? So, learning speed. Let's look under the hood together and... oh yes, yes, yes, I am super happy. You are asking, Károly, why are you super happy? I am super happy because learning is already happening by watching humans mingle for about 12 minutes. That is next to nothing. Amazing. And one more thing that makes me perhaps even happier, and that is that I don't see a sudden spike in the growth, I see a nice linear growth instead. Why is that important? Well, this means that there is a higher chance that the algorithm is slowly learning general concepts and starts understanding new things that we might ask it. 
And if it were a sudden spike, there would be a higher chance that it had seen something similar to a human clearing a table in the training set and suddenly started copying it. But with this kind of growth, there is less of a chance that it is just copying the teacher. I love it. And yes, this is the corner of the internet where we get unreasonably excited by a blue line. Welcome to Two Minute Papers. Subscribe and hit the bell icon if you wish to see more amazing works like this. And if we look at the task of lifting a drum, oh yes, a nice linear growth again, although at a slower pace than the previous task. Now let's look at how well it does its job. This is Team Human, and MIA is Team AI. And it has an over 70% success rate. And remember, many of these tasks require not imitation, but learning. The AI sees some humans mingling with these scenes, but to be able to answer the questions, it needs to think beyond what it has seen. Once again, that might be perhaps a spark of intelligence. What a time to be alive. But actually, one more question. A virtual assistant is pretty cool, but why is this so important? What does this have to do with our lives? Well, have a look at how OpenAI trained a robot hand in a simulation to be able to rotate these Rubik's Cubes. And then deployed the software onto a real robot hand, and look. It can use the simulation knowledge, and now it works in the real world too. But sim-to-real is relevant to self-driving cars too. Look, Tesla is already working on creating virtual worlds and training their cars there. One of the advantages of that is that we can create really unlikely and potentially unsafe scenarios, but in these virtual worlds, the self-driving AI can train itself safely. And when they deploy them into the real world, they will have all this knowledge. Waymo is running similar experiments too. And I also wanted to thank you for watching this video. I truly love talking about these amazing research papers, and I am really honored to have so many of you fellow scholars who are here every episode enjoying these incredible works with me. Thank you so much. So I think this one also has great potential for a sim-to-real situation: learning to help humans in a virtual world and perhaps uploading the AI to a real robot assistant. So, what do you think? What else could this be useful for? What do you expect to happen a couple more papers down the line? Please let me know in the comments below. I'd love to hear your thoughts. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive cloud GPUs for AI, check out Lambda GPU Cloud. Get this, they've recently launched an NVIDIA RTX 8000 with 48GB of memory. And hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com/papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Finally, today, DeepMind's amazing AI, MuZero, that plays chess and other games, has now entered the real world and has learned to solve important real-world problems. This is a reinforcement learning technique that works really well on games. Why? Well, in chess, Go, and StarCraft, the controls are clear. We use the mouse to move our units around or choose where to move our pieces. And the score is also quite clear. We get rewarded if we win the game. That is going to be our score. To say that it does these really well would be an understatement. DeepMind's MuZero is one of the best in the world in chess and Go, and StarCraft too. But one important question still remains. Of course, they did not create this AI to play video games. They created it to be able to become a general-purpose AI that can solve not just games, but many problems. The games are just used as an excellent testbed for this AI. So, what else can it do? Well, finally, here it is. Hold on to your papers, because scientists at DeepMind decided to start using the MuZero AI to create a real solution to a very important problem: video compression. And here comes the twist. They said, let's imagine that video compression is a video game. Okay, that's crazy. But let's accept it for now. But then, two questions. What are the controls, and what is the score? How do we know if we won video compression? Well, the video game controller in our hand will be choosing the parameters of the video encoder for each frame. Okay, but there needs to be a score. So, what is the score here? How do we win? Well, we win if we are able to choose the parameters such that the quality of the output video is as good as with the previous compression algorithm, but the size of the video is smaller. The smaller the output video, the better. That is going to be our score. And it also uses self-competition, which is now a popular concept in video game AIs. This means that the AI plays against previous versions of itself, and we measure its improvement by it being able to defeat these previous versions. If it can do that reliably, we can conclude that yes, the AI is indeed improving. This concept worked for boxing, playing catch, and StarCraft, and I wonder how this would work for video compression. Well, let's see. Let's immediately drop it into deep waters. Yes, we are going to test this against a mature state-of-the-art video compression algorithm that you are likely already using this very moment as you are watching this on YouTube. Well, good luck, little AI. But I'll be honest, there is not much hope here. These traditional video compression algorithms are a culmination of decades of ingenious human research. Can a newcomer AI beat that? I am not sure. And now hold on to your papers and let's see together. How did it go? So, a 4% difference. A learning-based algorithm that is just 4% worse than decades of human innovation, that is great. But wait a second, it's actually not worse. Can it be? Yes, it is not 4% worse, it is even 4% better. Holy mother of papers, that is absolutely incredible. Yes, this is the corner of the internet where we get super excited by a 4% better solution and understand why that matters a great deal. Welcome to Two Minute Papers. But wait, we are experienced fellow scholars over here, we know that it is easy to be better by 4% in size at the cost of decreased quality. But having the same quality and saving 4% is insanely difficult. So which one is it? 
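Before we check, here is a minimal sketch of what the "video compression as a game" framing above can look like in code. This is not DeepMind's implementation; the RateControlEnv class, the toy encode_frame stand-in, and the exact reward shaping are all hypothetical, chosen only to make the controls (one encoder parameter per frame) and the score (bits saved at acceptable quality) concrete.

```python
import random

def encode_frame(frame, qp):
    """Toy stand-in for a real encoder: a higher quantization parameter (qp)
    spends fewer bits but loses quality. Purely illustrative; a real system
    would call an actual codec such as VP9 here."""
    bits = max(1_000, 100_000 // (qp + 1))
    quality = 50.0 - 0.8 * qp + random.uniform(-1.0, 1.0)
    return bits, quality

class RateControlEnv:
    """Hypothetical RL environment: the 'game' is choosing encoder parameters."""
    def __init__(self, frames, baseline_bits, quality_floor):
        self.frames = frames                # frames of the video to encode
        self.baseline_bits = baseline_bits  # bits a reference encoder spent per frame
        self.quality_floor = quality_floor  # minimum acceptable quality
        self.t = 0

    def step(self, qp):
        """One move of the game: pick qp for the current frame, get a score."""
        bits, quality = encode_frame(self.frames[self.t], qp)
        baseline = self.baseline_bits[self.t]
        self.t += 1
        done = self.t == len(self.frames)
        if quality < self.quality_floor:
            reward = -1.0                          # we 'lose' the frame if quality drops too far
        else:
            reward = (baseline - bits) / baseline  # fraction of bits saved vs. the baseline
        return reward, done

# A random player; a learned agent such as MuZero would choose qp instead.
env = RateControlEnv(frames=list(range(10)), baseline_bits=[60_000] * 10, quality_floor=30.0)
score, done = 0.0, False
while not done:
    reward, done = env.step(qp=random.randint(10, 40))
    score += reward
print("episode score:", score)
```

In the real system, the agent would also play against previous versions of itself, as described above, and the quality and size would come from an actual video encoder rather than the toy model here.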
Let's look together. I am flicking between the state of the art and the new technique, and yes, my goodness, the results really speak for themselves. So let's look a bit under the hood and see some more about the decisions the AI is making. Whoa, that is really cool. So what is this? Here we see the scores for the previous technique and the new AI, and here they appear to be making similar decisions on this cover song video, but the AI makes somewhat better decisions overall. That is very cool. But look at that. In the second half of this gaming video, MuZero makes vastly different and vastly better decisions. I love it. And to have a first crack at such a mature problem and manage to improve it immediately, that is almost completely unheard of. Yet they have done it with protein folding, and now they seem to have done it for video compression too. Bravo, DeepMind. And note the meaning of the magnitude of the difference here. OpenAI's DALL-E 2 was this much better than DALL-E 1. That's not 4% better. If that was a percentage, this would be several hundred percent better. So why get so excited about 4%? Well, the key is that 3 to 4% more compression is incredible given how polished the state-of-the-art techniques are. VP9 compressors are not some first crack at the problem. No, no. This is a mature field with decades of experience, where every percent of improvement requires blood, papers, and tears, and of course lots of compute and memory. And this is just the first crack at the problem for DeepMind, and we get not 1%, but 4%, essentially for free. That is absolutely amazing. My mind is blown by this result. Wow. And I also wanted to thank you for watching this video. I truly love talking about these amazing research papers, and I am really honored to have so many of you fellow scholars who are here every episode enjoying these incredible works with me. It really means a lot. Every now and then I have to pinch myself to make sure that I really get to do this every day. Absolutely amazing. Thank you so much. So what do you think? What else could this be useful for? What do you expect to happen a couple more papers down the line? Please let me know in the comments below. I'd love to hear your thoughts. This episode has been supported by Cohere AI. Cohere builds large language models and makes them available through an API, so businesses can add advanced language understanding to their system or app quickly, with just one line of code. You can use your own data, whether it's text from customer service requests, legal contracts, or social media posts, to create your own custom models to understand text or even generate it. For instance, it can be used to automatically determine whether your messages are about your business hours, returns, or shipping, or it can be used to generate a list of possible sentences you can use for your product descriptions. Make sure to go to cohere.ai/papers or click the link in the video description and give it a try today. It's super easy to use. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. AI research is improving at such a rapid pace that today we can not only create virtual humans, but even exert some artistic control over how these virtual humans should look. But today we can do something even cooler. What can be even cooler than that? Well, have you ever seen someone with a really cool haircut and wondered what you would look like with it? Would it be ridiculous, or would you be able to pull it off? Well, hold on to your papers, because you don't have to wonder anymore. With this new AI, you can visualize that yourself. Hmm, okay, but wait a second. Previous methods were already able to do that, so what is new here? Well, yes, but look at the quality of the results. In some cases it is quite easy to find out that these are synthetic images, while for other cases so many of the fine details of the face get lost in the process that the end result is not that convincing. So is that it? Is the hairstyle synthesis dream dead? Well, if you have been holding on to your papers, now squeeze that paper and have a look at this new technique. And wow, my goodness, just look at that. This is a huge improvement, just one more paper down the line. And note that these previous methods were not some ancient techniques. No, no, these are from just one and two years ago. So much improvement in so little time. I love it. Now let's see what more it can do. Look, we can choose the hairstyle we are looking for, and yes, it doesn't just give us one image to work with. No, it gradually morphs our hair into the desired target shape. That is fantastic, because even if we don't like the final hairstyle, one of those intermediate images may give us just what we are looking for. I love it. And it has one more fantastic property. Look, most of the details of the face remain in the final results. Whew, we don't even have to morph into a different person to be able to pull this off. Now, interestingly, look, the eye color may change, but the rest seems to be very close to the original. Not perfect, but very close. And did you notice? Yes, it gets better. Way better. We can even choose the structure, this would be the hairstyle itself, which can come from one image, and the appearance, which is more like the hair color. So does that mean that... yes, we can even choose them separately. That is, take two hairstyles and fuse them together. Now have a look at this too. This one is an exclusive example; to the best of my knowledge, you can only see this here on Two Minute Papers. A huge thank you to the authors for creating these just for us. And I must say that we could stop on any of these images and I am not sure if I would be able to tell that it is synthetic. I may have a hunch due to how flamboyant some of these hairstyles are, but not due to quality concerns, that's for sure. Even if we subject these images to closer inspection, the visual artifacts of the previous techniques around the boundaries of the hair seem to be completely gone. And I wonder what this will be able to do just a couple more papers down the line. What do you think? I'd love to know, let me know in the comments below. What you see here is a report on this exact paper we have talked about, which was made by Weights & Biases. I put a link to it in the description. Make sure to have a look, I think it helps you understand this paper better. Weights & Biases provides tools to track your experiments in your deep learning projects. 
Using their system, you can create beautiful reports like this one to explain your findings to your colleagues better. It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub, and more. And the best part is that Weights & Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com/papers or just click the link in the video description, and you can get a free demo today. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to take a light simulation algorithm that doesn't work too well and infuse it with a powerful neural network. And what happens then? This happens. Wow! So, what was all this about? Well, when we fire up a classical light simulation program, first we start out from, yes, absolutely nothing. And look, over time, as we simulate the path of more and more light rays, we get to know something about the scene. And slowly, over time, the image cleans up. But there is a problem. What is the problem? Well, look, the image indeed cleans up over time. But no one said that this process would be quick. In fact, this can take from minutes in the case of this scene to, get this, even days for this scene. In our earlier paper, we rendered this beautiful, but otherwise sinfully difficult scene, and it took approximately three weeks to finish. And it also took several computers running at the same time. However, these days, neural network-based learning methods are already capable of doing light transport. So, why not use them? Well, yes, they are, but they are not perfect. And we don't want an imperfect result. So, scientists at Nvidia said that this might be the perfect opportunity to use control variates. What are those? And why is this the perfect opportunity? Control variates are a way to inject our knowledge of a scene into the light transport algorithm. And here's the key: any knowledge is useful as long as it gives us a head start, even if this knowledge is imperfect. Here is an illustration of that. What we do is that we start out using that knowledge and make up for the differences over time. Okay, so how much of a head start can we get with this? Normally, we start out from a black image and then a very noisy image. Now, hold on to your papers and let's see what this can do to help us. Wow! Look at that! This is incredible. What you're looking at is not the final rendering. This is what the algorithm knows. And instead of the blackness, we can start out from this. Goodness, it turns out that this might not have been an illustration, but the actual knowledge the AI has of the scene. So cool! Let's look at another example. In this bedroom, this will be our starting point. My goodness! Is that really possible? The bed and the floor are almost completely done. The curtain and the refractive objects are noisy, but do not worry about those for a second. This is just a starting point, and it is still way better than starting out from a black image. To be able to visualize what the algorithm has learned with a little more finesse, we can also pick a point in space and learn what the world looks like from this point and how much light scatters around it. And we can even visualize what happens when we move this point around. Man, this technique has a ton of knowledge about these scenes. So, once again, just to make sure: we start out not from a black image, but from a learned image, and now we don't have to compute all the light transport in the scene, we just need to correct the differences. This is so much faster. But actually, is it? Now let's look at how this helps us in practice. And that means, of course, equal-time comparisons against previous light transport simulation techniques. And when it comes to comparisons, you know what I want? Yes, I want Eric Veach's legendary scene. 
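Before we get to that scene, a quick aside: here is a minimal numerical sketch of the control variate idea described above, with a toy one-dimensional integrand standing in for a full light transport simulation. The functions f and g below are made up for illustration and are not from the paper.

```python
import math
import random

# Toy control variate example: estimate the integral of f on [0, 1] with Monte Carlo,
# using an imperfect approximation g whose integral G we know exactly (the "head start").

def f(x):
    # The "true" integrand; in rendering this would be the expensive light transport term.
    return math.sin(x) + 0.1 * x * x

def g(x):
    # Our imperfect knowledge of f; a crude approximation of sin(x) near zero.
    return x

G = 0.5  # the integral of g on [0, 1], known in closed form

def naive_estimate(n):
    # Start from nothing and average samples of f directly.
    return sum(f(random.random()) for _ in range(n)) / n

def control_variate_estimate(n):
    # Start from the known head start G and only estimate the (small) difference f - g.
    samples = (random.random() for _ in range(n))
    correction = sum(f(x) - g(x) for x in samples) / n
    return G + correction

print(naive_estimate(1_000))
print(control_variate_estimate(1_000))
```

Both estimators converge to the same answer, but the second has much less variance whenever g is a decent approximation of f, which is exactly the "start from the knowledge and make up for the differences" idea. The paper's twist, as we will see, is that g is not a hand-picked formula but a neural network that learns what the scene looks like, so the head start is already close to the final image.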
See, this is an insane scene where all the light is coming from not here, but the neighboring room, through a door that is just slightly ajar. And the path tracer, the reference light transport simulation technique, behaves as expected. Yes, we get a very noisy image, because very few of the simulated rays make it to the next room. Thus, most of our computation is going to waste. This is why we get this noisy image. And let's have a look at what the new method can do in the same amount of time. Wow! Are you seeing what I am seeing? This is completely out of this world. My goodness! But this was a pathological scene, designed to challenge our light transport simulation algorithms. What about a more typical outdoor scene, with tons of incoming light from every direction, and my favorite, water caustics? Do we get any advantages here? Hmm, the path tracer is quite noisy. This will take quite a bit of time to clean up. Whereas the new technique, oh my, that is a clean image. How close is it to the fully converged reference image? Well, you tell me, because you are already looking at that. Yes, now we are flicking between the new technique and the reference image, and I can barely tell the difference. There are some, for instance, here, but that seems to be about it. Can you tell the difference? Let me know in the comments below. Now, let's try a bathroom. Lots of shiny surfaces and specular light transport. And the results are... Look at that! There is no contest here. A huge improvement across the whole scene. And believe it or not, you still haven't seen the best results yet. Don't believe it? Now, if you have been holding onto your papers, squeeze that paper and look at the art room example here. This is an almost unusable image with classical light transport. And are you ready? Well, look at this. What in the world? This is where I fell off the chair when I was reading this paper. Absolute madness. Look, while the previous technique is barely making a dent into the problem, the new method is already just a few fireflies away from the reference. I can't believe my eyes. And this result is not just an anomaly. We can try a kitchen scene and draw similar conclusions. Let's see. Now we're talking. I am out of words. Now, despite all these amazing results, of course, not even this technique is perfect. This tourist was put inside a glass container, and it is a nightmare scenario for any kind of light transport simulation. The new method successfully harnesses our knowledge about the scene and accelerates the process a great deal. But once again, we get fireflies. These are going to be difficult to get rid of and will still take a fair bit of time. But my goodness, if this is supposed to be a failure case, then yes, sign me up right now. Once again, the twist here is not just to use control variates, the initial knowledge thing, because in and of itself, that is not new. I, like many others, have been experimenting with this method back in 2013, almost 10 years ago, and back then, it was nowhere near as good as this one. So, what is the twist then? The twist is to use control variates and infuse them with a modern neural network. Infusing previous techniques with powerful learning-based methods is a fantastic area of research these days. For instance, here you see an earlier result, an ancient light transport technique called radiosity. This is what it was capable of back in the day. And here is the neural network-infused version. Way better. 
I think this area of research is showing a ton of promise, and I'm so excited to see more in this direction. So, what do you think? What would you use this for? I'd love to hear your thoughts, please let me know in the comments below. And when watching all these beautiful results, if you feel that this light transport thing is pretty cool and you would like to learn more about it, I held a master-level course on this topic at the Technical University of Vienna. Since I was always teaching it to a handful of motivated students, I thought that the teachings shouldn't only be available for the privileged few who can afford a college education, but that the teachings should be available for everyone. Free education for everyone, that's what I want. So, the course is available free of charge for everyone, no strings attached, so make sure to click the link in the video description to get started. We write a full light simulation program from scratch there and learn about physics, the world around us, and more. If you watch it, you will see the world differently. This video has been supported by Weights & Biases. Look at this, they have a great community forum that aims to make you the best machine learning engineer you can be. You see, I always get messages from you fellow scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is: in this forum you can share your projects, ask for advice, look for collaborators, and more. Make sure to visit wandb.me/paper-forum and say hi, or just click the link in the video description. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to have an AI write a story for us, and even do our coding exam for us. And the coolest thing is that both of these will be done by the same AI. How? Well, OpenAI's GPT-3 technique is capable of all kinds of wizardry as long as it involves text processing. For instance, finishing our sentences or creating plots, spreadsheets, mathematical formulae, and many other things. Meanwhile, their DALL-E 2 AI is capable of generating incredible quality images from a written description, even if they are too specific. Way too specific. Now, today we are going to play with GPT-3, the text AI that just gained an amazing new capability. And that is editing and inserting text. And if you think that this doesn't sound like much, well, check out these examples. Let's start a story. Today is the big day, it says. Now, the AI sees that the name of the section is high school graduation, and it infers quite intelligently that this might be a message from a school to their graduating students. So far so good. But this was possible before too. So, what's new here? Well, hold on to your papers, and now let's do this. Oh, yes. Change the story. Now we don't know why this is the big day, but we know that the next section will be about moving to San Francisco. And oh my, look at that. It understood the whole trick and rewrote the story accordingly. But, you know what? If we wish to move to Istanbul instead, not a problem. The AI gets that one too. Or, if we have grown tired of city life, we can move to a farm too, and have the AI write out a story for that. Okay, that was fantastic. But when it comes to completion, I have an idea. Listen, how about pretending that we know how to make hot chocolate? Fill in the last step, which is, you know, not a great contribution, and let the AI do the rest of the dirty work. And can it do that? Wow, it knows exactly that this is a recipe, and it knows how to fill it in correctly. I love it. The "put hot chocolate" part is a bit of a cop-out, but you know what? I'll take it. From the recipe, it sounds like a good one. Cheers. Now, let's write a poem about GPT-3. Here we go. It rhymes too. Checkmark. And now, let's ask the AI to rephrase it in its own voice. And yes, it is able to do that while carefully keeping the rhyme intact. And when we ask it to sign it, we get a letter to humanity. How cool is that? Onwards to coding. And this is where things get really interesting. Here is a piece of code for computing Fibonacci numbers. Now translate it into a different programming language. Wow, that was quick. Now rewrite it to do the same thing, but the code has to fit on one line. And here comes my favorite: improve the runtime complexity of the code. In other words, make it more efficient. And there we go. Now, clearly, you see that this is a simple assignment. Yes, but make no mistake, these are fantastic signs that an AI is now capable of doing meaningful programming work. And if you have a friend who conducts coding interviews, show this to them. I reckon they will say that they have met plenty of human applicants who would do worse than this. If you have your own stories, you know what to do. Of course, leave them in the comments section. I'd love to hear them. And if you have been holding on to your papers so far, now squeeze that paper, because here comes the best part. We will fuse the two previous examples together. What does that mean? 
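Before we get to that, a quick aside on what that runtime complexity improvement means in practice. This is my own illustrative example, not the exact code from OpenAI's demo: a naive recursive Fibonacci takes exponential time, while a rewritten version runs in linear time.

```python
# Illustrative only: the kind of rewrite the "improve the runtime complexity"
# request is about, not the exact code shown in the demo.

def fib_naive(n: int) -> int:
    """Exponential time: recomputes the same subproblems over and over."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_fast(n: int) -> int:
    """Linear time: keeps only the last two values around."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert fib_naive(10) == fib_fast(10) == 55
```

The one-line version mentioned above could then be, for instance, a single closed-form expression or a compact loop; the point is that the model can restructure working code rather than just autocomplete it.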
Well, we will use the hot chocolate playbook and apply it to coding. Let's construct the start and the end of the code and hand it over to the AI. And yes, once again, it has a good idea as to what we are trying to do. It carefully looks at our variable names and fills in the key part of the code in a way that no one will be able to tell that we haven't been doing our work properly. What a time to be alive. So, an AI that can write a story for us, even in a way where we only have to give it the scaffolding of the story and it fills in the rest. I love it. So cool. And it can help us with our coding problems and even help us save a ton of time massaging our data into different forms. What an incredible step forward in democratizing these amazing AIs. So, what would you use this for? What do you expect to happen a couple more papers down the line? Please let me know in the comments below. I'd love to hear your thoughts. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive cloud GPUs for AI, check out Lambda GPU Cloud. Get this, they've recently launched an NVIDIA RTX 8000 with 48GB of memory. And hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com/papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, I am so excited to show you this. Look, we are going to have an AI look at 650 million images on the internet and then ask it to generate the craziest synthetic images I have ever seen. And wait, it gets better: we will also see what this AI thinks it looks like. Spoiler alert, it appears to be cuddly. You'll see. So, what is all this about? Well, in June 2020, OpenAI created GPT-3, a magical AI that could finish your sentences, and, among many incredible examples, it could generate website layouts from a written description. This opened the door for a ton of cool applications, but note that all of these applications are built on understanding text. However, no one said that these neural networks can only deal with text information, and sure enough, a few months later, scientists at OpenAI thought that if we can complete text sentences, why not try to complete images too? And thus, Image GPT was born. The problem statement was simple: we give it an incomplete image and we ask the AI to fill in the missing pixels. If we give it this image, it understood that these birds are likely standing on something, and it even has several ideas as to what that might be. Look, a branch, a stone, or they can even stand in water, and amazingly, even their mirror images are created by the AI. But then, scientists at OpenAI thought, why not have the user write a text description and give them a really well-done image of exactly that? That sounds cool, and it gets even cooler the crazier the ideas we give to it. The name of this technique is a mix of Salvador Dalí and Pixar's WALL-E. So, please meet DALL-E. This one could do a ton. For instance, it understands styles and rendering techniques. Being a computer graphics person, I am so happy to see that it learned the concept of low polygon count rendering, isometric views, clay objects, and we can even add an x-ray view to the owl. Kind of. And now, just a year later, would you look at that? Oh, wow! Here is DALL-E 2. Oh, my! I cannot tell you how excited I am to have a closer look at the results. Let's dive in together. So, what can it do? Well, that's not the right question. By the end of this video, I bet you will think that the more appropriate question would be, what can't it do? This one can take descriptions that are so specific, I would say, that perhaps even a good human artist might have trouble with them. Now, hold on to your papers and have a look at ten of my favorite examples. One, a panda mad scientist mixing sparkling chemicals. Wow! Look at that! This is something else. It even has sunglasses for extra street cred, and the reflections of the questionable substance it is researching are also present on its sunglasses. But the mad science doesn't stop there. Two, teddy bears mixing sparkling chemicals as mad scientists. But at this point, we already know that doing this would be too easy for the AI. So, let's do it in multiple styles. First, steampunk, second, 1990s Saturday morning cartoon, and third, digital art. It can pull off all of these. Three, now, about variants. Give me a teddy bear on a skateboard in Times Square. Now, this is interesting for multiple reasons. For instance, you see that it can generate a ton of variants. That is fantastic. As a light transport researcher, I cannot resist mentioning how nice a depth of field effect it is able to make. And, would you look at that? 
It also knows about the highly sought after signature effect of the lights blurred into these beautiful balls in the background. The AI understands these bokeh balls and the fact that it can harness this kind of knowledge is absolutely amazing. Four, and if you think that it's too specific, you have seen nothing yet. Check this out. An expressive oil painting of a basketball player dunking, depicted as an explosion of a nebula. I love it. It also has a nice Space Jam quality to it. Well done little AI. So good. Five, you know what? I want even more specific and even more ridiculous images. A propaganda poster depicting a cat dressed as the French emperor Napoleon holding a piece of cheese. Now that is way too specific. Nobody can pull that off. There is no way that... Wow! I think we have a winner here. When the next election is coming up, you know what to do. Six, and still, once again, believe it or not, you have seen nothing yet. We can get even more specific. So much so that we can even edit an image that is already done. For instance, if we feel that this image is missing a flamingo, we can request that it is placed there, but we can even specify the location for it. And... Even the reflections are created for it, and they are absolutely beautiful. Now note that I think if there are reflections here, then perhaps there should have been reflections here too. A perfect test for one more paper down the line, when DALL-E 3 arrives. Make sure to subscribe and hit the bell icon. You really don't want to miss it. Seven, this one puts on a clinic in understanding the world around us. This image is missing something. Missing what? Well, of course, corgis. And I cannot believe this. If we specify the painting as the location, it will not only have a painterly style, but one that already matches the painting on the wall. This is true for the other painting too. This is incredible. I absolutely love it. And the last test, does it work outside of the painting too? Yes, it does. If we are outside of the painting, at the photo part, this good boy becomes photorealistic. Requesting variants is also a possibility here. So, what do you think? Which is the best boy? Let me know in the comments below. And number eight. If we can place any object anywhere, this is an excellent tool to perform interior design. We can put a couch wherever we please, and I am already looking forward to inspecting the reflections here. Oh yes, this is very difficult to compute. This is not a matte diffuse object, and not even a mirror-like specular surface, but a glossy reflection that is somewhere in between. But it gets worse. This is a textured object, which also has to be taken into consideration and proper shadows also have to be generated in a difficult situation where light comes from a ton of different directions. This is a nightmare, and the results are not perfect, but my goodness, if this is not an AI that has a proper understanding of the world around us, I don't know what is. Absolutely incredible progress in just one year. I cannot believe my eyes. You know what? Number nine. Actually, let's look at how much it has improved since DALL-E 1, side by side. Oh my. Now, there is no contest here. DALL-E 2 is on a completely different level from its first iteration. This is so much better, and once again, such improvement in just a year. What a time to be alive. And what do you think, if DALL-E 3 appears, what will it be capable of? What would you use this, or DALL-E 3, for? Please let me know in the comments below. I'd love to know what you think. 
Now, of course, not even DALL-E 2 is perfect. Look at that. Number 10. Inspect the pictures and tell me what you think the prompt for this must have been. Not easy, right? Let me know your tips in the comments below. Well, it was a sign that says deep learning. Well, A plus for effort, little AI, but this is certainly one of the failure cases. And you know what? I cannot resist. Plus one. If you have been holding onto your papers, now squeeze that paper at least as hard as these koalas are squeezing their papers. So, which one are you? Which one resembles your reaction to this paper the best? Let me know in the comments below. And yes, as promised, here is what it thinks of itself. It is very soft and cuddly. Or at least it wants us to think that it is so. Food for thought. And if you speak robot and have any idea what this writing could mean, make sure to let me know below. And one more thing. We noted that this AI was trained on 650 million images and uses 3.5 billion parameters. These are not rookie numbers by any stretch of the imagination. However, I am hoping that with this, there will be a chance that other independent groups will also be able to train and use their own DALL-E 2. And just in the last few years, OpenAI has given us legendary papers. For instance, an AI that can play hide and seek, solve math tests, or play a game called Dota 2 on a world champion level. Given these, I hereby appoint DALL-E into the pantheon of these legendary works. And I have to say, I am super excited to see what they come up with next. This episode has been supported by Lambda GPU Cloud. If you are looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. And get this, they've recently launched an NVIDIA RTX 8000 with 48GB of memory. And hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
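For readers curious about the machinery behind systems like this one: DALL-E 2's image decoder belongs to the family of diffusion models, which turn pure noise into an image by repeatedly removing predicted noise. Below is a toy sketch of that sampling loop, not the actual DALL-E 2 code; `noise_predictor` is a stand-in for a trained, text-conditioned network, and the noise schedule values are illustrative.

```python
import numpy as np

# Toy sketch of the denoising loop behind diffusion-based image generators.
# `noise_predictor` stands in for a large trained network that guesses the noise
# in an image given the timestep and a text embedding.

def noise_predictor(x, t, text_embedding):
    # Placeholder: a real model would be a text-conditioned U-Net.
    return np.zeros_like(x)

def sample_image(shape, text_embedding, timesteps=50, rng=np.random.default_rng(0)):
    betas = np.linspace(1e-4, 0.02, timesteps)          # assumed noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    x = rng.standard_normal(shape)                      # start from pure noise
    for t in reversed(range(timesteps)):
        eps = noise_predictor(x, t, text_embedding)     # guess the noise
        # Remove the predicted noise (simplified DDPM update, no learned variance).
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            x = x + np.sqrt(betas[t]) * rng.standard_normal(shape)  # re-inject a bit of noise
    return x

img = sample_image((64, 64, 3), text_embedding=None)
```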
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today, we are going to do some incredible experiments with NVIDIA's next-level image editor AI. Now, there are already many AI-based techniques out there that are extremely good at creating new human or even animal faces. You see here how beautifully this Alias-Free GAN technique can morph one result into another. It truly is a sight to behold. These are really good at editing images. You see, this was possible just a year ago. But this new one, this can do more semantic edits to our images. Let's dive in and see all the amazing things that it can do. If we take a photo of our friends we haven't seen in a while, what happens? Well, of course, the eyes are closed or are barely open. And from now on, this is not a problem. Done. And if we wish to add a smile or take it away, boom, that is also possible. Also, the universal classic, looking too much into the camera, not a problem. Done. Now, the AI can do even more kinds of edits, hairstyle, eyebrows, wrinkles, you name it. However, that's not even the best part. You have seen nothing yet. Are you ready for the best part? Hold on to your papers and check this out. Yes, it even works on drawings, paintings. And, oh my, even statues as well. Absolutely amazing. How cool is that? I love it. Researchers refer to these as out of domain examples. The best part is that this is a proper learning-based method. This means that by learning on human portraits, it has obtained general knowledge. So now, it doesn't just understand these as clumps of pixels, it now understands concepts. Thus, it can reuse its knowledge, even when facing a completely different kind of image, just like the ones you see here. This looks like science fiction, and here it is, right in front of our eyes. Wow. But it doesn't stop there. It can also perform semantic image editing. What is that? Well, look. We can upload an image, look at the labels of these images, and edit the labels themselves. Well, okay, but what is all this good for? Well, the AI understands how these labels correspond to the real photo. So, check this out. We do the easy part, edit the labels, and the AI does the hard part, which is changing the photo appropriately. Look. Yes, this is just incredible. The best part is that we are now even seeing a hint of creativity with some of these solutions. And if you're one of those folks who feel like the wheels and rims are never quite big enough, well, Nvidia has got you covered, that's for sure. And here comes the kicker. It learned to create these labels automatically by itself. So, how many label-to-image pairs did it have to look at to perform all this? Well, what do you think? Millions, or maybe hundreds of thousands? Please leave a comment. I'd love to hear what you think. And the answer is 16. What? 16 million? No, 16. Two to the power of four. And that's it. Well, that is one of the most jaw-dropping facts about this paper. This AI can learn general concepts from very few examples. We don't need to label the entirety of the internet to have this technique be able to do its magic. That is absolutely incredible. Really learning from just 16 examples. Wow! The supplementary materials also showcase loads of results, so make sure to have a look at that. If you do, you'll find that it can even take an image of a bird and adjust their beak sizes even to extreme proportions. Very amusing. Or we can even ask them to look up. 
And I have to say, if no one would tell me that these are synthetic images, I might not be able to tell. Now note that some of these capabilities were present in previous techniques, but this new method does all of these with really high quality and all this in one elegant package. What a time to be alive! Now, of course, not even this technique is perfect, there are still cases that are so far outside of the training set of the AI that the result becomes completely unusable. And this is the perfect place to invoke the first law of papers. What is that? Well, the first law of papers says that research is a process. Don't look at where we are, look at where we will be, two more papers down the line. And don't forget, a couple papers before this, we were lucky to do this. And now, see how far we've come. This is incredible progress in just one year. So, what do you think? What would you use this for? I'd love to hear your thoughts, so please let me know in the comments below. What you see here is a report of this exact paper we have talked about which was made by Weights and Biases. I put a link to it in the description, make sure to have a look, I think it helps you understand this paper better. Weights and Biases provides tools to track your experiments in your deep learning projects. Using their system, you can create beautiful reports like this one to explain your findings to your colleagues better. It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub, and more. And the best part is that Weights and Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wnb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights and Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
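To make the "edit the labels, the AI edits the photo" idea a bit more concrete, here is a hedged sketch of one way such an edit can be found: assume a generator that maps a latent code to both an image and a segmentation map, then optimize a small latent offset so the segmentation matches the user's edited labels while the rest of the image stays put. All components below (`FakeGenerator`, the loss weights, the sizes) are illustrative stand-ins, not the released code.

```python
import torch

# Sketch of label-driven editing with a generator that outputs BOTH an image
# and a segmentation map. Everything here is a toy stand-in.

class FakeGenerator(torch.nn.Module):
    def forward(self, w):
        img = torch.tanh(w[..., :3 * 64 * 64].view(-1, 3, 64, 64))
        seg = torch.sigmoid(w[..., :1 * 64 * 64].view(-1, 1, 64, 64))
        return img, seg

def find_edit_direction(G, w_init, target_seg, edit_mask, steps=200, lr=0.05):
    """Optimize a latent offset so the generated segmentation matches the edited
    labels inside the edited region, while staying close to the original image
    outside of it."""
    delta = torch.zeros_like(w_init, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    with torch.no_grad():
        img_orig, _ = G(w_init)
    for _ in range(steps):
        img, seg = G(w_init + delta)
        seg_loss = ((seg - target_seg) ** 2 * edit_mask).mean()       # match the edited labels
        keep_loss = ((img - img_orig) ** 2 * (1 - edit_mask)).mean()  # keep the rest untouched
        loss = seg_loss + 0.1 * keep_loss
        opt.zero_grad(); loss.backward(); opt.step()
    return delta.detach()

G = FakeGenerator()
w = torch.randn(1, 3 * 64 * 64)
target = torch.zeros(1, 1, 64, 64); target[..., 20:40, 20:40] = 1.0   # the user's edited label map
mask = (target > 0).float()
edit = find_edit_direction(G, w, target, mask, steps=10)
```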
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today we are going to use this new AI Film Director technique to reimagine videos and even movies just by saying what we want to add to them. Yes, really, and you will see that the results are absolute insanity. Now, previous techniques are already able to take a photo of someone, make them young or old, but this work, this is something else. Why? This can do the same for not only one still image, but for an entire video. Just imagine how cool it would be to create a more smiling version of an actor without actually having to re-record the scene. Hold on to your papers and look at how we can take the original video of an Obama speech and make it a little happier by adding a smile to his face. Or even make him younger or older. And these are absolutely amazing. I love the grey hair. I love the young man hair. I also love how the gestures and head turns are coming through. This looks like something straight out of a science fiction movie. I love it. Now, what is the key here? The key here is that this AI knows about temporal coherence. What does that mean? Well, it means that it does this in a way that when it synthesizes the second image, it remembers what it did to the first image and takes it into consideration. That is not trivial at all. And if you feel that this does not sound like a huge deal, let's have a look at an earlier technique that doesn't remember what it did just a moment ago. Prepare your eyes for a fair bit of flickering. This is an AI-based technique from about six years ago that performs style transfer. And when creating image two, it completely disregards what it did to image one. And as you see, this really isn't the way to go. And with all this newfound knowledge, let's look at the footage of the new AI and look at the temporal coherence together. Just look at that. That is absolutely incredible. I am seeing close to perfect temporal coherence. My goodness. Now, not even this technique is perfect. Look, when trying it here, Mark Zuckerberg's hair seems to take on a life of its own. I also find the mouth movements a bit weaker than with the other results. And now, if you allow me, let me also enter the fray. Well, this is not me, but how the AI re-imagines me as a young man. Good job, little man. Keep those papers coming. This is incredible. I love it. And this is what I actually look like. Okay, just kidding. This is Old Man Karoy as imagined by the AI who is vigorously telling you that papers were way better back in his day. And this is the original. And this is my sister, Karolina, filling in when I am busy. I would say that the temporal coherence is weaker here, but it is understandable because the AI has to do a ton of heavy lifting by synthesizing so much more than just a smile. And even with all these amazing results, so far you have seen nothing yet. No sir, it can do even more. If you have been holding onto your paper so far, now squeeze that paper because it works not only on real humans, but animated characters too. And just look at that. We can order these characters to smile more, be angrier or happier. And we can even add lipstick to a virtual character without having to ask any of the animators and modelers of this movie. How cool is that? And remember, we were lucky to be able to do this to one still image just a couple papers ago. And now all this with temporal coherence. And I hope you are enjoying these results, many of which you can only see here on two minute papers. 
Nowhere else. And of course, I would like to send a huge thank you to the authors who took time off their busy day to generate these just for us. Now, even the animated movie synthesis has some weaker results. In my opinion, this is one of them, but still very impressive. And don't even get me started about the rest. These are so good. And just imagine what we will be able to do just a couple more papers down the line. My goodness. What a time to be alive. So what would you use this for? What do you expect to happen? A couple more papers down the line? Please let me know in the comments below. I'd love to hear your thoughts. Weights and Biases provides tools to track your experiments in your deep learning projects. Using their system, you can create beautiful reports like this one to explain your findings to your colleagues better. It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub and more. And the best part is that Weights and Biases is free for all individuals, academics and open source projects. Make sure to visit them through wmb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights and Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
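Since temporal coherence was the key idea of this episode, here is a generic sketch of what a temporal consistency penalty can look like: the edited frame is allowed to change between timesteps only about as much as the original footage changed. This is a common formulation used for illustration, not the exact loss of the paper above.

```python
import torch

# Generic temporal coherence term: penalize the edited video for drifting between
# consecutive frames more than the original video did.

def temporal_coherence_loss(edited_prev, edited_curr, orig_prev, orig_curr):
    edit_change = edited_curr - edited_prev      # how much the edited result moved
    orig_change = orig_curr - orig_prev          # how much the input footage moved
    # The edit may move as much as the original did, but no more.
    return ((edit_change - orig_change) ** 2).mean()

orig = torch.rand(8, 3, 128, 128)                # 8 frames of a toy video
edited = orig + 0.05 * torch.randn_like(orig)    # pretend these came from an editor network
loss = sum(
    temporal_coherence_loss(edited[t - 1], edited[t], orig[t - 1], orig[t])
    for t in range(1, orig.shape[0])
) / (orig.shape[0] - 1)
```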
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today we are going to explore what happens if we unleash an AI to read the internet and then ask it some silly questions and we'll get some really amazing answers. But how? Well, in the last few years, OpenAI set out to train an AI named GPT-3 that could finish your sentences. Then they made image GPT and it could even finish your images. Yes, not kidding. It could identify that the cat here likely holds a piece of paper and finish the picture accordingly and even understood that if we have a droplet here and we see just a portion of the ripples then this means a splash must be filled in. So, in summary, GPT-3 is trained to finish your sentences or even your images. However, scientists at OpenAI identified that it really isn't great at following instructions. Here is an excellent example. Look, we ask it to explain the moon landing to a six-year-old and it gets very confused. It seems to be in text completion mode instead of trying to follow our instructions. Meanwhile, this is InstructGPT, their new method, which can not only finish our sentences, but also follow our instructions. And would you look at that? It does that successfully. Good job, little AI. Now, this was a really simple example, but of course we are experienced fellow scholars here, so let's try to find out what this AI is really capable of in three really cool examples. Remember in the Hitchhiker's Guide to the Galaxy where people could ask an all-knowing machine the most important questions of things that trouble humanity. Yes, we are going to do exactly that. Round one. So, dear all-knowing AI, what happens if you fire a cannonball directly at a pumpkin at high speeds? Well, would you look at that? According to GPT-3, pumpkins are strong magnets. I don't know which internet forum told the AI that? Not good. And now hold on to your papers and let's see the new technique's answer together. Wow! This is so much more informative. Let's break it down together. It starts out by hedging, noting that it is hard to say because there are too many unpredictable factors involved. And annoying as it might seem, it is correct to say all this. Good start. Then, it lists some of the factors that might decide the fate of that pumpkin, like the size of the cannonball, distance and velocity. Yes, we are getting there, but please give me something concrete. Yes, there we go. It says that, quote, some of the more likely possible outcomes include breaking or knocking the pumpkin to the ground, cracking the pumpkin, or completely obliterating it. Excellent. A little smarty pants AI at our disposal. Amazing. I love it. Round 2, code summarization. Well, DeepMind's AlphaCode is capable of reading a competition-level programming problem and coding up a correct solution right in front of our eyes. That is all well and good, but if we give GPT-3 a piece of code and ask it what it does, well the answer is not only not very informative, but it's also incorrect. Now, InstructGPT gives a much more insightful answer, which shows a bit of understanding of what this code does. That is amazing. Note that it is not completely right, but it is directionally correct. Partial credit for the AI. Round 3, write me a poem. About what? Well, about a wise frog. With GPT-3 we get the usual confusion. By round 3 we really see that it was really not meant to do this. And with InstructGPT, let's see... Hmm. 
An all-knowing frog who is the master of disguise, a great teacher, and quite possibly the bringer of peace and tranquility to humanity. All written by the AI. This is fantastic. I love it. What a time to be alive. Now we only looked at three examples here, but what about the rest? Worry not for a second, OpenAI scientists ran a detailed user study and found out that people preferred InstructGPT solutions way more often than the previous techniques on a larger set of questions. That is a huge difference. Absolutely amazing. Once again, incredible progress. Just one more paper down the line. So, is it perfect? No. Of course not. Let's highlight one of its limitations. If the question contains false premises, it accepts the premise as being real and goes with it. This leads to making things up. Yes, really. Check this out. Why aren't birds real? GPT-3 says something. I am not sure what this one is about. This almost sounds like gibberish. While InstructGPT accepts the premise that birds aren't real and even helps us craft an argument for that. This is a limitation and I must say a quite remarkable one. An AI that makes things up. Food for thought. And remember at the start of this episode we looked at this moon landing example. Did you notice the issue there? Please let me know in the comments below. So, what is the issue here? Well, beyond not being all that informative, it was asked to describe the moon landing in a few sentences. This is not a few sentences. This is one sentence. If we give it constraints like that, it tries to adhere to them, but is often not too great at that. And of course, both of these shortcomings show us the way for an even better follow-up paper, which based on previous progress in AI research could appear maybe not even in years, but much quicker than that. If you are interested in such a follow-up work, make sure to subscribe and hit the bell icon to not miss it when it appears on Two Minute Papers. So, what would you use this for? What do you expect to happen? A couple more papers down the line? Please let me know in the comments below. I'd love to hear your thoughts. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
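The user study mentioned above is also how models like this are trained: humans rank candidate answers, and a reward model learns to score the preferred answer higher. Below is a minimal sketch of the standard pairwise preference loss behind that idea; `reward_model` is a toy stand-in, not OpenAI's implementation.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of the pairwise preference loss used to train reward models
# from human rankings. `reward_model` is a placeholder scorer.

def reward_model(features):
    # Placeholder: a real reward model is a large transformer scoring (prompt, answer).
    return features.sum(dim=-1)

def preference_loss(features_chosen, features_rejected):
    """Push the score of the human-preferred answer above the rejected one."""
    r_chosen = reward_model(features_chosen)
    r_rejected = reward_model(features_rejected)
    return -F.logsigmoid(r_chosen - r_rejected).mean()

chosen = torch.randn(4, 16)      # toy features for 4 preferred answers
rejected = torch.randn(4, 16)    # toy features for the corresponding rejected answers
loss = preference_loss(chosen, rejected)
```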
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. As you see here, I love Two Minute Papers. But fortunately, I am not the only one, Elon Musk loves Two Minute Papers too. What's more, even cats love it. Everybody loves Two Minute Papers. So, what was that? Well, these are all synthetic videos that were made by this new magical AI. So, here we are adding synthetic data to an already existing photorealistic video and the AI is meant to be able to understand the movement, facial changes, geometry changes, and track them correctly, and move the mustache, the tattoos, or anything else properly. That is a huge challenge. How is that even possible? Many AI-based techniques from just a year or two ago were not even close to being able to pull this off. However, now, hold onto your papers and have a look at this new technique. Now that is a great difference. Look at that. So cool. This is going to be really useful for, at the very least, two things. One, augmented reality applications. For instance, have a look at this cat. Well, the bar on the internet for cat videos is quite high and this will not cut it. But wait, yes, now we're talking. The other examples also showcase that the progress in machine learning and AI research these days is absolutely incredible. I would like to send a big thank you to the authors for taking the time off their busy day to create these results only for us. That is a huge honor. Thank you so much. So far, this is fantastic news, especially given that many works are limited to one domain. For instance, Microsoft's human generator technique is limited to people. However, this works on people, dogs, cats and even Teslas. So good. I love it. What a time to be alive. But wait, I promised two applications. Not one. So what is the other one? Well, image editing. Image editing. Really? What about that? That is not that new. But... Aha! Not just image editing, but mass-scale image editing. Everybody can get antlers, tattoos, stickers, you name it. We just chuck in the dataset of images, choose the change that we wish to make and out comes the entirety of the edited dataset automatically. Now that is a fantastic value proposition. But of course, the seasoned Fellow Scholar immediately knows that surely not even this technique is perfect. So, where are the weak points? Look... Oh yes. That. I see this issue for nearly every technique that takes on this task. It still has troubles with weird angles and occlusions. But we also have good news. If you can navigate your way around an online Colab notebook, you can try it too. You know what to do? Yes, let the experiments begin. So, what would you use this for? Who would rock the best mustache? And what do you expect to happen? A couple more papers down the line? Please let me know in the comments below. This episode has been supported by Cohere AI. Cohere builds large language models and makes them available through an API so businesses can add advanced language understanding to their system or app quickly with just one line of code. You can use your own data, whether it's text from customer service requests, legal contracts, or social media posts to create your own custom models to understand text or even generate it. For instance, it can be used to automatically determine whether your messages are about your business hours, returns, or shipping. Or it can be used to generate a list of possible sentences you can use for your product descriptions. 
Make sure to go to cohere.ai slash papers or click the link in the video description and give it a try today. It's super easy to use. Thanks for watching and for your generous support and I'll see you next time.
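The mass-scale editing workflow described in this episode is, at its core, a simple loop: pick one edit, then apply it to every image in a dataset automatically. Here is a hedged sketch of that pipeline shape; `load_image`, `apply_edit` and `save_image` are placeholders for your own I/O and editing model, not functions from the paper's code.

```python
from pathlib import Path

# Sketch of "mass scale image editing": one chosen edit, applied automatically to
# a whole folder of images. All three helpers below are placeholders.

def load_image(path):        raise NotImplementedError
def save_image(img, path):   raise NotImplementedError
def apply_edit(img, edit):   raise NotImplementedError  # e.g. "add a mustache"

def edit_dataset(in_dir: str, out_dir: str, edit: str):
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(in_dir).glob("*.png")):
        img = load_image(path)
        edited = apply_edit(img, edit)          # the AI does the hard part
        save_image(edited, out / path.name)     # we just collect the results

# edit_dataset("cats/", "cats_with_antlers/", edit="antlers")
```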
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today we are going to see if Waymo's AI can recreate a virtual copy of San Francisco from 2.8 million photos. To be able to do that, they rely on a previous AI technique that can take a collection of photos like these and magically create a video where we can fly through these photos. This is what we call a NeRF-based technique and these are truly amazing. Essentially, photos go in, the AI fills in the gaps and reality comes out. And there are a lot of gaps between these photos, all of which are filled in with high quality synthetic data. So, as you see with these previous methods, great leaps are being made, but one thing stayed more or less the same. And that is, the scale of these scenes is not that big. So, scientists at Waymo had a crazy idea and they said rendering just a tiny scene is not that useful. We have millions of photos laying around, why not render an entire city like this? So, can Waymo do that? Well, maybe, but that would take Waymo. I am sorry, I am so sorry, I just couldn't resist. Well, let's see what they came up with. Look, these self-driving cars are going around the city, they take photos along their journey and... Well, I have to say that I am a little skeptical here. Have a look at what previous techniques could do with this dataset. This is not really usable. So, could Waymo pull this off? Well, hold on to your papers and let's have a look together. My goodness, this is their fully reconstructed 3D neighborhood from these photos. Wow, that is superb. And don't forget, most of this information is synthetic, that is, filled in by the AI. Does this mean that? Yes, yes it does. It means three amazing things. One, we can drive a different path that has not been driven before by these cars and still see the city correctly. Two, we can look at these buildings from viewpoints that we don't have enough information about and the AI fills in the rest of the details. So cool. But it doesn't end there. No sir, not even close. Three, here comes my favorite. We can also engage in what they call appearance modulation. Yes, some of the driving took place at night, some during the daytime, so we have information about the change of the lighting conditions. What does that mean? It means that we can even fuse all this information together and choose the time of day for our virtual city. That is absolutely amazing. I love it. Yes, of course, not even this technique is perfect. The resolution and the details should definitely be improved over time. Plus, it does well with a stationary city, but with dynamic moving objects, not so much. But do not forget, the original first NeRF paper was published just two years ago and it could do this. And now just a couple papers down the line and we have not only these tiny scenes, but entire city blocks. So much improvement in just a couple papers. How cool is that? Absolutely amazing. And with this, we can drive and play around in a beautiful virtual world that is a copy of the real world around us. And now, if we wish it to be a little different, we can even have our freedom in changing this world according to our artistic vision. I would love to see more work in this direction. But wait, here comes the big question. What is all this good for? Well, one of the answers is sim-to-real. What is that? Sim-to-real means training an AI in a simulated world and trying to teach it everything it needs to learn there before deploying it into the real world. 
Here is an amazing example. Look, OpenAI trained a robot hand in a simulation to be able to rotate these Rubik's cubes. And then deployed the software onto a real robot hand and look, it can use this simulation knowledge and now it works in the real world too. But sim-to-real has relevance to self-driving cars too. Look, Tesla is already working on creating virtual worlds and training their cars there. One of the advantages of that is that we can create really unlikely and potentially unsafe scenarios, but in these virtual worlds, the self-driving AI can train itself safely. And when we deploy them into the real world, they will have all this knowledge. It is fantastic to see Waymo also moving in this direction. What a time to be alive. So, what would you use this for? What do you expect to happen? A couple more papers down the line? Please let me know in the comments below. I'd love to hear your thoughts. What you see here is a report of this exact paper we have talked about which was made by Weights and Biases. I put a link to it in the description. Make sure to have a look. I think it helps you understand this paper better. Weights and Biases provides tools to track your experiments in your deep learning projects. Using their system, you can create beautiful reports like this one to explain your findings to your colleagues better. It is used by many prestigious labs including OpenAI, Toyota Research, GitHub and more. And the best part is that Weights and Biases is free for all individuals, academics and open source projects. Make sure to visit them through wnb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights and Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
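For the technically curious, the NeRF idea mentioned in this episode boils down to predicting a density and a color at sample points along each camera ray and blending them into one pixel. The sketch below shows that textbook compositing step with made-up inputs; it is not Waymo's pipeline, just the standard formulation.

```python
import numpy as np

# NeRF-style compositing: blend the samples along one camera ray into a pixel color.

def composite_ray(densities, colors, deltas):
    """densities: (N,), colors: (N, 3), deltas: (N,) distances between samples."""
    alphas = 1.0 - np.exp(-densities * deltas)                          # opacity of each segment
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))  # light surviving to each sample
    weights = transmittance * alphas                                    # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)                      # final pixel color

n = 64
densities = np.random.rand(n) * 2.0   # toy values; a real system predicts these with a network
colors = np.random.rand(n, 3)
deltas = np.full(n, 0.05)
pixel = composite_ray(densities, colors, deltas)
```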
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today we are going to have a look at this insane light transport simulation paper and get our minds blown in three chapters. Chapter 1. Radiosity. Radiosity is an old, old light transport algorithm that can simulate the flow of light within a scene. And it can generate a scene of this quality. Well, not the best clearly, but this technique is from decades ago. Back in the day, this was a simple and intuitive attempt at creating a light simulation program and quite frankly, this was the best we could do. It goes something like this. Slice up the scene into tiny surfaces and compute the light transport between these tiny surfaces. This worked reasonably well for diffuse light transport, matte objects if you will. But it was not great at rendering shiny objects. And it gets even worse. Look, you see these blocky artifacts here? These come from the fact that the scene has been subdivided into these surfaces and the surfaces are not fine enough for these boundaries to disappear. But, yes, I hear you asking, Karoy, why talk about this ancient technique? I'll tell you in a moment, you'll see, I promise. So, yes, radiosity is old. Some professors still teach it to their students. It makes an interesting history lesson, but I haven't seen any use of this technique in the industry for decades now. If radiosity were a vehicle, it would be a horse carriage in the age of high-tech Tesla cars. So, I know what you're thinking. Yes, let's have a look at those Teslas. Chapter 2. Neural rendering. Many modern light simulation programs can now simulate proper light transport with shiny objects, none of these blocky artifacts, they are, as you see, absolutely amazing. They can render all kinds of material models, detailed geometry, caustics, color bleeding, you name it. However, they typically start out from a noisy image and as we compute the path of more and more light rays, this noisy image clears up over time. But, this still takes a while. How long? Well, from minutes to days. Ouch. And then, neural rendering entered the frame. Here you see our earlier paper that replaced the whole light simulation program with a neural network that learned how to do this. And it can create these images so quickly that it easily runs not in minutes or days, but as fast as you see here. Yes, real time on a commodity graphics card. Now note that this neural renderer is limited to this particular scene. With this, I hope that it is easy to see that we are now so far beyond radiosity that it sounds like a distant memory of the olden times. So, once again, why talk about radiosity? Well, check this out. Chapter 3. Neural radiosity. Excuse me, what? Yes, you heard it right. This paper is about neural radiosity. This work elevates the old, old radiosity algorithm by using a similar formulation to the original technique, but also infusing it with a powerful neural network. It is using the same horse carriage, but strapping it onto a rocket, if you will. Now you have my attention. So, let's see what this can do together. Look. Yes, it can render really intense specular highlights. Hold on to your papers and… Wow. Once again, the results look like a nearly pixel-perfect copy of the reference simulation. And we now understand the limitations of the old radiosity, so let's strike where it hurts the most. Yes, perfectly specular, mirror-like surfaces. Let's see what happens here. Well, I can hardly believe what I am seeing here. No issues whatsoever. 
Still close to pixel-perfect. This new paper truly elevates the good old radiosity to the next level. So good. Loving it. But wait a second. I hear you asking, yes, Karoy, this is all well and good. But if we have the reference simulation, why not just use that? Good question. Well, that's right. The reference is great, but that one takes up to several hours to compute, and the new technique can be done super quickly. Yet they look almost exactly the same. My goodness. In fact, let's look at an equal time comparison against one of the Tesla techniques, path tracing. We give the two techniques the same amount of time and see what they can produce. Let's see. Now that is no contest. Look, this is Eric Veach's legendary scene where the light only comes from the neighboring room through a door that is only slightly ajar. This is notoriously difficult for any kind of light transport algorithm. And yet, look at how good this new one is in tackling it. Now, note that not even this technique is perfect. It has two caveats. One, training takes place per scene. This means that we need to give the scenes to the neural network in advance. So it can learn how light bounces off of this place. And this can take from minutes to hours. But, for instance, if we have a video game with a limited set of places that you can go, we can train our neural network on all of them in advance and deploy it to the players who can then enjoy it for as long as they wish. No more training is required after that. But the technique would still need to be a little faster than it currently is. And two, we also need quite a bit of memory to perform all this. And yes, I think this is an excellent place to invoke the first law of papers which says that research is a process. Do not look at where we are, look at where we will be two more papers down the line. And two more papers down the line, who knows, maybe we get this in real time and with a much friendlier memory consumption. Now, there is one more area that I think would be an excellent direction for future work. And that is about the per-scene training of the neural network. In this work, it has to get a feel of the scene before the light simulation happens. So how does the knowledge learned on one scene transfer to others? I imagine that it should be possible to create a more general version of this that does not need to look at a new scene before the simulation takes place. And in summary, I absolutely love this paper. It takes an old, old algorithm, blows the dust off of it and completely reinvigorates it by infusing it with a modern learning-based technique. What a time to be alive! So what do you think? What would you use this for? I'd love to hear your thoughts. Please let me know in the comments below. And when watching all these beautiful results, if you feel that this light transport thing is pretty cool and you would like to learn more about it, I held a master-level course on this topic at the Technical University of Vienna. Since I was always teaching it to a handful of motivated students, I thought that the teachings shouldn't only be available for the privileged few who can afford a college education, but the teachings should be available for everyone. Free education for everyone, that's what I want. So the course is available free of charge for everyone, no strings attached, so make sure to click the link in the video description to get started. We write a full light simulation program from scratch there and learn about physics, the world around us, and more. 
If you watch it, you will see the world differently. Perceptilabs is a visual API for TensorFlow carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. And it even generates visualizations for all the model variables and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers and start using their system for free today. Thanks to Perceptilabs for their support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
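To make chapter one a little more concrete, here is a tiny sketch of the classic radiosity setup the episode describes: slice the scene into patches and solve B_i = E_i + rho_i * sum_j F_ij * B_j by simple iteration. The geometry term (the form factors) is made up here; the neural version discussed above keeps the same equation but represents the solution with a network.

```python
import numpy as np

# Classic radiosity: patches exchange light until the radiosity B settles.
# B_i = E_i + rho_i * sum_j F_ij * B_j  (E: emission, rho: reflectivity, F: form factors)

def solve_radiosity(emission, reflectivity, form_factors, iterations=100):
    B = emission.copy()
    for _ in range(iterations):
        # Each patch gathers light from every other patch (Jacobi-style iteration).
        B = emission + reflectivity * (form_factors @ B)
    return B

n_patches = 4
emission = np.array([1.0, 0.0, 0.0, 0.0])            # one patch is a light source
reflectivity = np.array([0.0, 0.7, 0.7, 0.5])
form_factors = np.full((n_patches, n_patches), 0.2)  # toy, made-up geometry term
np.fill_diagonal(form_factors, 0.0)                  # a patch does not see itself
B = solve_radiosity(emission, reflectivity, form_factors)
```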
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today we are going to see how Nvidia's new AI can do three seemingly impossible things with just one elegant technique. For reference, here is a previous method that can take a collection of photos like these and magically create a video where we can fly through these photos. This is what we call a NeRF-based technique and these are truly amazing. Essentially, photos go in and reality comes out. So, I know what you're thinking: Karoy, this looks like science fiction. Can even this be topped? And the answer is yes, yes it can. Now you see, this new technique can also look at a small collection of photos and be it people or cats, it learns to create a continuous video of them. This looks fantastic, and remember, most of the information that you see here is synthetic, which means it is created by the AI. So good, but wait, hold on to your papers because there is a twist. It is often the case for some techniques that they think in terms of photos, while other techniques think in terms of volumes. And get this, this is a hybrid technique that thinks in terms of both. Okay, so what does that mean? It means this. Yes, it also learned to not only generate these photos, but also the 3D geometry of these models at the same time. And this quality of the results is truly something else. Look at how previous techniques struggle with the same task. Wow, they are completely different than the input model. And you might think that of course they are not so good, they are probably very old methods. Well, not quite. Look, these are not some ancient techniques. For instance, GIRAFFE is from the end of 2020 or the end of 2021, depending on which variant they used. And now let's see what the new method does on the same data. Wow, my goodness, now that is something. Such improvement in so little time. The pace of progress in AI research is nothing short of amazing. And not only that, but everything it produces is multi-view consistent. This means that we don't see a significant amount of flickering as we rotate these models. There is a tiny bit on the fur of the cats, but other than that, very little. That is a super important usability feature. But wait, it does even more. Two, it can also perform one of our favorites, super resolution. What is super resolution? Simple: a coarse, pixelated image goes in and what comes out? Of course, a beautiful, detailed image. How cool is that? And here comes number three. It projects these images into a latent space. What does that mean? A latent space is a made-up place where we are trying to organize data in a way that similar things are close to each other. In our earlier work, we were looking to generate hundreds of variants of a material model to populate this scene. In this latent space, we can concoct all of these really cool digital material models. A link to this work is available in the video description. Now, let's see. Yes, when we take a walk in the internal latent space of this technique, we can pick a starting point, a human face, and generate these animations as this face morphs into other possible human faces. In short, it can generate a variety of different people. Very cool. I love it. Now, of course, not even this technique is perfect. I see some flickering around the teeth, but otherwise, this will be a fantastic tool for creating virtual people. And remember, not only photos of virtual people, we get the 3D geometry for their heads too. 
With this, we are one step closer to democratizing the creation of virtual humans in our virtual worlds. What a time to be alive. And if you have been holding onto your paper so far, now squeeze that paper, because get this. You can do all of this in real time. And all of these applications can be done with just one elegant AI technique. Once again, scientists at Nvidia knocked it out of the park with this one. Bravo. So, what about you? If all this can be done in real time, what would you use this for? I'd love to know. Please let me know in the comments below. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
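The face-morphing animations in this episode come from walking through the generator's latent space. Here is a hedged sketch of that idea with a placeholder generator: blend two latent codes and render one frame per blend. The code sizes and the linear blend are illustrative choices, not NVIDIA's exact recipe.

```python
import torch

# Latent space walk: blend between two latent codes and render one frame per blend.
# `generator` is a placeholder for a trained 3D-aware generator.

def generator(w, camera_pose):
    # Placeholder: a real model would return a rendered image (and geometry).
    return torch.zeros(3, 128, 128)

def morph(w_start, w_end, camera_pose, n_frames=30):
    frames = []
    for i in range(n_frames):
        t = i / (n_frames - 1)
        w = (1.0 - t) * w_start + t * w_end      # linear blend between the two codes
        frames.append(generator(w, camera_pose))
    return torch.stack(frames)

w_a, w_b = torch.randn(512), torch.randn(512)    # two toy latent codes
video = morph(w_a, w_b, camera_pose=None)        # (30, 3, 128, 128)
```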
|
Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir. Oh my goodness! This work is history in the making. Today we are going to have a look at AlphaFold, perhaps one of the most important papers of the last few years. And you will see that nothing that came before even comes close to it and that it truly is a gift to humanity. So what is AlphaFold? AlphaFold is an AI that is capable of solving protein structure prediction which we will refer to as protein folding. Okay, but what is a protein and why does it need folding? A protein is a string of amino acids. These are the building blocks of life. This is what goes in, which in reality has a 3D structure. And that is protein folding. Letters go in and a 3D object comes out. This is hard. How hard exactly? Well, let's compare it to DeepMind's amazing previous projects and we'll see that none of these projects even come close in difficulty. For instance, DeepMind's previous AI learned to play chess. Now, why does this matter, as we already have Deep Blue, which is a chess computer that can play at the very least as well as Kasparov did, and it was built in 1995? So, why is chess interesting? Well, the space of possible moves is huge. And Deep Blue in 1995 was not an AI in the stricter sense, but a handcrafted technique. This means that it can play chess and that's it. One algorithm, one game. If you want it to play a different game, you write a different algorithm. And yes, that is the key difference. DeepMind's chess AI is a general learning algorithm that can learn many games. For instance, Japanese chess, shogi, too. One algorithm, many games. And yes, chess is hard, but these days the AI can manage. Then, Go is the next level. This is not just hard, it is really hard. The space of possible moves is significantly bigger and we can't just evaluate all the long-term effects of our moves. It is even more hopeless than chess. And that's often why people say that this game requires some sort of intuition to play. But DeepMind's AI solved that too and beat the world champion Go player 4-1 in a huge media event. The AI can still manage. Now, get this. If chess is hard and Go is very hard, then protein folding is sinfully difficult. Once again, the string of text encoding the amino acids goes in and a 3D structure comes out. Why is this hard? Why not just try every possible 3D structure and see what sticks? Well, not quite. The search space for this problem is still stupendously large, perhaps not as big as playing a continuous strategy game like Starcraft 2, but the search here is much less forgiving. Also, we don't have access to a perfect scoring function, so it is very difficult to define what exactly should be learned. You see, in a strategy game, a win is a win, but for proteins, nature doesn't really tell us what it is up to when creating these structures. Thus, DeepMind did very well in chess and Go and Starcraft 2, and challenging as they are, they are not even close to being as challenging as protein folding. Not even close. To demonstrate that, look, this is CASP. I've heard DeepMind CEO Demis Hassabis call it the Olympics of protein folding. If you look at how teams of scientists prepare for this event, you will probably agree that yes, this is indeed the Olympics of protein folding. At about a score of 90, we can think of protein folding as a mostly solved problem. But, no need to worry about definitions, though. Look, we are not even close to 90. And it gets even worse. Look, this GDT score means the global distance test. 
This is a measure of similarity between the predicted and the real protein structure. And, wait a second, what? The results are not only not too good, but they appear to get worse over time. Is that true? What is going on here? Well, there is an explanation. The competition gets a little harder over time, so even flat results mean that there is a little improvement over time. And now, hold on to your papers, and let's look at the results from DeepMind's AI-based solution, AlphaFold. Wow, now we're talking. Look at that. The competition gets harder, and it is not only flat, but can that really be, it is even better than the previous methods. But, we are not done here. No, no, not even close. If you have been holding onto your papers so far, now squeeze that paper, because what you see here is old news. Only two years later, AlphaFold 2 appeared. And, just look at that. It came in guns blazing. So much so that the result is, I can't believe it. It is around the 90 mark. My goodness, that is history in the making. Yes, this is the place on the internet, where we get unreasonably excited by a large blue bar. Welcome to two-minute papers. But, what does this really mean? Well, in absolute terms, AlphaFold 2 is considered to be about three times better than previous solutions. And, all that in just two years. That is a miracle right in front of our eyes. Now, let's pop the hood and see what is inside this AI. And, hmm, look at all these elements in the system that make this happen. So, where do we even start? Which of these is the most important? What is the key? Well, everything. And, nothing. I will explain this in a moment. That does not sound very enlightening. So, what is going on? Well, DeepMind ran a detailed ablation study on what mattered and the result is the following. Everything mattered. Look, with few exceptions, every part adds its own little piece to the final result. But, none of these techniques are a silver bullet. But, to understand a bit more about what is going on here, let's look at three things. One, AlphaFold 2 is an end-to-end network that can perform iterative refinement. What do these mean? What this means is that everything needed to solve the task is learned by the network. And, that it starts out from a rough initial guess. And then, it gradually improves it. You see this process here, and it truly is a sight to behold. Two, it uses an attention-based model. What does that mean? Well, look. This is a convolutional neural network. This is wired in a way that information flows to neighboring neurons. This is great for image recognition, because usually, the required information is located nearby. For instance, let's imagine that we wish to train a neural network that can recognize a dog. What do we need to look at? Well, floppy ears, a black snout, fur, okay, we're good, we can conclude that we have a dog here. Now, have you noticed? Yes, all of this information is located nearby. Therefore, a convolutional neural network is expected to do really well at that. However, check this out. This is a transformer, which is an attention-based model. Here, information does not flow between neighbors, no sir. Here, information flows everywhere. This has spontaneous connections that are great for almost anything if we can use them well. For instance, when reading a book, if we are at page 100, we might need to recall some information from page 1. Transformers are excellent for tasks like that. 
They are still quite new, just a few years old, and are already making breakthroughs. So, why use them for protein folding? Well, things that are 200 amino acids apart in the text description can still be right next to each other in the 3D space. Yes, now we know that for that we need attention networks, for instance, a transformer. These are seeing a great deal of use these days. For instance, Tesla also uses them for training their self-driving cars. Yes, so these things mattered. But so many other things did too. Now, I mentioned that the key is everything. And nothing. What does that mean? Well, look here. Apart from a couple examples, there is no silver bullet here. Every single one of these improvements bumped the score a little bit. But all of them are needed for the breakthrough. Now, one of the important elements is also adding physics knowledge. How do you do that? Well, typically the answer is that you don't. You see, when we design a handcrafted technique, we write the knowledge into an algorithm by hand. For instance, in chess, there are a bunch of well-known openings for the algorithm to consider. For protein folding, we can tell the algorithm that if you see this structure, it typically bends this way. Or we can also show it common protein templates, kind of like openings for chess. We can add all this valuable expertise to a handcrafted technique. Now, we noted that scientists at DeepMind decided to use an end-to-end learning system. I would like to unpack that for a moment because this design decision is not trivial at all. In fact, in a moment, I bet you will think that it's flat out counterintuitive. Let me explain. If we are a physics simulation researcher, and we have a physics simulation program, we take our physics knowledge and write a computer program to make use of that knowledge. For instance, here you see this being used to great effect, so much so that what you see here is not reality, but a physics simulation. All handcrafted. Clearly, using this concept, we can see that human ingenuity goes very far, and we can write super powerful programs. Or we can do end-to-end learning, where, surprisingly, we don't write our knowledge into the algorithm at all. We give it training data instead, and let the AI build up its own knowledge base from that. And AlphaFold is an end-to-end learning project, so almost everything is learned. Almost. And one of the great challenges of this project was to infuse the AI with physics knowledge without impacting the learning. That is super hard. So, training, huh? How long does this take? Well, get this. DeepMind can train this incredible folding AI in as little as two weeks. Hmm, why is two weeks so little? Well, after this step is done, the AI can be given a new input and will be able to create this 3D structure in about a minute. And we can then reuse this trained neural network for as long as we wish. Phew, so this is a lot of trouble to fold these proteins. So, what is all this good for? The list of applications is very impressive. I'll give you just a small subset of them that I really liked. For instance, it helps us better understand the human body or create better medicine against malaria and many other diseases, develop more healthy food, or develop enzymes to break down plastic waste, and that is just the start. Now, you're probably asking, Karoy, you keep saying that this is a gift to humanity. So, why is it a gift to humanity? Well, here comes the best part. 
A little after publishing the paper, DeepMind made these 3D structure predictions available free for everyone. For instance, they have made their human protein predictions public. Beyond that, they already have made their predictions public for yeast, important pathogens, crop species and more. And thus, I have already seen follow-up works on how to use this for developing new drugs. What a time to be alive! Now, note that this is but one step in a thousand-step journey. But one important step nonetheless. And I would like to send huge congratulations to DeepMind. Something like this costs a ton to develop, and note that it is not easy or maybe not even possible to immediately make a product out of this and monetize it. This truly is a gift to humanity. And a project like this can only emerge from proper long-term thinking that focuses on what matters in the long term. Not just thinking about what is right now. Bravo. Now, of course, not even AlphaFold 2 is perfect. For instance, it's not always very confident about its own solutions. And it also performs poorly in antibody interactions. Both of these are subject to intense scrutiny and follow-up papers are already appearing in these directions. Now, one last thing. Why does this video exist? I got a lot of questions from you asking why I made no video on AlphaFold 2. Well, protein folding is a highly multidisciplinary problem which, beyond machine learning, requires tons of knowledge in biology, physics, and engineering. Thus, my answer was that I don't feel qualified to speak about this project, so I better not. However, something has changed. What has changed? Well, now I had the help of someone who is very qualified. As qualified as it gets, because it is the one and only John Jumper, the first author of the paper who kindly agreed to review the contents of this video to make sure that I did not mess up too badly. Thus, I would like to send a big thank you to John, his team, and DeepMind for creating AlphaFold 2 and helping this video come into existence. It came late, so we missed out on a ton of views, but that doesn't matter. What matters is that you get an easy to understand and accurate description of AlphaFold 2. Thank you so much for your patience. This video has been supported by Weights and Biases. Being a machine learning researcher means doing tons of experiments and, of course, creating tons of data. But I am not looking for data, I am looking for insights. And Weights and Biases helps with exactly that. They have tools for experiment tracking, data set and model versioning, and even hyper-parameter optimization. No wonder this is the experiment tracking tool of choice of OpenAI, Toyota Research, Samsung, and many more prestigious labs. Make sure to use the link WNB.me slash paper intro, or just click the link in the video description, and try this 10-minute example of Weights and Biases today to experience the wonderful feeling of training a neural network and being in control of your experiments. After you try it, you won't want to go back. Thanks for watching and for your generous support, and I'll see you next time.
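Since the convolution-versus-attention distinction carried much of the explanation above, here is a toy contrast in code: a 1D convolution only mixes information within a small neighborhood, while self-attention lets position 0 look directly at position 199, which is why it suits residues that are far apart in the sequence but close in 3D. This is a generic illustration, not AlphaFold's architecture.

```python
import torch
import torch.nn.functional as F

# Convolution: local information flow. Attention: every position can attend to
# every other position. Both applied to a toy "sequence of residues".

seq = torch.randn(1, 8, 200)        # batch of 1, 8 channels, 200 residues

# Convolution: each output only sees a small neighborhood (5 positions wide here).
conv = torch.nn.Conv1d(in_channels=8, out_channels=8, kernel_size=5, padding=2)
local_out = conv(seq)

# Self-attention: all-pairs interaction, computed explicitly for clarity.
x = seq.transpose(1, 2)             # (1, 200, 8): one 8-dim feature per residue
q, k, v = x, x, x                   # untrained toy "projections" (identity)
scores = q @ k.transpose(1, 2) / (x.shape[-1] ** 0.5)   # (1, 200, 200) all-pairs scores
weights = F.softmax(scores, dim=-1)
global_out = weights @ v            # every residue mixes information from all others
```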
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to see how a robot doggy can help us explore dangerous areas without putting humans at risk. Well, not this one, mind you, because this one is not performing too well. Why is that? Well, this is a proprioceptive robot. This means that the robot is essentially blind. Yes, really. It only senses its own internal state and that's it. For instance, it knows about the orientation and twist of its base unit, a little joint information like positions and velocities, and that's about it. It is still remarkable that it can navigate at all. However, this often means that the robot has to feel out the terrain before being able to navigate on it. Why would we do that? Well, if we have difficult seeing conditions, for instance dust, fog, or smoke, a robot that does not rely on seeing is suddenly super useful. However, when we have good seeing conditions, as you see, we have to be able to do better than this. So, can we? Well, have a look at this new technique. Hmm, now we're talking. This new robot is exteroceptive, which means that it has cameras, it can see, but it also has proprioceptive sensors too. These are the ones that tell it the orientation and similar information. But this new one fuses together proprioception and exteroception. What does that mean? Well, it can see and it can feel too. Kind of. The best of both worlds. Here you see how it sees the world. Thus, it does not need to feel out the terrain before navigating. But, and here's the key, now hold on to your papers, because it can even navigate reasonably well if its sensors get covered. Look. That is absolutely amazing. And with the covers removed, we can give it another try, let's see the difference, and oh yeah, back in the game, baby. So far, great news, but what are the limits of this machine? Can it climb stairs? Well, let's have a look. Yep, no problem. Not only that, but it can even go on a long hike in Switzerland and reach the summit without any issues, and in general, it can navigate a variety of really difficult terrains. So, if it can do all that, now let's be really tough on it and put it to the real test. This is the testing footage before the grand challenge. And it passed with flying colors. So, let's see how it did in the grand challenge itself. Oh my, you see here, it has to navigate an underground environment completely autonomously. This really is the ultimate test. No help is allowed, it has to figure out everything by itself. And this test has uneven terrain with lots of rubble, difficult seeing conditions, dust, tunnels, caves, you name it, an absolute nightmare scenario. And this is incredible. During this challenge, it has explored more than a mile, and how many times has it fallen? Well, what do you think? Zero. Yes, zero times. Wow, absolutely amazing. While we look at how well it can navigate this super slippery and squishy terrain, let's ask the key question. What is all this good for? Well, chiefly for exploring under-explored and dangerous areas. This means tons of useful applications, for instance, a variant of this could help save humans stuck under rubble, or perhaps even explore other planets without putting humans at risk. And more. And what do you think? What would you use this for? Please let me know in the comments below.
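And to make the idea of fusing proprioception and exteroception a little more concrete, here is a minimal, hypothetical sketch in Python. The robot in the paper uses a learned recurrent policy for this; the toy below only illustrates the gist with a hand-made confidence gate: when the cameras look unreliable, the terrain estimate quietly falls back to what the legs can feel.

```python
# Hypothetical sketch of proprioception + exteroception fusion (not the paper's controller).
import numpy as np

def fuse(proprio_height, extero_height, extero_confidence):
    """Blend terrain-height estimates from the two sensing modalities.

    proprio_height    : terrain height guessed from leg contacts alone
    extero_height     : terrain height read from the depth cameras
    extero_confidence : 0.0 (cameras useless) .. 1.0 (cameras trustworthy)
    """
    w = np.clip(extero_confidence, 0.0, 1.0)
    return w * extero_height + (1.0 - w) * proprio_height

# Clear view: trust the cameras.
print(fuse(proprio_height=0.10, extero_height=0.25, extero_confidence=0.95))  # 0.2425
# Sensors covered by dust: quietly fall back to "feeling out" the terrain.
print(fuse(proprio_height=0.10, extero_height=0.80, extero_confidence=0.05))  # 0.135
```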
What you see here is a report on this exact paper we have talked about, which was made by Weights and Biases. I put a link to it in the description. Make sure to have a look. I think it helps you understand this paper better. Weights and Biases provides tools to track your experiments in your deep learning projects. Using their system, you can create beautiful reports like this one to explain your findings to your colleagues better. It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub, and more. And the best part is that Weights and Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wnb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights and Biases for their long-standing support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. You will not believe the things you will see in this episode. I promise. Earlier we saw that OpenAI's GPT-3 and Codex AI techniques can be used to solve grade school level math brain teasers, then they were improved to be able to solve university level math questions. Note that this technique required additional help to do that. And a follow-up work could even take a crack at roughly a third of Mathematical Olympiad level math problems. And now let's see what the excellent scientists at DeepMind have been up to in the meantime. Check this out. This is AlphaCode. What is that? Well, in college, computer scientists learn how to program those computers. Now DeepMind decided to instead teach the computers how to program themselves. Wow, now here you see an absolute miracle in the making. Look, here is the description of the problem and here is the solution. Well, I hear you saying, Karoy, there is no solution here, and you are indeed right, just give it a second. Yes, now hold on to your papers and marvel at how this AI is coding up the solution right now in front of our eyes. But that's nothing: we can also ask what the neural network is looking at. Check this out, it is peeping at different important parts of the problem statement and proceeds to write the solution taking these into consideration. You see that it also looks at different parts of the previous code that it had written to make sure that the new additions are consistent with those. Wow, that is absolutely amazing. I almost feel like I am watching a human solve this problem. Well, it solved this problem correctly. Unbelievable. So, how good is it? Well, it can solve about 34% of the problems in this dataset. What does that mean? Is that good or bad? Now, if you have been holding on to your paper so far, squeeze that paper, because it means that it roughly matches the expertise level of the average human competitor. Let's stop there for a moment and think about that. An AI that understands an English description mixed in with mathematical notation and codes up the solution as well as the average human competitor, at least on the tasks given in this dataset. Phew, wow. So what is this wizardry and what is the key? Let's pop the hood and have a look together. Oh yes. One of the keys here is that it generates a ton of candidate programs and is able to filter them down to just a few promising solutions. And it can do this quickly and accurately. This is huge. Why? Because this means that the AI is able to have a quick look at a computer program and tell with pretty high accuracy whether it will solve the given task or not. It has an intuition of sorts, if you will. Now, interestingly, it also uses 41 billion parameters. 41 billion is tiny compared to OpenAI's GPT-3, which has 175 billion. This means that currently AlphaCode uses a more compact neural network, and it is possible that the number of parameters can be increased here to further improve the results. If we look at DeepMind's track record of improving on these ideas, I have no doubt that however amazing these results seem now, we have really seen nothing yet. And wait, there is more. This is where I completely lost my papers. In the case that you see here, it even learned to invent algorithms. A simple one, mind you, this is DFS, a search algorithm that is taught in first-year undergrad computer science courses, but that does not matter. What matters is that this is an AI that can finally invent new things. Wow.
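To get a feel for the generate-and-filter recipe described a moment ago, here is a rough, hypothetical sketch in Python. The sample_candidate_program function below is a made-up stand-in for a large code-generating model, not AlphaCode's real interface; the rest is plain Python: keep only the candidates that pass the example tests from the problem statement, then group the survivors by their behaviour on extra inputs so that only a few representatives need to be submitted.

```python
# Hypothetical generate-filter-cluster sketch; the "model" is a random stand-in.
import random
from collections import defaultdict

def sample_candidate_program(problem_statement, seed):
    """Placeholder for a neural code model; returns Python source as a string."""
    random.seed(seed)
    op = random.choice(["+", "-", "*"])          # pretend diversity between samples
    return f"def solve(a, b):\n    return a {op} b\n"

def passes_examples(source, examples):
    scope = {}
    try:
        exec(source, scope)                      # compile and load the candidate
        return all(scope["solve"](*inp) == out for inp, out in examples)
    except Exception:
        return False

def behaviour_signature(source, probe_inputs):
    scope = {}
    exec(source, scope)
    return tuple(scope["solve"](*inp) for inp in probe_inputs)

problem  = "Read two integers and print their sum."
examples = [((1, 2), 3), ((10, -4), 6)]          # given in the problem statement
probes   = [(5, 7), (0, 0), (-3, 3)]             # extra inputs used only for clustering

candidates = [sample_candidate_program(problem, s) for s in range(1000)]
survivors  = [c for c in candidates if passes_examples(c, examples)]

clusters = defaultdict(list)
for c in survivors:
    clusters[behaviour_signature(c, probes)].append(c)

print(f"{len(candidates)} sampled -> {len(survivors)} pass the examples -> {len(clusters)} behaviour cluster(s)")
```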
A Google engineer who is also a world-class competitive programmer was asked to have a look at these solutions, and he was quite happy with the results. He said about one of the solutions that, quote, it looks like it could easily be written by a human, very impressive. Now, clearly it is not perfect, some criticisms were also voiced. For instance, sometimes it forgets about variables that remain unused. Even that is very human-like, I must say. But do not think of this paper as the end of something. No no, this is but a stepping stone towards something much greater. And I will be honest, I can't even imagine what we will have just a couple more papers down the line. What a time to be alive. So, what would you use this for? What do you think will happen a couple of papers down the line? I'd love to know. Let me know in the comments below. And if you are excited to hear about potential follow-up papers to this, make sure to subscribe and hit the bell icon. You definitely do not want to miss it when it comes. This episode has been supported by Lambda GPU Cloud. If you are looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000 and V100 instances. And hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, through the power of AI research, we are going to see how easily we can ask virtual characters to not only say what we say, but we can even become an art director and ask them to add some emotion to it. You and your men will do that. You have to go in and out very quick. Are those Eurasian footwear cowboy chaps or jolly earth-moving headgear? Probably the most important thing we need to do is to bring the country together and one of the skills that I bring to bear. So, how does this happen? Well, in goes what we say, for instance, me uttering "Dear Fellow Scholars", or anything else. And here's the key. We can also specify the emotional state of the character, and the AI does the rest. That is absolutely amazing, but it gets even more amazing. Now, hold onto your papers and look. Yes, that's right, this was possible back in 2017, approximately 400 Two Minute Papers episodes ago. And whenever I showcase results like this, I always get the question from you Fellow Scholars asking, yes, this all looks great, but when do I get to use this? And the answer is, right now. Why? Because Nvidia has released Audio2Face, a collection of AI techniques that we can use to perform this quickly and easily. Look, we can record our voice, live, and have a virtual character say what we are saying. But it doesn't stop there, it also has three amazing features. One, we can even perform a face swap, not only between humanoid characters, but my goodness, even from a humanoid to, for instance, a rhino. Now, that's something. I love it, but wait, there is more. There is this. And this too. Two, we can still specify emotions like anger, sadness, and excitement, and the virtual character will perform that for us. We only need to provide our voice, no more acting skills required. In my opinion, this will be a godsend in any kind of digital media, computer games, or even when meeting our friends in a virtual space. Three, the usability of this technique is out of this world. For instance, it does not eat a great deal of resources, so we can easily run multiple instances of it at the same time. This is a wonderful usability feature. One of many that really make or break whether a new technique gets used in the industry or not. An aspect not to be underestimated. And here is another usability feature. It works well with Unreal Engine's MetaHuman. This is a piece of software that can create virtual humans. And with that, we can not only create these virtual humans, but become the voice actors for them without having to hire a bunch of animators. How cool is that? Now, I believe this is an earlier version of MetaHuman. Here is the newer one. Wow, way better. Just imagine how cool it would be to voice these characters automatically. Now, the important lesson is that this was possible in a paper in 2017, and now, in a few years, it has vastly improved, so much so that it is now out there, deployed in a real product that we can use right now. That is a powerful democratizing force for computer animation. So, yes, the papers that you see here are real, as real as it gets, and this tech transfer can often occur in just a few years' time, in some other cases, even quicker. What a time to be alive. So, what would you use this for? I'd love to know what you think, so please let me know in the comments below.
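For a rough feel of what such a system computes, here is a purely illustrative Python sketch: a window of audio features plus an emotion label go in, per-frame blendshape weights for a toy face rig come out. None of this is NVIDIA's Audio2Face code or API, and the weights here are random stand-ins for a trained network; it only shows the shape of the mapping.

```python
# Purely illustrative audio + emotion -> blendshape mapping (random untrained weights).
import numpy as np

rng = np.random.default_rng(42)

N_AUDIO_FEATURES = 64                 # e.g., a small spectrogram window
EMOTIONS = ["neutral", "anger", "sadness", "excitement"]
N_BLENDSHAPES = 10                    # jaw open, lip pucker, brow raise, ... (toy rig)

W = rng.normal(size=(N_AUDIO_FEATURES + len(EMOTIONS), N_BLENDSHAPES)) * 0.1

def animate_frame(audio_window, emotion="neutral"):
    onehot = np.array([1.0 if e == emotion else 0.0 for e in EMOTIONS])
    features = np.concatenate([audio_window, onehot])
    raw = features @ W
    return 1.0 / (1.0 + np.exp(-raw))  # squash to [0, 1] blendshape weights

audio_window = rng.normal(size=N_AUDIO_FEATURES)   # pretend microphone input
print("neutral :", np.round(animate_frame(audio_window, "neutral"), 2))
print("excited :", np.round(animate_frame(audio_window, "excitement"), 2))
```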
This episode has been supported by Cohere AI. Cohere builds large language models and makes them available through an API, so businesses can add advanced language understanding to their system or app quickly with just one line of code. You can use your own data, whether it's text from customer service requests, legal contracts, or social media posts, to create your own custom models to understand text, or even generate it. For instance, it can be used to automatically determine whether your messages are about your business hours, returns, or shipping, or it can be used to generate a list of possible sentences you can use for your product descriptions. Make sure to go to cohere.ai slash papers, or click the link in the video description, and give it a try today. It's super easy to use. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is two-minute papers with Dr. Karojona Ifehir. Today, we are going to perform pose estimation with an amazing twist. You'll love it. But wait, what is pose estimation? Well, simple, a video of people goes in and the posture they are taking comes out. Now, you see here that previous techniques can already do this quite well even for videos. So, by today, the game has evolved. Just pose estimation is not that new. We need pose estimation plus something else. We need a little extra, if you will. So what can that little extra be? Let's look at three examples. For instance, one Nvidia has a highly advanced pose estimation technique that can refine its estimations by putting these humans into a virtual physics simulation. Without that, this kind of foot sliding often happens, but after the physics simulation, not anymore. As a result, it can understand even this explosive sprinting motion. This dynamic serve too, you name it, all of them are very close. So, what is this good for? Well, many things, but here is my favorite. If we can track the motion well, we can put it onto a virtual character so we ourselves can move around in a beautiful, imagined world. So, that was one. Pose estimation plus something extra, where the something extra is a physics simulation. Nice. Now, two, if we allow an AI to read the Wi-Fi signals bouncing around in a room, it can perform pose estimation even through walls. Kind of. Once again, pose estimation with something extra. And three, this is pose estimation with inertial sensors. This works when playing a friendly game of table tennis with a friend, or, wait, maybe a not so friendly game of table tennis. And this works really well, even in the dark. So all of these are pose estimation plus something extra. And now, let's have a look at this new paper which performs pose estimation plus, well, pose estimation as it seems. Okay, I don't see anything new here, really. So why is this work on two-minute papers? Well, now, hold on to your papers and check this out. Yes. Oh, yes. So what is this? Well, here's the twist we can give a piece of video to this AI, write a piece of text as you see up here, and it will not only find what we are looking for in the video, market, but then even track it over time. Now that is really cool. We just say what we want to be tracked over the video, and it will do it automatically. It can find the dog and the capybara. These are rather rudimentary descriptions, but it is by no means limited to that. Look, we can also say a man wearing a white shirt and blue shorts riding a surfboard. And yes, it can find it. And also we can add a description of the surfboard and it can tell which is which. And I like the tracking too. The scene has tons of high frequency changes, lots of occlusion, and it is still doing really well. Loving it. So I am thinking that this helps us take full advantage of the text descriptions. Look, we can ask it to mark the parrot and the cacotoo, and it knows which is which. So I can imagine more advanced applications where we need to find the appropriate kind of animal or object among many others, and we don't even know what to look for. Just say what you want, and it will find it. I also liked how this is done with a transformer neural network that can jointly process the text and video in one elegant solution. That is really cool. Now of course, every single one of you fellow scholars can see that this is not perfect, not even close. Depending on the input, tempero coherence issues may arise. 
These are the jumpy artifacts from frame to frame. But still, this is swift progress in machine learning research. We could only do this in 2018, and we were very happy about it. And just a couple of papers down the line, we just say what we want, and the AI will do it. And just imagine what we will have a couple more papers down the line. I cannot wait. So what would you use this for? Please let me know in the comments below. And wait, the source code, an interactive demo, and a notebook are available in the video description. So, you know what to do? Yes, let the experiments begin. This video has been supported by weights and biases. Check out the recent offering fully connected, a place where they bring machine learning practitioners together to share and discuss their ideas, learn from industry leaders, and even collaborate on projects together. You see, I get messages from you fellow scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is. Fully connected is a great way to learn about the fundamentals, how to reproduce experiments, get your papers accepted to a conference, and more. Make sure to visit them through wnb.me slash papers, or just click the link in the video description. Our thanks to weights and biases for their longstanding support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zona Efei. Today, we are going to torment a virtual armadillo, become happy, or sad, depending on which way we are bending, create a ton of jelly sandwiches, design a crazy bridge, and more. So, what is going on here? Well, this new paper helps us enhance our physics simulation programs by improving how we evaluate derivatives. Derivatives describe how things change. Now, of course, evaluating derivatives is not new. It's not even old. It is ancient. Here you see an ancient technique for this, and it works well most of the time, but... Whoa! Well, this simulation blew up in our face, so, yes, it may work well most of the time, but, unfortunately, not here. Now, if you're wondering what should be happening in this scene, here is the reference simulation that showcases what should have happened. Yes, this is the famous pastime of the computer graphics researcher tormenting virtual objects basically all day long. So, I hope you're enjoying this reference simulation, because this is a great reference simulation. It is as reference as a simulation can get. Now, I hope you know what's coming. Hold on to your papers, because this is not the reference simulation. What you see here is the new technique described in this paper. Absolutely amazing. Actually, let's compare the two. This is the reference simulation for real this time. And this is the new complex step finite difference method. The two are so close, they are essentially the same. I love it. So good. Now, if this comparison made you hungry, of course, we can proceed to the jelly sandwiches. Here is the same scene simulated with a bunch of previous techniques. And, my goodness, all of them look different. So, which of these jelly sandwiches is better? Well, the new technique is better, because this is the only one that preserves volume properly. This is the one that gets us the most jelly. With each of the other methods, the jelly either reacts incorrectly or at the end of the simulation, there is less jelly than the amount we started with. Now, you're probably asking, is this really possible? And the answer is yes, yes it is. What's more, this is not only possible, but this is a widespread problem in physics simulation. Our seasoned fellow scholars had seen this problem in many previous episodes. For instance, here is one with the tragedy of the disappearing bunnies. And preserving the volume of the simulated materials is not only useful for jelly sandwiches. It is also useful for doing extreme yoga. Look, here are a bunch of previous techniques trying to simulate this. And, what do we see? Extreme bending, extreme bending, and even more extreme bending. Good, I guess. Well, not quite. This yoga shouldn't be nearly as extreme as we see here. The new technique reveals that this kind of bending shouldn't happen given these material properties. And wait, here comes one of my favorites. The new technique can also deal with this crazy example. Look, a nice little virtual hyperelastic material where the bending energy changes depending on the bending orientation, revealing a secret. Or two secrets, as you see, it does not like bending to the right so much, but bending to the left, now we're talking. And it can also help us perform inverse design problems. For instance, here we have a hyperelastic bridge built from over 20,000 parts. And here we can design what vibration frequencies should be present when the wind blows at our bridge. 
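Before the coolest part, a quick look at the complex-step trick itself, because it is surprisingly simple. Here is a tiny, generic numerical example in Python, not the paper's elasticity code: for a smooth real function, evaluating it at the complex point x + ih gives the derivative from the imaginary part, with no subtraction and therefore none of the cancellation error that plagues classic finite differences.

```python
# Complex-step derivative: f'(x) ~= Im(f(x + i*h)) / h, with no cancellation error.
import numpy as np

def f(x):
    return np.exp(x) * np.sin(x)                 # any smooth test function

def exact_derivative(x):
    return np.exp(x) * (np.sin(x) + np.cos(x))

x = 1.3
h = 1e-20                                        # can be absurdly small: nothing is subtracted

complex_step = np.imag(f(x + 1j * h)) / h
forward_diff = (f(x + 1e-8) - f(x)) / 1e-8       # classic finite difference, for comparison

print("exact        :", exact_derivative(x))
print("complex step :", complex_step, "   error:", abs(complex_step - exact_derivative(x)))
print("forward diff :", forward_diff, "   error:", abs(forward_diff - exact_derivative(x)))
```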
And here comes the coolest part: we can choose these vibration frequencies in advance. And then the new technique quickly finds the suitable geometry that will showcase the prescribed vibration types. And it pretty much converges after 4 to 6 Newton iterations. What does that mean? Yes, it means that the technique comes up with an initial guess and needs to refine it only 4 to 6 times until it comes up with an excellent solution. So, better hyperelastic simulations, quickly and conveniently? Yes, please, sign me up right now. And what would you use this for? Let me know in the comments below. Perceptilabs is a visual API for TensorFlow carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. And it even generates visualizations for all the model variables and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers and start using their system for free today. Our thanks to Perceptilabs for their support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is two-minute papers with Dr. Karojona Ifehir. Today we are going to have a little taste of how smart an AI can be these days. And it turns out these new AI's are not only smart enough to solve some grade school math problems, but get this, a new development can perhaps even take a crack at university level problems. Is that even possible or is this science fiction? Well, the answer is yes, it is possible, kind of. So why kind of? Well, let me try to explain. This is opening as work from October 2021. The goal is to have their AI understand these questions, understand the mathematics, and reason about a possible solution for grade school problems. Hmm, all right. So this means that the GPT-3 AI might be suitable for the substrate of the solution. What is that? GPT-3 is a technique that can understand text, try to finish your sentences, even build websites, and more. So can it even deal with these test questions? Let's see it together. Hold on to your papers because in goes a grade school level question, a little math brain teaser if you will, and outcomes. My goodness, is that right? Here, outcomes not only the correct solution to the question, but even the thought process that led to this solution. Imagine someone claiming that they had developed an AI discapable 10 years ago. This person would have been logged into an asylum. And now it is all there, right in front of our eyes. Absolutely amazing. Okay, but how amazing. Well, it can't get everything right all the time and not even close. If we do everything right, we can expect it to be correct about 35% of the time. Not perfect, not even close, but it is an amazing step forward. So what is the key here? Well, yes, you guys did right. The usual suspects. A big neural network, and lots of training data. The key numbers are 175 billion model parameters, and it needs to read a few thousand problems and their solutions as training samples. That is a big rocket, and lots of rocket fuel, if you will. But this is nothing compared to what is to come. Now, believe it or not, here is a follow-up paper from just a few months later, January 2022, that claims to do something even better. This is not from OpenAI, but it piggybacks on OpenAI technology, as you will see in a moment. And this work promises that it can solve university-level problems. And when I saw this in the paper, I thought, really, now great school materials, okay, that is a great leap forward, but solving university-level math exams, that's where the gloves come off. I am really curious to see what this can do. Let's have a look together. Some of these brain teasers smell very much like MIT to me, surprisingly short and elegant questions that often seem much easier than they are. However, all of these require a solid understanding of fundamentals and sometimes even a hint of creativity. Let's see, yes, that is, indeed right, these are MIT introductory course questions. I love it. So can it answer them? Now, if we have been holding on to your paper so far, squeeze that paper, and let's see the results together. My goodness, these are all correct. Flying colors, perfect accuracy, at least on these questions. This is swift progress in just a few months. Absolutely amazing. So how is this black magic done? Yes, I know that's what you're waiting for, so let's pop the hood and look inside together. Mm-hmm. All right, two key differences from OpenAI's GPT-3-based solution. Difference number one. It gets additional guidance. 
For instance, it is told what topic we are talking about, what code library to reach out for, and what is the definition of mathematical concepts, for instance, what is a singular value decomposition. I would argue that this is not too bad. Students typically get taught these things before the exam, too. In my opinion, the key is that this additional guidance is done in an automated manner. The more automated, the better. Difference number two. The substrate here is not GPT-3, at least not directly, but codex. Codex is OpenAI's GPT language model that was fine-tuned to be excellent at one thing. And that is writing computer programs or finishing your code. And as we've seen in a previous episode, it really is excellent. For instance, it can not only be asked to explain a piece of code, even if it is written in assembly, or create a pong game in 30 seconds, but we can also give it plain text descriptions about a space game and it will write it. Codex is super powerful, and now it can be used to solve previously unseen university level math problems. Now that is really something. And it can even generate a bunch of new questions, and these are bona fide real questions. Not just exchanging the numbers, the new questions often require completely different insights to solve these problems. A little creativity I see well done little AI. So how good are these? Well, according to human evaluators, they are almost as good as the ones written by other humans. And thus, these can even be used to provide more and more training data for such an AI, more fuel for that rocket, and good kind of fuel. Excellent. And it doesn't end there. In the meantime, as of February 2022, scientists at OpenAI are already working on a follow-up paper that solves no less than high school mathematical-olimpiate problems. These problems require a solid understanding of fundamentals, proper reasoning, and often even that is not enough. Many of these tasks put up a seemingly impenetrable wall and climbing the wall typically requires a real creative spark. Yes, this means that this can get quite tough. And their new method is doing really well at these, once again, not perfect, not even close. But it can solve about 30 to 40% of these tasks, and that is a remarkable hit rate. Now we see that all of these works are amazing, and they all have their own trade-offs. They are good and bad at different things, and have different requirements. And most of all, they all have their own limitations. Thus, none of these works should be thought of as an AI that just automatically does human level math. What we see now is that there is swift progress in this area, and amazing new papers are popping up, not every year, but pretty much every month. And this is an excellent place to apply the first law of papers, which says that research is a process. Do not look at where we are, look at where we will be, two more papers down the line. So, what would you use this for? Please let me know in the comments below. I'd love to hear your ideas. And also, if you are excited by this kind of incredible progress in AR research, make sure to subscribe and hit the bell icon to not miss it when we cover these amazing new papers. This video has been supported by weights and biases. Look at this, they have a great community forum that aims to make you the best machine learning engineer you can be. You see, I always get messages from you fellow scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is. 
In this forum, you can share your projects, ask for advice, look for collaborators, and more. Make sure to visit WNB.ME slash paper forum and say hi or just click the link in the video description. Our thanks to weights and biases for their long standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zorna Ifeher. After reading a physics textbook on the laws of fluid motion, with a little effort, we can make a virtual world come alive by writing a computer program that contains these laws resulting in beautiful fluid simulations like the one you see here. The amount of detail we can simulate with these programs is increasing every year, not only due to the fact that computer hardware improves over time, but also the pace of progress in computer graphics research is truly remarkable. And this new paper promises detailed spiral spectral, fluid, and smoke simulations. What does that mean? It means that the simulations can be run inside a torus, spheres, cylinders, you name it. But wait, is that really new when using traditional simulation techniques we can just enclose the smoke in all kinds of domain shapes where the simulation will take place? People have done this for decades now. Here is an example of that. So what is new here? Well, let's have a look at some results and hopefully find out together. Let's start with the details first. This is the new technique. Hmm, I like this one. This is a detailed simulation. Sure, I'll give it that. But we can already create detailed simulations with traditional techniques. So once again, what is new here? You know what? Actually, let's compare it to a traditional smoke simulation technique and give it the same amount of time to run and see what that looks like. Wow, that is a huge difference. And yes, believe it or not, the two simulations run in the same amount of time. So, yes, it creates detailed simulations. Checkmark. And it has not only the details, but it has other virtues too. Now, let's bring up the heat some more. This is a comparison to not an older classical technique, but a spherical spectral technique from 2019. Let's see how the new method fares against it. Well, they both look good. So maybe this new method is not so much better after... Wait a second. Ouch. The previous one blew up. And the new one, yes. This still keeps going. Such improvement in just about two years. So, it is not only fares, but it is robust too. That is super important for real world use. Details and robustness. Checkmark. Now, let's continue with the shape of the simulation domain. Yes, we can enclose the simulation within this domain where the spherical domain itself can be imagined as an impenetrable wall, but it doesn't have to be that way. Look, we can even open it up. Very good. OK, so it is fast. It is robust. It supports crazy simulation domain shapes and even better, it looks detailed. But, are these the right details? Is this just pleasing for the eye or is this really how smoke should behave? The authors tested that tool and now hold on to your papers and look. I could add the labels here, but does it really matter? The tool look almost exactly the same. Almost pixel perfect. By the way, here you go. So, wow, the list of positives just keeps on growing. But, we are experienced fellow scholars here, so let's continue interrogating this method. Does it work for different viscosities? At the risk of simplifying what is going on, the viscosity of a puff of smoke relates to how nimble the simulation is. And it can handle a variety of these physical parameters too. OK, next. Can it interact with other objects too? I ask because some techniques look great in a simple, empty simulation domain, but break down when placed into a real scene where a lot of other objects are moving around. 
Well, not this new one. That is a beautiful simulation. I love it. So, I am getting more and more convinced with each test. So, where is the catch? What is the price to be paid for all this? Let's have a look. For the quality of the simulations we get, it runs in a few seconds per frame. And it doesn't even need your graphics card, it runs on your processor. And even then this is blazing fast. And implementing this on the graphics card could very well put this into the real time domain. And boy, getting these beautiful smoke puffs in real time would be an amazing treat. So, once again, what is the price to be paid for this? Well, have a look. Aha, there it is. That is a steep price. Look, this needs tons of memory. Tens of gigabytes. No wonder this was run on the processor. This is because modern computer graphics card don't have nearly as much memory on board. So, what do we do? Well, don't despair not even for a second. We still have good news. And the good news is that there are earlier research works that explore compressing these data sets down. And it turns out their size can be decreased dramatically. A perfect direction for the next paper down the line. And what do you think? Let me know in the comments below what you would use this for. And just one or two more papers down the line. And maybe we will get these beautiful simulations in our virtual worlds in real time. I can't wait. I really cannot wait. What a time to be alive. Wets and biases provides tools to track your experiments in your deep learning projects using their system. You can create beautiful reports like this one to explain your findings to your colleagues better. It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub and more. And the best part is that Wets and Biases is free for all individuals, academics and open source projects. Make sure to visit them through WNB.com slash papers or just click the link in the video description. And you can get a free demo today. Our thanks to Wets and Biases for their long standing support and for helping us make better videos for you. Thanks for watching and for your generous support. And I'll see you next time.
|
Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir. Today, we are going to learn why you should not even try to handcuff a computer graphics researcher. And it's not because they are criminal masterminds. No, no. So, what is it then? Well, if they have read this paper, handcuffing them will not work. See, they will get out easily. So, what is all this insanity here? Well, this paper is about simulating repulsion. Or to be more exact, computing repulsive forces on curves and surfaces. Now, of course, your first question is okay, but what is that good for? Well, believe it or not, it doesn't sound like it, but it is good for so many things, the applications just keep coming and coming. So, how does this work? Well, repulsion can prevent intersections and collisions. And look at that. If we run this process over time, it creates flows. In other words, it starts mingling with the geometry in interesting, and it turns out also useful ways. For instance, imagine that we have this tangled handcuff. And someone comes up to us and says they can untangle this object by molding. And not only that, but this person gets even more brazen. They say that they even do it gently. Or, in other words, without any intersections. No colliding or breaking is allowed. Well, then I say I don't believe a word of it, show me. And the person does exactly that. Using the repulsive force algorithm, we can find a way to untangle them. This seems like black magic. Also, with some adjustments to this process, and running it backwards in time, we can even start with a piece of geometry and compute an ideal shrink wrap for it. Even better when applied to point clouds, we can even create a full 3D geometry that represents these points. And get this. It works even if the point clouds are incomplete. Now, make no mistake, the topic of point cloud reconstruction has been studied for decades now. So much so that during my PhD years, I attended to well over 100 paper talks on this topic. And I am by no means an expert on this, not even close, but I can tell you this looks like a pretty good solution. And these applications emerge only as a tiny side effect of this algorithm. But it's not over, there is more. It can even fix bad mesh geometries, something that artists encounter in the world all the time. Loving it. And it can also create cool and crazy surfaces for your art installations. Now, so far we have applied repulsion to surfaces. But this concept can also be applied to curves as well, which opens up a whole new world of really cool applications. Let's start with the most important one. Do you know what happens when you put your wired earbuds into your pocket? Yes, of course. Exactly, this happens every time, right? And now, hold on to your papers, because it turns out it can even untangle your headphones without breaking them. Now, wait a second, if it can generate curves that don't intersect, maybe it could also be used for path planning for characters in a virtual world. Look, these folks intersect. But if they use this new technique for path planning, there should be no intersections. And yes, no intersections, thereby no collisions. Excellent. And if we feel like it, we can also start with a small piece of noodle inside a bunny and start growing it. Why not? And then we can marvel at how over time it starts to look like in testings. And it just still keeps growing and growing without touching the bunny. So cool. Now, onto more useful things. 
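But first, to see the core idea in its simplest possible form, here is a toy Python sketch: points sampled along a closed curve push each other apart with a crude inverse-square repulsion, while short springs keep neighbouring samples connected. This is not the tangent-point energy or the fast solver from the paper, just enough to watch a messy polyline relax without tearing apart.

```python
# Toy repulsive-curve flow: inverse-square repulsion + neighbour springs on a closed polyline.
import numpy as np

rng = np.random.default_rng(1)
n = 60
pts = rng.normal(scale=0.3, size=(n, 2))            # a messy closed polyline in 2D

def step(pts, repulsion=1e-3, spring=0.5, dt=0.1):
    forces = np.zeros_like(pts)
    for i in range(n):                               # pairwise repulsion between all samples
        d = pts[i] - pts                             # vectors pointing from every point to i
        dist2 = (d ** 2).sum(axis=1) + 1e-4          # softened squared distances
        dist2[i] = np.inf                            # no self-interaction
        forces[i] += (repulsion * d / dist2[:, None] ** 1.5).sum(axis=0)
    nxt = np.roll(pts, -1, axis=0)                   # springs between consecutive samples
    prv = np.roll(pts, 1, axis=0)
    forces += spring * (nxt + prv - 2 * pts)
    return pts + dt * forces

for _ in range(200):
    pts = step(pts)

mean_radius = np.linalg.norm(pts - pts.mean(axis=0), axis=1).mean()
mean_edge = np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1).mean()
print("curve spread out to mean radius", round(float(mean_radius), 3),
      "while neighbouring samples stay connected, mean edge length", round(float(mean_edge), 3))
```

And now, onto those more useful things.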
It can even be used to visualize social media connections and family trees in a compact manner. Or it can move muscle fibers around and the list just goes on. But you are experienced fellow scholars over here. So I hear you asking, well, wait a second, all these sounds quite trivial. Just apply repulsive forces to the entirety of the surfaces. And off we go. Why does this have to be a paper published at a prestigious conference? Well, actually, let's try this. If we do that, oh oh, this happens. Or this happens. The algorithm is not so trivial after all. And this is what I love about this paper. It proposes a simple idea. And this idea is simple, but not easy. If we think it is easy, this happens. But if we do it well, this happens. The paper describes in detail how to perform this so it works properly and provides a ton of immediate things it can be used to. This is not a product. This is an idea and a list of potential applications for it. In my opinion, this is basic academic research at its best. Bravo. I love it. And what would you use this for? Please let me know in the comments below. I'd love to hear your ideas. And until then, we will have no more tangled earbuds in our pockets. And we can thank Computer Graphics Research for that. What a time to be alive. This video has been supported by weights and biases. And being a machine learning researcher means doing tons of experiments and of course creating tons of data. But I am not looking for data. I am looking for insights. And weights and biases helps with exactly that. They have tools for experiment tracking, data set and model versioning, and even hyper parameter optimization. No wonder this is the experiment tracking tool choice of open AI Toyota Research, Samsung, and many more prestigious labs. Make sure to use the link WNB.ME-slash-paper-paste. Or just click the link in the video description. And try this 10 minute example of weights and biases today to experience the wonderful feeling of training a neural network and being in control of your experiments. After you try it, you won't want to go back. Thanks for watching and for your generous support. And I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to do this and this and this. One of these applications is called a Nerf. What is that? Nerfs mean that we have a collection of photos like these and magically create a video where we can fly through these photos. Yes, typically scientists now use some sort of learning based AI method to fill in all this information between these photos. This is something that sounded like science fiction just a couple years ago and now here we are. Now these are mostly learning based methods, therefore these techniques need some training time. Wanna see how their results evolve over time? I surely do, so let's have a look together. This Nerf paper was published about a year and a half or two years ago. We typically have to wait for at least a few hours for something to happen. Then came the Planoxos paper with something that looks like black magic. Yes, that's right, these trains in a matter of minutes. And it was published just two months ago. Such improvement in just two years. But here is Nvidia's new paper from about a month ago. And yes, I hear you asking, Karoi, are you telling me that a two month old paper of this caliber is going to be outperformed by a one month old paper? Yes, that is exactly what I'm saying. Now hold on to your papers and look here, with the new method the training takes. What? Last time then I have to add this sentence because it is already done. So first we wait from hours to days. Then two years later it trains in minutes and a month later just a month later it trains in a couple seconds. Basically, nearly instantly. And if we let it run for a bit longer but still less than two minutes it will not only outperform a naive technique but will even provide better quality results than a previous method while training for about ten times quicker. That is absolutely incredible. I would say that this is swift progress in machine learning research but that word will not cut it here. This is truly something else. But if that wasn't enough, nothing is not the only thing this one can do. It can also approximate a gigapixel image. What is that? That is an image with tons of data in it and the AI is asked to create a cheaper neural representation of this image. And we can just keep zooming in and zooming in and we still find no details there. And if you have been holding on to your paper so far, now squeeze that paper because what you see here is not the result. But the whole training process itself. Really? Yes, really. Did you see it? Well, did you blink? Because if you did, you almost certainly missed it. This was also trained from scratch right in front of our eyes. But it's so quick that if you take just a moment to hold onto your papers a bit more tightly and you already missed it. Once again, a couple of papers before this took several hours at the very least. That is outstanding. And if we were done here, I would already be very happy, but we are not done yet, not even close. It can still do two more amazing things. One, this is a neural sign distance field it has produced. That is a mapping from 3D coordinates in a virtual world to distance to a surface. Essentially, it learns the geometry of the object better because it knows what parts are inside and outside. And it is blazing fast, surprisingly even for objects with detailed geometry. And my favorite, it can also do neural radiance caching. What is that? 
At the risk of simplifying the problem, essentially, it is learning to perform a light transport simulation. It took me several years of research to be able to produce such a light simulation. So let's see how long it takes for the AI to learn to do this. Well, let's see. Holy matter of papers and video, what are you doing? I give up. As you see, the pace of progress in AI and computer graphics research is absolutely incredible and even better, it is accelerating over time. Things that were wishful thinking 10 years ago became not only possible, but are now easy over the span of just a couple of papers. I am stunned. What a time to be alive. This episode has been supported by Lambda GPU Cloud. If you are looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000 and V100 instances and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
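For the curious, here is a rough idea of why these neural graphics primitives train so fast: a 3D point is looked up, at several grid resolutions, in small hash tables of learned feature vectors, and a tiny network runs on top of the concatenated features. Below is a simplified, hypothetical Python sketch of just that lookup, with random untrained tables and no interpolation; it is not NVIDIA's CUDA implementation.

```python
# Simplified multiresolution hash-grid feature lookup (toy version, untrained).
import numpy as np

rng = np.random.default_rng(3)

LEVELS = [16, 64, 256]            # grid resolutions, coarse to fine
TABLE_SIZE = 2 ** 14              # entries per hash table
FEATURE_DIM = 2

tables = [rng.normal(scale=1e-2, size=(TABLE_SIZE, FEATURE_DIM)) for _ in LEVELS]
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def hash_cell(cell):
    """Spatial hash of an integer 3D cell index into a table slot."""
    h = np.bitwise_xor.reduce(cell.astype(np.uint64) * PRIMES)   # wraps on overflow on purpose
    return int(h) % TABLE_SIZE

def encode(point):
    """point in [0, 1]^3 -> concatenated per-level feature vector."""
    feats = []
    for res, table in zip(LEVELS, tables):
        cell = np.floor(point * res).astype(np.int64)
        feats.append(table[hash_cell(cell)])
    return np.concatenate(feats)

x = np.array([0.31, 0.72, 0.05])
features = encode(x)
print("encoded feature vector of length", features.shape[0])     # len(LEVELS) * FEATURE_DIM
# In the full method, these tables and a small MLP on top are trained jointly,
# which is a big part of why optimization takes seconds rather than hours.
```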
|
Dear Fellow Scholars, this is two minute papers with Dr. Karo Zsolnai-Fehir. Today we are going to look at Nvidia's spectacular new AI that can generate beautiful images for us. But this is not image generation of any kind. No, no, this is different. Let's see how it is different. For instance, we can write a description and the appropriate image comes out. Snowy mountains, pink, cloudy sky, checkmark. Okay, so we can give it our direction. Make no mistake, this is fantastic, but this has been done before, so nothing new here. Yet, or we can create a segmentation map, this tells the AI what things are. The sea is down there, mountains and sky up here. Looking great, but this has been done before too. For instance, Nvidia's previous Gauguin paper could do this too. Nothing new here. Yet, or we can tell the AI where things are by sketching. This one also works, but this has been done too. So, is there nothing new in this paper? Well, of course there is. And now, hold onto your papers and watch as we fuse all of these descriptions together. We tell it where things are and what things are. But I wonder if we can make this mountain snowy and a pink cloudy sky on top of all things. Yes, we can. Oh wow, I love it. The sea could have a little more detail, but the rest of the image is spectacular. So, with this new technique, we can tell the AI where things are, what things are, and on top of it, we can also give it our direction. And here's the key, all of these can be done in any combination. Now, did you see it? Curiously, an image popped up at the start of the video when we unchecked all the boxes. Why is that? Is that a bug? I'll tell you in a bit what that is. So far, these were great examples, but let's try to push this to its limits and see what it can do. For instance, how quickly can we iterate with this? How quick is it to correct mistakes or improve the work? Oh boy, super quick. When giving our direction to the AI, we can update the text and the output refreshes almost as quickly as we can type. The sketch to image feature is also a great tool by itself. Of course, there is not only one way. There are many pictures that this could describe. So, how do we control what we get? Well, it can even generate variants for us. With this new work, we can even draw a piece of rock within the sea and the rock will indeed appear. And not only that, but it understands that the waves have to go around it too. An understanding of physics, that is insanity. My goodness. Or better, if we know in advance that we are looking for tall trees and autumn leaves, we can even start with the art direction. And then, when we add our labels, they will be satisfied. We can have our river, okay, but the trees and the leaves will always be there. Finally, we can sketch on top of this to have additional control over the hills and clouds. And get this, we can even edit real images. So, how does this black magic work? Well, we have four neural networks, four experts, if you will. And the new technique describes how to fuse their expertise together into one amazing package. And the result is a technique that outperforms previous techniques in most of the tested cases. So, are these some ancient methods from many years ago that are outperformed or are they cutting edge? And here comes the best part. If you have been holding onto your paper so far, now squeeze that paper because these techniques are not some ancient methods, not at all. Both of these methods are from the same year as this technique, the same year. 
Such improvements in last year. That is outstanding. What a time to be alive. Now, I made a promise to you early in the video, and the promise was explaining this. Yes, the technique generates results even when not given a lot of instruction. Yes, this was the early example when we unchecked all the boxes. What quality images can we expect then? Well, here are some uncurated examples. This means that the authors did not cherry pick here, just dumped a bunch of results here. And, oh my goodness, these are really good. The details are really there. The resolution of the images could be improved, but we saw with the previous Gaugan paper that this can improve a great deal in just a couple years. Or with this paper in less than a year. I would like to send a huge congratulations to the scientists at NVIDIA Bravo. And you fellow scholars just saw how much of an improvement we can get just one more paper down the line. So, just imagine what the next paper could bring. If you are curious, make sure to subscribe and hit the bell icon to not miss it when the follow-up paper appears on two-minute papers. What you see here is a report of this exact paper we have talked about, which was made by Wades and Biasis. I put a link to it in the description. Make sure to have a look. I think it helps you understand this paper better. Wades and Biasis provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that Wades and Biasis is free for all individuals, academics, and open source projects. It really is as good as it gets. Make sure to visit them through wnb.com slash papers or just click the link in the video description and you can get the free demo today. Our thanks to Wades and Biasis for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is two-minute papers with Dr. Karojona Ifehir. Today we are going to create a virtual world with breathtakingly detailed pieces of geometry. These are just some of the results of the new technique and boy, are they amazing? I wonder how expensive this will get? We'll see about that in a moment. Now here we wish to create a virtual world and we want this world to be convincing. Therefore we will need tons of high resolution geometry like the ones you see from these previous episodes. But all these details in the geometry means tons of data that has to be stored somewhere. Traditional techniques allow us to crank up the detail in the geometry, but there is a price to be paid for it. And the price is that we need to throw more memory and computation at the problem. The more we do, the higher the resolution of the geometry that we get. However, here comes the problem. Ouch! Look! At higher resolution levels, yes! The price to be paid is getting a little steep. Hundreds of megabytes of memory is quite steep for just one object, but wait, it gets worse. Oh, come on! Gigabytes, that is a little too much. Imagine a scene with hundreds of these objects laying around. No graphics card has enough memory to do that. But this new technique helps us add small bumps and ridges to these objects much more efficiently than previous techniques. This opens up the possibility for creating breathtakingly detailed digital objects where even the tiniest imperfections on our armor can be seen. Okay, that is wonderful. But let's compare how previous techniques can deal with this problem and see if this new one is any better. Here is a traditional technique at its best when using two gigabytes of memory. That is quite expensive. And here is the new technique. Hmm, look at that. This looks even better. Surely this means that it needs even more memory. How many gigabytes does this need? Or, if not gigabytes, how many hundreds of megabytes? What do you think? Please let me know in the comments section below. I'll wait. Thank you. I am super excited to see your guesses. Now, hold on to your papers because actually it's not gigabytes. Not even hundreds of megabytes. No, no. It is 34 megabytes. That is a pittance for a piece of geometry of this quality. This is insanity. But wait, it gets better. We can even dramatically change the displacements on our models. Here is the workflow. We plug this geometry into a light simulation program. And we can change the geometry, the lighting, material properties, or even clone our object many, many times. And it still runs interactively. Now, what about the noise in these images? This is a light simulation technique called past tracing. And the noise that you see here slowly clears up over time as we simulate the path of many more millions of light rays. If we let it run for long enough, we end up with a nearly perfect piece of geometry. And I wonder how the traditional technique is able to perform if it is also given the same memory allowance. Well, let's see. Wow, there is no contest here. Even for other cases, it is not uncommon that the new technique can create an equivalent or even better geometry quality, but use 50 times less memory to do that. So we might get more detailed virtual worlds for cheaper. Sign me up right now. What a time to be alive. And one more thing. When checking out the talk for the paper, I saw this. Wow! 20 views. 20 people have seen this talk. 
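We will get back to those 20 views in a moment. First, a back-of-the-envelope Python sketch, with illustrative numbers of my own rather than the paper's, of why naive high-resolution displacement data gets so expensive so quickly: storage grows with the square of the texture resolution.

```python
# Illustrative arithmetic: memory cost of storing one displacement value per texel.
BYTES_PER_TEXEL = 4                      # one 32-bit displacement value

for resolution in [1024, 4096, 16384]:
    texels = resolution * resolution
    megabytes = texels * BYTES_PER_TEXEL / (1024 ** 2)
    print(f"{resolution:>6} x {resolution:<6} displacement map is about {megabytes:6.0f} MB")

# Prints roughly:
#   1024 x 1024   displacement map is about      4 MB
#   4096 x 4096   displacement map is about     64 MB
#  16384 x 16384  displacement map is about   1024 MB
```

So a single object can already approach a gigabyte at high resolutions, which is why a representation that delivers comparable detail in tens of megabytes is such a big deal.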
Now, I always say that views are of course not everything, but I am worried that if we don't talk about it here on Two Minute Papers, almost no one will talk about it. And these works are so good. People have to know. And if you agree, please spread the word on these papers and show them to people you know. This episode has been supported by Cohere AI. Cohere builds large language models and makes them available through an API, so businesses can add advanced language understanding to their system or app quickly with just one line of code. You can use your own data, whether it's text from customer service requests, legal contracts, or social media posts, to create your own custom models to understand text or even generate it. For instance, it can be used to automatically determine whether your messages are about your business hours, returns, or shipping. Or it can be used to generate a list of possible sentences you can use for your product descriptions. Make sure to go to cohere.ai slash papers or click the link in the video description and give it a try today. It's super easy to use. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today, we are going to play with a magical AI where we just sit in our armchair, say the words, and it draws an image for us. Almost anything we can imagine. Almost. And before you ask, yes, this includes drawing corgis too. In the last few years, OpenAI set out to train an AI named GPT-3 that could finish your sentences. Then, they made Image GPT. This could even finish your images. Yes, really. It could identify that the cat here likely holds a piece of paper and finish the picture accordingly, and it even understood that if we have a droplet here and we see just a portion of the ripples, then this means a splash must be filled in. And it gets better: then they invented an AI they call DALL·E. This one is insanity. We just tell the AI what image we would like to see and it will draw it. Look, it can create a custom storefront for us. It understands the concept of low-polygon rendering, isometric views, clay objects, and more. And that's not all. It could even invent clocks with new shapes when asked. The crazy thing here is that it understands geometry, shapes, and even materials. For instance, look at this white clock here on the blue table. It not only put it on the table, but it also made sure to generate appropriate glossy reflections that match the color of the clock. And get this, DALL·E was published just about a year ago and OpenAI already has a follow-up paper that they call GLIDE. And believe it or not, this can do more and it can do it better. Well, I will believe it when I see it, so let's go. Now, hold on to your papers and let's start with a hedgehog using a calculator. Wow, that looks incredible. It's not just a hedgehog plus a calculator; it really is using the calculator. Now paint a fox in the style of the Starry Night painting. I love the style, and even the framing of the picture is quite good. There is even some space left to make sure that we see that starry night. Great decision-making. Now a corgi with a red bow tie and a purple party hat. Excellent. And a pixel art corgi with a pizza. These are really good, but they are nothing compared to what is to come, because it can also perform conditional inpainting with text. Yes, I am not kidding, have a look at this little girl hugging a dog. But there is a problem with this. Do you know what the problem is? Of course, the problem is that this is not a corgi. Now it is. That is another great result. And if we wish that some zebras were added here, that's possible too. And we can also add a vase here. Look at that. It even understood that this is a glass table and added its own reflection. Now, I am a light transport researcher by trade, and this makes me very, very happy. However, it is also true that it seems to have changed the material properties of the table. It is now much more diffuse than it was before. Perhaps this is the AI's understanding of a new object blocking reflections. It's not perfect by any means, but it is a solid step forward. We can also give this gentleman a white hat. And as I look through these results, I find it absolutely amazing how well the hat blends into the scene. That is very challenging. Why? Well, in light transport research, we need to simulate the path of millions and millions of light rays to make sure that indirect illumination appears in a scene. For instance, look here.
This is one of our previous papers that showcases how fluids of different colors paint their diffuse surroundings their own color. I find it absolutely beautiful. Now, let's switch the fluid to a different one. And yes, you see the difference. The link to this work is available in the video description below. And you see, simulating these effects is very costly and very difficult. But this is how proper light transport simulations need to be done. And this GLIDE AI can put new objects into a scene and make them blend in so well that this, to me, also seems like a proper understanding of light transport. I can hardly believe what is going on here. Bravo. And wait, how do we know if this is really better than DALL·E? Are we supposed to just believe it? No. Not at all. Fortunately, comparing the results against DALL·E is very easy. Look, we just use the same prompts and see that there is no contest. The new GLIDE technique creates sharper, higher-resolution images with more detail, and it even follows our instructions better. The paper also showcases a user study where human evaluators also favored the new technique. Now, of course, we are not done here, not even this technique is perfect. Look, we can request a cat with eight legs and, wait a minute, it tried some multiplication trick, but we are not falling for it. A plus for effort, little AI, but of course, this is clearly one of the failure cases. And once again, this is a new AI where a vast body of knowledge lies within, but it only emerges if we can bring it out with properly written prompts. It almost feels like a new kind of programming that is open to everyone, even people without any programming or technical knowledge. If a computer is a bicycle for the mind, then OpenAI's GLIDE is a fighter jet. Absolutely incredible. Soon, this might democratize creating paintings and maybe even help inventing new things. And here comes the best part. You can try it too. The notebook for it is available in the video description. Make sure to leave your experiment results in the comments or just tweet them at me. I'd love to see what you ingenious fellow scholars bring out of this wonderful AI. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000 and V100 instances. And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
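For the technically curious, GLIDE is a text-conditional diffusion model, and a key ingredient of its sampling is classifier-free guidance: the denoiser is evaluated once with the text prompt and once without it, and the difference is amplified. The sketch below, in PyTorch, shows only that guidance step; the `denoiser` here is a hypothetical stand-in, not OpenAI's released model or API.

```python
import torch

def guided_noise_prediction(denoiser, x_t, t, text_embedding, guidance_scale=3.0):
    """Classifier-free guidance: push the prediction away from the unconditional
    estimate and toward the text-conditional one. `denoiser` is a hypothetical
    network that predicts the noise added at timestep t."""
    eps_uncond = denoiser(x_t, t, cond=None)            # ignore the prompt
    eps_cond = denoiser(x_t, t, cond=text_embedding)    # use the prompt
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy usage with a fake "denoiser" so the file runs on its own.
fake_denoiser = lambda x, t, cond: torch.zeros_like(x) if cond is None else 0.1 * x
x_t = torch.randn(1, 3, 64, 64)  # a noisy 64x64 RGB image, as in GLIDE's base model
eps = guided_noise_prediction(fake_denoiser, x_t, t=50, text_embedding=torch.randn(1, 512))
print(eps.shape)  # torch.Size([1, 3, 64, 64])
```

Larger guidance scales tend to follow the prompt more closely at the cost of sample diversity, which lines up with the "follows our instructions better" observation above.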
|
Dear Fellow Scholars, this is two-minute papers with Dr. Kato Ijona Ifehir. Today we are going to try to teach a goldfish to drive a car of sorts. Now you may be asking, are we talking about a virtual fish like one of those virtual characters here? No, no, this is a real fish. Yes, really. I am not kidding. This is an experiment where researchers talk a goldfish and put it into this contraption that they call an FOV, a fish-operated vehicle. Very good. I love it. This car is built in a way such that it goes in the direction where the fish is swimming. Now, let's start the experiment by specifying a target and asking the fish to get there and give them a trade if they do. After a few days of training, we get something like this. Wow, it really went there. But wait a second, we are experienced fellow scholars here, so we immediately say that this could happen by chance. Was this a cherry-picked experiment? How do we know that this is real proficiency, not just chance? How do we know if real learning is taking place? Well, we can test that, so let's randomize the starting point and see if it can still get there. The answer is… Yes, yes it can. Absolutely amazing. This is just one example from the many experiments that were done in the paper. So, learning is happening. So much so that over time, a little friend learned so much that when it made a mistake, it could even correct it. Perhaps this means that in a follow-up paper, maybe they can learn to even deal with obstacles in the way. Now, note that this was just a couple of videos. There are many, many more experiments reported in the paper. At this point, we are still not perfectly sure that learning is really taking place here. So, let's run a multi-fish experiment and assess the results. Let's see… Yes, as we let them train for longer, all six of our participants show remarkable improvement in finding the targets. The average amount of time taken is also decreasing rapidly over time. These two seem to be extremely good drivers, perhaps they should be doing this for a living. And if we sum up the performance of every fish in the experiment, we see that they were not too proficient in the first sessions, but after the training, wow, that is a huge improvement. Yes, learning indeed seems to be happening here. So much so that, yes, as a result, I kid you not, they can kind of navigate in the real world too. Now, note that the details of the study were approved by the university and were conducted in accordance with government regulations to make sure that nobody gets hurt or mistreated in the process. So, this was a lot of fun. But what is the insight here? The key insight is that maybe navigation capabilities are universal across species. We don't know for sure, but if it is true, that is an amazing insight. And who knows, a couple papers down the line, if the self-driving car projects don't come to fruition, maybe we will have fish operated Tesla's instead. What a time to be alive. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000 RTX 8000 and V100 instances. And hold on to your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only cloud service with 48 gigabyte RTX 8000. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. 
Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support. And I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today is going to be all about turbulence. We are going to do this, and we are going to do this, and this, and this. To understand what this new paper has to offer, we have to look at an earlier work named Wavelet Turbulence. Apart from the fact that in my opinion it is one of the best papers ever written, it could do one thing extremely well. In goes a coarse, low-resolution smoke or fluid simulation, and out comes a proper simulation with a lot more details. And it is typically accurate enough to fool us. And all this was possible in 2008. Kind of boggles the mind, so much so that this work even won a technical Oscar award. Please remember this accuracy statement as it will matter a great deal. Once again, it is accurate enough to fool us. Now, more than a decade later, here we are, this new paper is leaps and bounds better and can do five amazing things that the previous methods couldn't do. One, it can do spatio-temporal upsampling, upsampling both in space and in time. What does this mean? It means that in goes a choppy, low-resolution simulation and out comes a smooth, detailed simulation. Wow, now that is incredible. It is really able to fill in the information not only in space, but in time too. So good. Now, two: previous methods typically try to take this coarse input simulation and add something to it. But not this new one. No, no. This new method creates a fundamentally new simulation from it. Just look here, it didn't just add a few more details to the input simulation. This is a completely new work. That is quite a difference. Three, I mentioned that Wavelet Turbulence was accurate enough to fool us. So, I wonder how accurate this new method is. Well, that's what we are here for. So, let's see together. Here comes the choppy input simulation. And here is an upsampling technique from a bit more than a year ago. Well, better, yes, but the output is not smooth. I would characterize it more as less choppy. And let's see the new method. Can it do any better? My goodness, look at that. That is a smooth and creamy animation with tons of details. Now, let's pop the question. How does it compare to the reference simulation, reality, if you will? What? I cannot believe my eyes. I can't tell the difference at all. So, this new technique is not just accurate enough to fool the human eye. This is accurate enough to stand up to the real high-resolution simulation. And all this improvement in just one year. What a time to be alive. But wait a second. Does this even make sense if we have the real reference simulation here? Why do we need the upsampling technique? Why not just use the reference? Well, it makes sense. The plan is that we only need to compute the cheap coarse simulation, upsample it quickly, and hope that it is as good as the reference simulation, which takes a great deal longer. Well, okay. But how much longer? Now hold on to your papers and let's see the results. And yes, this is about 5 to 8 times faster than creating the high-resolution simulation we compared to, which is absolutely amazing, especially since it was created with a modern, blazing-fast reference simulator that runs on your graphics card. But wait, remember I promised you 5 advantages. Not 3. So, what are the remaining 2? Well, 4, it can perform compression. Meaning that the output simulation will take up 600 times less data on our disk, that is insanity. So how much worse is the simulation stored this way? My goodness, look at that.
It looks nearly the same as the original one. Wow. And we are still not done yet, not even close. Five, it also works on super high Reynolds numbers. If we have some of the more difficult cases where there is tons of turbulence, it still works really well. This typically gives a lot of trouble to previous techniques. Now, one more important thing, views are of course not everything. However, I couldn't help noticing that this work was only seen by 127 people. Yes, I am not kidding, 127 people. This is why I am worried that if we don't talk about it here on Two Minute Papers, almost no one will talk about it. And these works are so good, people have to know. Thank you very much for watching this, and let's spread the word together. PerceptiLabs is a visual API for TensorFlow carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. And it even generates visualizations for all the model variables and gives you recommendations both during modeling and training and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers and start using their system for free today. Our thanks to PerceptiLabs for their support and for helping us make better videos for you. Thank you very much for watching and for your generous support and I'll see you next time.
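To make "upsampling both in space and in time" concrete, here is the naive baseline: treat a low-resolution, low-frame-rate smoke sequence as one space-time block and interpolate it trilinearly. The paper's learned method synthesizes new turbulent detail instead of merely interpolating; this PyTorch sketch with random data only illustrates the input and output shapes involved.

```python
import torch
import torch.nn.functional as F

# A coarse, choppy density sequence: 8 frames of a 32x32 smoke slice (random values here).
coarse = torch.rand(1, 1, 8, 32, 32)  # (batch, channels, time, height, width)

# Naive spatio-temporal upsampling: 4x more frames, 4x more pixels per side.
smooth = F.interpolate(coarse, scale_factor=(4, 4, 4), mode="trilinear", align_corners=False)

print(coarse.shape, "->", smooth.shape)  # ... -> torch.Size([1, 1, 32, 128, 128])
# A learned upsampler takes the same kind of input but adds plausible small-scale
# turbulence instead of the blurry result that plain trilinear interpolation gives.
```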
|
Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir. Hmm, as you see, this is not our usual intro. Why is that? Because today, we are going to simulate the process of real painting on a computer, and we'll find out that all previous techniques mixed paint incorrectly and finally create this beautiful image. Now, there are plenty of previous techniques that help us paint digitally, so why do we need a new paper for this? Well, believe it or not, these previous methods think differently about the shape that the paint has to take, the more sophisticated ones even simulate the diffusion of paint, which is fantastic. They all do these things a little differently, but they agree on one thing. And here comes the problem. The only thing they agree on is that blue plus yellow equals a creamy color. But wait a second, let's actually try this. Many of you already know what is coming. Of course, in real life, blue plus yellow is not a creamy color, it is green. And does the new method know this? Yes, it does. But only this one, and we can try similar experiments over and over again. The results are the same. So, how is this even possible? Does no one know that blue plus yellow equals green? Well, of course they know. But the proper simulation of pigments mixing is very challenging. For instance, it requires keeping track of pigment concentrations and we also have to simulate subsurface scattering, which is the absorption and scattering of light of these pigments. In some critical applications, just this part can take several hours to days to compute for a challenging case. And now, hold on to your papers because this new technique can do all this correctly and in real time. I love this visualization as it is really dense in information, super easy to read at the same time. That is quite a challenge. And in my opinion, just this one figure could win an award by itself. As you see, most of the time it runs easily with 60 or higher frames per second. And even in the craziest cases, it can compute all this about 30 times per second. That is insanity. So, what do we get for all this effort, a more realistic digital painting experience? For instance, with the new method, color mixing now feels absolutely amazing. And if we feel like applying a ton of paint and let it stain the paper, we get something much more lifelike too. Artists who try this will appreciate these a great deal I am sure. Especially that the authors also made this paint color mixing technique available for everyone free of charge. As we noted, computing this kind of paint mixing simulation is not easy. However, using the final technique is, on the other hand, extremely easy. As easy as it gets. If you feel like coding up a simulation and include this method in it, this is all you need to do. Very few paper implementations are this simple to use. It can see us all the mathematical difficulties away from you. Extra points for elegance and huge congratulations to the authors. Now, after nearly every two minute paper's episode, where we showcase such an amazing paper, I get a question saying something like, okay, but when do I get to see or use this in the real world? And of course, rightfully so, that is a good question. For instance, this previous Gaugen paper was published in 2019, and here we are just a bit more than two years later, and it has been transferred into a real product. 
Some machine learning papers also made it to Tesla's self-driving cars in one or two years, so tech transfer from research to real products is real. But is it real with this technique? Yes, it is. So much so that it is already available in an app named Rebelle 5, which offers a next-level digital painting experience. It even simulates different kinds of papers and how they absorb paint. Hmm, a paper simulation, you say? Yes, here at Two Minute Papers, we appreciate that a great deal. If you use this to paint something, please make sure to leave a comment or tweet at me. I would love to see your scholarly paintings. What a time to be alive. This video has been supported by Weights & Biases. They have an amazing podcast by the name Gradient Dissent, where they interview machine learning experts who discuss how they use learning-based algorithms to solve real-world problems. They've discussed biology, teaching robots, machine learning in outer space, and a whole lot more. Perfect for a fellow scholar with an open mind. Make sure to visit them through wandb.me slash gd or just click the link in the video description. Our thanks to Weights & Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
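The reason blue plus yellow should come out green is that paints mix subtractively: each pigment absorbs and scatters light per wavelength, which the classic Kubelka-Munk model describes. The sketch below runs a textbook Kubelka-Munk mix over three crude wavelength bands with invented absorption and scattering coefficients; it is only meant to illustrate the idea and is not the authors' released mixing code.

```python
import math

# Kubelka-Munk, opaque-layer form: for each wavelength band,
#   K/S of a mixture = sum(c_i * K_i) / sum(c_i * S_i)
#   reflectance R    = 1 + K/S - sqrt((K/S)^2 + 2*K/S)
# K = absorption, S = scattering, c = concentration. Coefficients below are invented.

BANDS = ("red", "green", "blue")
YELLOW = {"K": (0.05, 0.10, 3.00), "S": (1.0, 1.0, 1.0)}  # absorbs blue strongly
BLUE   = {"K": (3.00, 0.80, 0.05), "S": (1.0, 1.0, 1.0)}  # absorbs red (and some green)

def km_reflectance(pigments, concentrations):
    out = []
    for band in range(len(BANDS)):
        K = sum(c * p["K"][band] for p, c in zip(pigments, concentrations))
        S = sum(c * p["S"][band] for p, c in zip(pigments, concentrations))
        ks = K / S
        out.append(1.0 + ks - math.sqrt(ks * ks + 2.0 * ks))
    return out

mix = km_reflectance([YELLOW, BLUE], [0.5, 0.5])
for band, r in zip(BANDS, mix):
    print(f"{band:5s} reflectance: {r:.2f}")
```

Note how the mixture's green band comes out the brightest, whereas a naive average of blue and yellow RGB values gives the creamy gray the episode complains about.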
|
And dear fellow scholars, this is two minute papers with Dr. Karojona Ifehir. Today is going to be all about simulating virtual bunnies. What kinds of bunnies? Bunnies in an hourglass with granular material, bunnies in a mixing drum, bunnies that disappear and will try to fix this one too. So, what was all this footage? Well, this is a follow-up work to the amazing monolith paper. What is monolith? It is a technique that helps fixing commonly occurring two-way coupling issues in physics simulations. Okay, that sounds great, but what does two-way coupling mean? It means that here the boxes are allowed to move the smoke and the added two-way coupling part means that now the smoke is also allowed to blow away the boxes. This previous work simulates this phenomena properly. It also makes sure that when thrown at the wall, things stick correctly and a ton of other goodies too. So, this previous method shows a lot of strength. Now, I hear you asking, Karoj, can these get even better? And the answer is, yes it can. That's exactly why we are here today. This new paper improves this technique to work better in cases where we have a lot of friction. For instance, it can simulate how some of these tiny bunnies get squeezed through the argless and get showered by this sand-like granular material. It can also simulate how some of them remain stuck up there because of the frictional contact. Now, have a look at this. With an earlier technique, we start with one bunny and we end up with, wait a minute, that volume is not one bunny amount of volume. And this is what we call the volume dissipation problem. I wonder if we can get our bunny back with the new technique. What do you think? Well, let's see, one bunny goes in, friction happens and yes, one bunny amount of volume comes out of the simulation. Then, we put a bunch of them into a mixing drum in the next experiment where their tormenting shall continue. This is also a very challenging scene because we have over 70,000 particles rubbing against each other. And just look at that. The new technique is so robust that there are no issues whatsoever. Loving it. So, what else is all the simulation math good for? Well, for instance, it helps us set up a scene where we get a great deal of artistic freedom. For instance, we can put this glass container with the water between two walls and look carefully. Yes, we apply a little left-ward force to this wall. And since this technique can simulate what is going to happen, we can create an imaginary world where only our creativity is the limit. For instance, we can make a world in which there is just a tiny bit of friction to slow down the fall. Or we can create a world with a ton more friction. And now, we have so much friction going on that the weight of the liquid cannot overcome anymore and thus the cube is quickly brought to rest between the walls. Luckily, since our bunny still exists, we can proceed onto the next experiment where we will drop it into a pile of granular material. And this previous method did not do too well in this case as the bunny sinks down. And if you think that cannot possibly get any worse, well, I have some news for you. It can. How? Look, with a different previous technique, it doesn't even get to sink in because… Ouch! It crashes when it would need to perform these calculations. Now, let's see if the new method can save the day and… Yes, great! It indeed can help our little bunny remain on top of things. So now, let's pop the scholarly question. 
How long do we have to wait for such a simulation? The hourglass experiment takes about 5 minutes per frame, while the rotating drum experiment takes about half of that, 2 and a half minutes. So, it takes a while. Why? Of course, because many of these scenes contain tens of thousands of particles, and almost all of them are in constant frictional interaction with almost all the others at the same time, and the algorithm mustn't miss any of these interactions. All of them have to be simulated. And the fact that through the power of computer graphics research, we can simulate all of these in a reasonable amount of time is an absolute miracle. What a time to be alive! And as always, you know what's coming? Yes, please do not forget to invoke the first law of papers, which says that research is a process. Do not look at where we are, look at where we will be two more papers down the line. Granular materials and frictional contact in a matter of seconds, perhaps… Well, sign me up for this one. This video has been supported by Weights & Biases. Check out their recent offering, Fully Connected, a place where they bring machine learning practitioners together to share and discuss their ideas, learn from industry leaders, and even collaborate on projects together. You see, I get messages from you fellow scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is. Fully Connected is a great way to learn about the fundamentals, how to reproduce experiments, get your papers accepted to a conference, and more. Make sure to visit them through wandb.me slash papers or just click the link in the video description. Thanks to Weights & Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
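The "frictional contact" that keeps the bunnies stuck in the hourglass usually comes down to a Coulomb condition: the tangential, sliding part of the contact response may never exceed the friction coefficient times the normal part. Below is a minimal, hypothetical velocity-level clamp for a single particle hitting a wall; the paper's actual contribution is solving this consistently for tens of thousands of coupled contacts at once.

```python
import numpy as np

def coulomb_contact(velocity, normal, friction_mu, restitution=0.0):
    """Resolve one particle-vs-wall contact with Coulomb friction.
    Splits velocity into normal and tangential parts, removes the approaching
    normal component, and clamps the tangential change by mu * |normal change|."""
    v_n = np.dot(velocity, normal)
    if v_n >= 0.0:                      # already separating, nothing to do
        return velocity
    v_normal = v_n * normal
    v_tangent = velocity - v_normal
    new_v_normal = -restitution * v_normal          # normal response (optional bounce)
    normal_change = np.linalg.norm(new_v_normal - v_normal)
    t_speed = np.linalg.norm(v_tangent)
    max_friction = friction_mu * normal_change      # Coulomb cone limit
    if t_speed <= max_friction:
        new_v_tangent = np.zeros(3)                 # sticking: friction stops the sliding
    else:
        new_v_tangent = v_tangent * (1.0 - max_friction / t_speed)  # sliding: slowed down
    return new_v_normal + new_v_tangent

v = np.array([2.0, -1.0, 0.0])  # a particle sliding while hitting the floor
print(coulomb_contact(v, normal=np.array([0.0, 1.0, 0.0]), friction_mu=0.4))
```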
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today we are going to look at an incredible new paper, where we are going to simulate flows around thin shells, rods, the wind blowing at leaves, air flow through a city, and get this, we will produce spiral vortices around the aircraft and even perform some wind tunnel simulations. Now, many of our previous videos are about real-time fluid simulation methods that are often used in computer games and others that run for a long time and are typically used in feature-length movies. But not this, this work is not for that. This new paper can simulate difficult coupling effects, which means that it can deal with the movement of air currents around this thin shell. We can also take a hairbrush without bristles and have an interesting simulation. Or even better, add tiny bristles to its geometry and see how much more of a turbulent flow it creates. I already love this paper. So good! Now onwards to more serious applications. So, why simulate thin shells? What is all this useful for? Well, of course, simulating wind turbines. That is an excellent application. We are getting there. Hmm, talking about thin shells. What else? Of course, aircraft. Now, hold on to your papers and marvel at these beautiful spiral vortices that this simulator can create. So, is this all for show or is this what would really happen in reality? We will take a closer look at that in a moment. With that, yes, the authors claim that this is much more accurate than previous methods in its class. Well done! Let's give it a devilishly difficult test, a real aerodynamic simulation in a wind tunnel. In these cases, getting really accurate results is critical. For instance, here we would like to see that if we were to add a spoiler to this car, how much of an aerodynamic advantage we would get in return. Here are the results from the real wind tunnel test. And now, hold on to your papers and let's see how the new method compares. Wow! Goodness! It is not perfect by any means, but seems accurate enough that we can see the weak flow of the car clearly enough so that we can make a decision on that spoiler. You know what? Let's keep it. So, we compared the simulation against reality. Now, let's compare a simulation against another simulation. So, against a previous method. Yes, the new one is significantly more accurate and then its predecessor. Why? Because the previous one introduces significant boundary layers separations at the top of the car. The new one says that this is what will happen in reality. So, how do we know? Of course, we check. Yes, that is indeed right. Absolutely amazing. And note that this work appeared just about a year ago. So much improvement in so little time. The pace of progress in computer graphics research is out of this world. Okay, so it is accurate. Really accurate. But we already have accurate algorithms. So, how fast is it? Well, the proper aerodynamic simulations take from days to weeks to compute, but this airplane simulation took only minutes. How many minutes? 60 minutes to be exact. Wow! And within these 60 minutes, the same spiral vertices show up as the ones in the real wind tunnel tests. So good. But really, how can this be so fast? How is this even possible? The key to doing all this faster is that the new proposed method is massively parallel. This means that it can slice up one big computing task into many small ones that can be taken care of independently. 
And as a result, it takes advantage of our graphics cards and can squeeze every drop of performance out of them. That is very challenging for an algorithm of this complexity. Of course, the final simulation should still be done with the old tools, just to make sure. However, this new method will be able to help engineers quickly iterate on early ideas and only commit to week-long simulations when absolutely necessary. So, we can go from idea to conclusions in a matter of an hour, not in a matter of a week. This will be an amazing time saver, so the engineers can try more designs. Testing more ideas leads to more knowledge, and that leads to better vehicles and better wind turbines. Loving it. And all this improvement in just one year. If I didn't see this with my own eyes, I could not believe this. What a time to be alive! This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000 and V100 instances. And hold on to your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support. And I'll see you next time.
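"Massively parallel" in this context means the solver is arranged so that each small piece of work reads only old data and writes only its own cell, letting thousands of GPU threads run simultaneously. A Jacobi-style relaxation step is the textbook example of that pattern; the NumPy snippet below shows the generic pattern only and is not the paper's actual solver.

```python
import numpy as np

def jacobi_step(p, rhs, h):
    """One Jacobi relaxation step for a 2D Poisson problem.
    Every interior cell is updated only from old neighbor values, so every
    cell's update is independent, which is the property GPU solvers exploit."""
    new_p = p.copy()
    new_p[1:-1, 1:-1] = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] +
                                p[1:-1, :-2] + p[1:-1, 2:] -
                                h * h * rhs[1:-1, 1:-1])
    return new_p

p = np.zeros((128, 128))
rhs = np.random.randn(128, 128)
for _ in range(200):  # a real solver would iterate until the residual is small
    p = jacobi_step(p, rhs, h=1.0 / 128)
print(float(np.abs(p).max()))
```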
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today we are going to open an AI-powered hair salon. How cool is that? But how? Well, we recently looked at this related technique from Nvidia that is able to make a rough drawings come to life as beautiful photorealistic images. In goes a rough drawing and out comes an image of this quality. And it processes changes to the drawings with the speed of thought. You can even request a bunch of variations on the same theme and get them right away. So here is a crazy idea. How about doing this with hairstyles? Just draw them and the AI puts it onto the model and makes it look photorealistic. Well, that sounds like science fiction, so I will believe it when I see it. And this no method claims to be able to do exactly that, so let's see the pros first. This will go in and this will come out. How is that even possible? Well, all we need to do is once again produce a crude drawing that is just enough to convey our intentions. We can even control the color, the length of the hair locks, or even request a braid and all of these work pretty well. Another great aspect of this technique is that it works very rapidly so we can easily iterate over these hairstyles. And if we don't feel like our original idea will be the one, we can refine it for as long as we please. This AI tool works with the speed of thought. One of the best aspects of this work. You can see an example here. This does not look right yet. But this quick iteration also means that we can lengthen these braids in just a moment. And yes, better. So how does it perform compared to previous techniques? Well, let's have a look together. This is the input hairstyle and the mat telling the AI where we seek to place the hair. And let's see the previous methods. The first two techniques are very rough. This follow-up work is significantly better than the previous ones, but it is still well behind the true photograph for reference. This is much more realistic. So hold on to your papers and let's see the new technique. Now you're probably thinking, Karoi, what is going on? Why is the video not changing? Is this an error? Well, I have to tell you something. It's not an error. What you are looking at now is not the reference photo. No, no. This is the result of the new technique. The true reference photo is this one. That is a wonderful result. The new method understands not only the geometry of the hair better, but also how the hair should play with the illumination of the scene too. I am a light transport researcher by trade, and this makes me very, very happy. So much so that in some ways this technique looks more realistic than the true image. That is super cool. What a time to be alive. Now, not even this technique is perfect, even though the new method understands geometry better than its predecessors, we have to be careful with our edits because geometry problems still emerge. Look here. Also, the other issue is that the resolution of the generated hair should match the resolution of the underlying image. I feel that these are usually a bit more pixelated. Now note that these kinds of issues are what typically get improved from one paper to the next. So, you know the deal. A couple of papers down the line, and I am sure these edits will become nearly indistinguishable from reality. As you see, there are still issues. The results are often not even close to perfect, but the pace of progress in AI research is nothing short of amazing. 
With this, we can kind of open an AI-powered hair salon, and some cases will work, but soon it might be that nearly all of our drawings will result in photorealistic results that are indistinguishable from reality. This video has been supported by Weights & Biases. Look at this. They have a great community forum that aims to make you the best machine learning engineer you can be. You see, I always get messages from you fellow scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is. In this forum, you can share your projects, ask for advice, look for collaborators, and more. Make sure to visit wandb.me slash paper-forum and say hi, or just click the link in the video description. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to take a bunch of completely unorganized human motion data, give it to an AI, grab a controller, and get an amazing video game out of it, which we can play almost immediately. Why almost immediately? I'll tell you in a moment. So, how does this process work? There are similar previous techniques that took a big soup of motion capture data, and outbade each other on what they could learn from it. And they did it really well. For instance, one of these AI's was able to not only learn from these movements, but even improve them, and even better adapt them to different kinds of terrains. This other work used a small training set of general movements to reinvent a popular high-jump technique, the Fosbury flop by itself. This allows the athlete to jump backward over the bar, thus lowering their center of gravity. And it could also do it on Mars. So cool. In the meantime, Ubisoft also learned to not only simulate, but even predict the future motion of video game characters, thus speeding up this process. You can see here how well its predictions line up with the real reference footage. And it could also stand its ground when previous methods failed. Ouch. So, are we done here? Is there nothing else to do? Well, of course, there is. Here, in goes this soup of unorganized motion data, which is given to train a deep neural network, and then this happens. Yes, the AI learned how to weave together these motions so well that we can even grab a controller and start playing immediately. Or almost immediately. And with that, here comes the problem. Do you see it? There is still a bit of a delay between the button press and the motion. A simple way of alleviating that would be to speed up the underlying animations. Well, that's not going to be it because this way the motions lose their realism. Not good. So, can we do something about this? Well, hold onto your papers and have a look at how this new method addresses it. We press the button and yes, now that is what I call quick and fluid motion. Yes, this new AI promises to learn time critical responses to our commands. What does that mean? Well, see the white bar here. By this amount of time, the motions have to finish and the blue is where we are currently. So, the time critical part means that it promises that the blue bar will never exceed the white bar. This is wonderful, just as the gamer does. Has it happened to you that you already pressed the button, but the action didn't execute in time. They will say that it happens all the time. But with this, we can perform a series of slow U turns and then progressively decrease the amount of time that we give to the character and see how much more agile it becomes. Absolutely amazing. The motions really changed and I find all of them quite realistic. Maybe this could be connected to a game mechanic where the super quick time critical actions deplete the stamina of our character quicker, so we have to use them sparingly. But that's not all it can do, not even close. You can even chain many of these crazy actions together and as you see our character does these amazing motions that look like they came straight out of the Witcher series. Loving it. What you see here is a teacher network that learns to efficiently pull off these moves and then we fire up a student neural network that seeks to become as proficient as the teacher, but with a smaller and more compact neural network. 
This is what we call policy distillation. So I hear you asking, is the student as good as its teacher? Let's have one more look at the teacher and the student; they are very close. Actually, wait a second, did you see it? The student is actually even more responsive than its teacher was. This example showcases it more clearly. It can even complete this slalom course, and we might even be able to make a parkour game with it. And here comes the best part. The required training data is not a few days, not even a few hours, only a few minutes. And the training time is in the order of just a few hours, and this we only have to do once, and then we are free to use the trained neural network for as long as we please. Now, one more really cool tidbit in this work is that this training data doesn't have to be realistic; it can be a soup of highly stylized motions, and the new technique can still weave them together really well. Is it possible that, yes, my goodness, it is possible. The input motions don't even have to come from humans, they can come from quadrupeds too. This took only five minutes of motion capture data, and the horse AI became able to transition between these movement types. This is yet another amazing tool in democratizing character animation. Absolutely amazing. What a time to be alive. Weights & Biases provides tools to track your experiments in your deep learning projects. What you see here is their Tables feature, and the best part about it is that it is not only able to handle pretty much any kind of data you can throw at it, but it also presents your experiments to you in a way that is easy to understand. It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub and more. And the best part is that Weights & Biases is free for all individuals, academics and open source projects. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
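Policy distillation, as used above, typically means training a small student network to reproduce the actions of a large teacher on states gathered from the teacher's own rollouts. The PyTorch sketch below shows that supervised imitation loss with hypothetical network sizes and random states standing in for real motion data.

```python
import torch
import torch.nn as nn

state_dim, action_dim = 64, 16

# Hypothetical teacher (large) and student (small, cheap to run inside a game loop).
teacher = nn.Sequential(nn.Linear(state_dim, 1024), nn.ReLU(), nn.Linear(1024, action_dim))
student = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(), nn.Linear(128, action_dim))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(1000):
    # Stand-in for states collected from the teacher's rollouts in the simulator.
    states = torch.randn(256, state_dim)
    with torch.no_grad():
        target_actions = teacher(states)  # what the teacher would do in these states
    loss = nn.functional.mse_loss(student(states), target_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final imitation loss: {loss.item():.4f}")
```

The payoff is the one mentioned in the episode: the compact student is cheap enough to evaluate every frame, and in this paper it even ended up slightly more responsive than its teacher.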
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Welcome to Episode 600. And today you will see your own drawings come to life as beautiful photorealistic images. And it turns out you can try it too. I'll tell you about it in a moment. This technique is called Gauguin II, and yes, this is really happening. In goes a rough drawing and out comes an image of this quality. That is incredible. But here there is something that is even more incredible. What is it? Well, drawing is an iterative process. But once we are committed to an idea, we need to refine it over and over, which takes quite a bit of time. And let's be honest here, sometimes things come out differently than we may have imagined. But this, this is different. Here you can change things as quickly as you can think of the change. You can even request a bunch of variations on the same theme and get them right away. But that's not all, not even close. Get this, with this you can draw even without drawing. Yes, really. How is that even possible? Well, if we don't feel like drawing, instead we can just type what we wish to see. And my goodness, it not only generates these images according to the written description, but this description can get pretty elaborate. For instance, we can get ocean waves. That's great. But now, let's add some rocks. And a beach too. And there we go. We can also use an image as a starting point, then just delete the undesirable parts and have it impainted by the algorithm. Now, okay, this is nothing new. Computer graphics researchers were able to do this for more than 10 years now. But hold on to your papers because they couldn't do this. Yes, we can fill in these gaps with a written description. Couldn't witness the Northern Lights in person. No worries, here you go. And wait a second, did you see that? There are two really cool things to see here. Thing number one, it even redraws the reflections on the water, even if we haven't highlighted that part for impainting. We don't need to say anything and it will update the whole environment to reflect the new changes by itself. That is amazing. Now, I am a light transport researcher by trade and this makes me very, very happy. Thing number two, I don't know if you call this, but this is so fast, it doesn't even wait for your full request, it updates after every single keystroke. Drawing is an inherently iterative process and iterating with this is an absolute breeze. Not will be a breeze, it is a breeze. Now, after nearly every two minute paper episode where we showcase an amazing paper, I get a question saying something like, okay, but when do I get to see or use this in the real world? And rightfully so, that is a good question. The previous Gauguan paper was published in 2019 and here we are, just a bit more than two years later and it has been transferred into a real product. Not only that, but the resolution has improved a great deal, about four times of what it was before, plus the new version also supports more materials. And we are at the point where this is finally not just a cool demo, but a tool that is useful for real artists. What a time to be alive. Now I noted earlier, I did not say that iterating with this will be a breeze, but it is a breeze. Why? Well, great news because you can try it right now in two different ways. One, it is not part of a desktop application called Nvidia Canvas. With this, you can even export the layers to Photoshop and continue your work there. 
This will require a relatively recent Nvidia graphics card. And two, there is a web app too that you can try right now. The link is available in the video description, and if you try it, please scroll down and make sure to read the instructions and watch the tutorial video to not get lost. And remember, all this tech transfer from paper to product took place in a matter of two years. Bravo, Nvidia! The pace of progress in AI research is absolutely amazing. This episode has been supported by Cohere AI. Cohere builds large language models and makes them available through an API so businesses can add advanced language understanding to their system or app quickly with just one line of code. You can use your own data, whether it's text from customer service requests, legal contracts, or social media posts to create your own custom models to understand text or even generate it. For instance, it can be used to automatically determine whether your messages are about your business hours, returns, or shipping. Or it can be used to generate a list of possible sentences you can use for your product descriptions. Make sure to go to cohere.ai slash papers or click the link in the video description and give it a try today. It's super easy to use. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. You can immediately start holding onto your papers, because today we are going to take a journey around the globe and simulate the weather over the mountainous coast of Yucatán, a beautiful day in the Swiss Alps, and a searing day in the Sahara Desert. Yes, from now on, all of this is possible through the power of this new paper. Just to showcase how quickly things are improving in the wondrous land of computer graphics, first we were able to simulate a tree. And then researchers learned to simulate an entire forest, but that's not all. Then, more recent papers explored simulating an entire ecosystem for thousands of years. Now that's something. So, are we done here? What else is there to do? Well, this new work finally helps us simulate weatherscapes, and it can go from a light rain to a huge snowstorm. Now, previous methods were also able to do something like this, but something was missing. For instance, this looks great, but the buoyancy part is just not there. Why? Well, let's look at the new method and find out together. Yes, the new work simulates the microphysics of water, which helps model phase changes and buoyancy properly. You are seeing the buoyancy part here. That makes a great difference. So, what about phase changes? Well, look at this simulation. This is also missing something. What is missing? Well, the simulation of the föhn effect. What this means is a phenomenon where condensed warm air approaches one side of the mountain and something happens. Or, to be more exact, something doesn't happen. In reality, this should not get through. So, let's see the new simulation. Oh, yes, it is indeed stuck. Now, that's what I call a proper simulation of heat transfer and phase changes. Bravo! And here, you also see another example of a proper phase change simulation, where the rainwater flows down the mountain to the ground and the new system simulates how it evaporates. Wait for it. Yes, it starts to form new clouds. So good. And it can simulate many more amazing natural phenomena beyond these in a way that resembles reality quite well. My favorites here are the mammatus clouds, a phenomenon where the air descends and warms, creating differences in temperature in the cloud. And this is what the instability it creates looks like. And, oh my, the hole-punch cloud. This can emerge due to air cooling rapidly, often due to a passing aircraft. All of these phenomena require simulating the microphysics of water. So, microphysics, huh? That sounds computationally expensive. So, let's pop the question. How long do we have to wait for a simulation like this? Whoa! Are you seeing what I am seeing? How is that even possible? This runs easily, interactively, with approximately 10 frames per second. It simulates this and this. And all of these variables, and it can do all these 10 times every second. That is insanity. Now, one more thing. Views are not everything. Not even close. However, I couldn't ignore the fact that as of the making of this video, only approximately 500 people watched the original paper video. I think this is such a beautiful work. Everyone has to see it. Before Two Minute Papers, I was just running around at the university with these papers in my hand trying to show them to the people I know. And today, this is why Two Minute Papers exists, and it can only exist with such an amazing and receptive audience as you are.
So, thank you so much for coming with us on this glorious journey, and make sure to subscribe and hit the bell icon. If you wish to see more outstanding works like this, we have a great deal more coming up for you. Percepti Labs is a visual API for TensorFlow carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. And it even generates visualizations for all the model variables, and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com, slash papers, and start using their system for free today. Our thanks to perceptilabs for their support, and for helping us make better videos for you. Thanks for watching, and for your generous support, and I'll see you next time.
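Two ingredients the episode singles out, buoyancy and phase change, can be written down compactly per grid cell: air warmer than its surroundings gets an upward push, and water vapor above the local saturation amount condenses into cloud water while releasing latent heat. The constants and the saturation curve in the toy step below are simplified placeholders, not the paper's calibrated microphysics.

```python
import math

def saturation_vapor(temperature_c):
    """Very rough saturation mixing ratio curve (placeholder, grows with temperature)."""
    return 0.004 * math.exp(0.06 * temperature_c)

def cell_update(temp_c, ambient_c, vapor, cloud, dt=1.0,
                buoyancy_coeff=0.05, latent_heat=2.5):
    """One toy microphysics step for a single grid cell."""
    # Buoyant acceleration: proportional to how much warmer the cell is than its surroundings.
    vertical_accel = buoyancy_coeff * (temp_c - ambient_c)

    # Phase change: excess vapor condenses into cloud water and warms the cell.
    excess = vapor - saturation_vapor(temp_c)
    condensed = max(0.0, excess)
    vapor -= condensed
    cloud += condensed
    temp_c += latent_heat * condensed * dt

    return temp_c, vapor, cloud, vertical_accel

state = cell_update(temp_c=22.0, ambient_c=18.0, vapor=0.02, cloud=0.0)
print("temp, vapor, cloud, buoyant accel:", [round(x, 4) for x in state])
```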
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karojona Ifehir. Today we are going to spice up this family video with a new stylized environment. And then we make a video of a boat trip much more memorable, up the authenticity of a parkour video, decorate a swan, enhance our clothing a little, and conveniently forget about a speeding motorcycle. Only for scientific purposes, of course. So, these are all very challenging tasks to perform, and of course none of these should be possible. And this is a new AIB solution that can pull off all of these. But how? Well, in goes an input video and this AI decomposes it into a foreground and background, but in a way that it understands that this is just a 2D video that represents a 3D world. Clearly humans understand this, but does Adobe's new AI do it too? And I wonder how much it understands about that. Well, let's give it a try together. Put those flowers on the address and... What? The flowers look like they are really there as they wrinkle, as the dress wrinkles, and it catches shadows just as the dress catches shadows too. That is absolutely incredible. It supports these high frequency movements so well that we can even stylize archite-sealing videos with it where there is tons of tiny water droplets flying about. No problems at all. We can also draw on this dog and remember we mentioned that it understands the difference between foreground and background and look. The scribble correctly travels behind objects. Aha, and this is also the reason why we can easily remove a speeding motorbike from this security footage. Just cut out the foreground layer. Nothing to see here, but I wonder can we go a little more extreme here? And it turns out these are really nothing compared to what this new AI can pull off. Look, we can not only decorate this one, but here is the key. And, oh yes, the one is fine, yes, but the reflection of this one is also computed correctly. Now, this feels like black magic and we are not even done yet. Now, hold on to your papers because here come my two favorite examples. Example number one, biking in Wonderland. I love this. Now, you see here that not even this technique is perfect. If you look behind the spokes of the wheel, you see a fair amount of warping. I still think this is a miracle that it can be pulled off at all. Example number two, a picturesque trip. Here not only the background has been changed, but even the water has changed as chunks of ice have also been added. And with that, I wonder how easy it is to do this. I'll tell you in a moment after we look at this. Yes, there is also one issue here with the warping of the water in the wake of the boat, but this is an excellent point for us to invoke the first law of papers, which says, do not look at where we are, look at where we will be, two more papers down the line. So, how much work do we have to put in to pull all this off? A day of work maybe? Nope, not even close. If you have been holding onto your paper so far, now squeeze that paper and look at this. We can unwrap the background of the footage and stylize it as we please. All this takes is just editing one image. This goes in, this comes out. Wow, absolutely anyone can do this. So, this is a huge step in democratizing video stylization and bringing it to everyone. What a time to be alive. This video has been supported by weights and biases. Being a machine learning researcher means doing tons of experiments and, of course, creating tons of data. 
But I am not looking for data, I am looking for insights. And Weights & Biases helps with exactly that. They have tools for experiment tracking, dataset and model versioning, and even hyperparameter optimization. No wonder this is the experiment tracking tool of choice of OpenAI, Toyota Research, Samsung and many more prestigious labs. Make sure to use the link wandb.me slash paper-intro or just click the link in the video description and try this 10-minute example of Weights & Biases today to experience the wonderful feeling of training a neural network and being in control of your experiments. After you try it, you won't want to go back. Thanks for watching and for your generous support and I'll see you next time.
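The editing trick in this episode relies on the video being split into layers: if each frame is just a foreground composited over a background, and the background of every frame is sampled from one shared unwrapped image, an atlas, then painting that single image restyles the entire clip. The compositing itself is one line; the hard part the paper solves is learning the layers and the per-frame mapping, which the random arrays below merely stand in for.

```python
import numpy as np

H, W = 120, 160

# Hypothetical outputs of the decomposition for one frame:
foreground = np.random.rand(H, W, 3)         # the moving subject (e.g., the boat)
alpha = np.random.rand(H, W, 1)              # its per-pixel opacity
edited_background = np.random.rand(H, W, 3)  # background resampled from the *edited* atlas

# Standard "over" compositing: edits made once to the background atlas show up in
# every frame, correctly occluded by the foreground layer.
frame = alpha * foreground + (1.0 - alpha) * edited_background
print(frame.shape, bool(frame.min() >= 0.0), bool(frame.max() <= 1.0))
```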
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zonai-Fehir. Today, we are going to take a collection of photos like these and magically create a video where we can fly through these photos. And it gets better, we will be able to do it quickly and get this no AI is required. So, how is this even possible? Especially that the input is only a handful of photos. Well, typically we give it to a learning algorithm and ask it to synthesize a photo realistic video where we fly through the scene as we please. Of course, that sounds impossible. Especially that some information is given about the scene, but this is really not that much. And as you see, this is not impossible at all through the power of learning-based techniques. This previous AI is already capable of pulling off this amazing trick. And today, I am going to show you that through this incredible no paper, something like this can even be done at home on our own machines. Now, the previously showcased technique and its predecessors are building on gathering training data and training a neural network to pull this off. Here you see the training process of one of them compared to the reference results. Well, it looks like we need to be really patient as this process is quite lengthy and for the majority of the time, we don't get any usable results for nearly a day into this process. Now, hold on to your papers because here comes the twist. What is the twist? Well, these are not reference results. No, no, these are the results from the new technique. Yes, you heard it right. It doesn't require a neural network and thus trains so quickly that it almost immediately looks like the final result while the original technique is still unable to produce anything usable. That is absolutely insane. Okay, so it's quick, real quick. But how good are the results? Well, the previous technique was able to produce this after approximately one and a half days of training. And what about the new technique? All it needs is 8.8. 8.8 what? Days hours? No, no. 8.8 minutes. And the result looks like this. Not only as good, but even a bit better than what the previous method could do in one and a half days. Whoa! So, I mentioned that the results are typically even a bit better, which is quite remarkable. Let's take a closer look. This is the previous technique after more than a day. This is the new method after 18 minutes. Now, it says that the new technique is 0.3 decibels better. That does not sound like much, does it? Well, note that the decibel scale is not linear, it is logarithmic. What does this mean? It means this. Look, a small numerical difference in the numbers can mean a big difference in quality. And it is really close to the real results. All this after 20 minutes of processing, bravo. And it does not stop there. The technique is also quite robust. It works well on forward-facing scenes, 360-degree rotations, and we can even use it to disassemble a scene into its foreground and background elements. Note that the previous nerve technique we compared to was published just about a year and a half ago. Such incredible improvement in so little time. Here comes the kicker. All this is possible today with a handcrafted technique. No AI is required. What a time to be alive! Perceptilebs is a visual API for TensorFlow, carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. 
And it even generates visualizations for all the model variables and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers and start using their system for free today. Our thanks to PerceptiLabs for their support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
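A note on the decibel comparison above: the image quality metric behind it is typically the peak signal-to-noise ratio, which is a logarithm of the mean squared error. Here is a minimal, generic sketch of that calculation, not the paper's evaluation code; the test images are random stand-ins. It shows why even a 0.3 dB gap corresponds to roughly 7% lower error.

```python
import numpy as np

def psnr(reference, reconstruction, max_value=1.0):
    """Peak signal-to-noise ratio in decibels for images in [0, max_value]."""
    mse = np.mean((reference - reconstruction) ** 2)
    return 10.0 * np.log10(max_value ** 2 / mse)

# Because the decibel scale is logarithmic, a +0.3 dB improvement means the
# mean squared error shrinks by a factor of 10**(0.3 / 10) ~= 1.07, i.e. about 7% less error.
rng = np.random.default_rng(0)
reference = rng.random((64, 64, 3))
noisy_a = np.clip(reference + rng.normal(0.0, 0.050, reference.shape), 0, 1)
noisy_b = np.clip(reference + rng.normal(0.0, 0.048, reference.shape), 0, 1)
print(f"method A: {psnr(reference, noisy_a):.2f} dB")
print(f"method B: {psnr(reference, noisy_b):.2f} dB  (slightly less noise, slightly higher PSNR)")
```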
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today we are going to design a crazy baseball bat, a hockey stick that's not working well, springs that won't stop bouncing, and letters that can kind of stop bouncing. And yes, you see it correctly, this is a paper from 2017. Why? Well, there are some good works that are timeless. This is one of them. You will see why in a moment. When I read this paper, I saw that it is from Professor Doug James's group at Stanford University, and at this point I knew that crazy things are to be expected. Our seasoned fellow scholars know that these folks do some absolutely amazing things that shouldn't even be possible. For instance, one of their earlier papers takes an animation and the physics data for these bubbles, and impossible as it might appear, synthesizes the sound of these virtual bubbles. So, video goes in, sound comes out. And hold onto your papers because that's nothing compared to their other work, which takes not the video, but the sound that we recorded. So, the sound goes in, that's easy enough, and what does it do? It creates a video animation that matches these sounds. That's a crazy paper that works exceptionally well. We love those around here. And to top it off, both of these were handcrafted techniques, no machine learning was applied. So, what about today's paper? Well, today's paper is about designing things that don't work, or things that work too well. What does that mean? Well, let me explain. Here is a map that shows the physical bounciness parameter for a not too exciting old simulation. Every part behaves the same. And here is the new method. Red means bouncy, blue means stiff. And now on to the simulation. The old one with the fixed parameters, well, not bad, but it is not too exciting either. And here is the new one. Now that is a simulation that has some personality. So, with this, we can reimagine things as if some parts were made of rubber and other parts of wood or steel. Look, here the red color on the knob of the baseball bat means that this part will be made bouncy, while the end will be made stiff. What does this do? Well, let's see together. This is the reference simulation where every part has the same bounciness. And now, let's see that bouncing knob. There we go. With this, we can unleash our artistic vision and get a simulation that works properly given these crazy material parameters. Now, let's design a crazy hockey stick. This part of the hockey stick is very bouncy. This part too, however, this part will be the sweet spot, at least for this experiment. Let's hit that puck and see how it behaves. Yes, from the red bouncy regions, indeed, the puck rebounds a great deal. And now, let's see the sweet spot. Yes, it rebounds much less. Let's see all of them side by side. Now, creating such an algorithm is quite a challenge. Look, it has to work on smooth geometry built from hundreds of thousands of triangles. And one of the key challenges is that the duration of the contacts can be... what? Are you seeing what I am seeing? The duration of some of these contacts is measured in the order of tens of microseconds. And it still works well and it's still accurate. That is absolutely amazing. Now, of course, even though we can have detailed geometry made of crazy new materials, this is not only a great tool for artists, this could also help with contact analysis and other cool engineering applications where we manufacture things that hit each other. Glorious.
So, this is an amazing, timeless work from 2017, and I am worried that if we don't talk about it here on Two Minute Papers, almost no one will talk about it. And these works are so good. People have to know. Thank you very much for watching this, and let's spread the word together. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
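To make the "red means bouncy, blue means stiff" idea concrete, here is a tiny hypothetical sketch in which the coefficient of restitution depends on where the ball hits the bat. This is only an illustration of a spatially varying bounciness parameter, not the paper's contact solver, and the map and numbers are made up.

```python
def restitution_at(x):
    """Hypothetical bounciness map along a bat: bouncy knob, stiff tip (x in [0, 1])."""
    return 0.9 if x < 0.2 else 0.3

def bounce_heights(x, drop_height=1.0, bounces=3):
    """Successive rebound heights for a ball dropped at position x along the bat."""
    e = restitution_at(x)
    heights = []
    h = drop_height
    for _ in range(bounces):
        h *= e * e          # rebound height scales with e^2 (energy argument)
        heights.append(round(h, 3))
    return heights

print("knob (bouncy):", bounce_heights(0.1))
print("tip  (stiff): ", bounce_heights(0.9))
```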
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. This new learning-based method is capable of generating new videos, creating lava from water, enhancing your dance videos, adding new players to a football game, and more. This one will feel like it can do absolutely anything. For instance, have a look at these videos. Believe it or not, some of these were generated by an AI from this new paper. Let's try to find out together which ones are real and which are generated. Now, before we start, many of the results that you will see here will be flawed, but I promise that there will be some gems in there too. This one doesn't make too much sense. One down. Two, the bus appears to be changing length from time to time. Three, the bus pops out of existence. Four, wait, this actually looks pretty good until we find out that someone is about to get rear-ended. This leaves us with two examples. Are they real or are they fake? Now, please stop the video and let me know in the comments what you think. So, this one is real footage. Okay, what about this one? Well, get this, this one is actually one of the generated ones. So only that one was real, and this is what this new technique is capable of. We give it a sample video and it re-imagines variations of it. Of course, the results are hit or miss. For instance, here, people and cars just pop in and out of existence, but if we try it on the billiard balls, wow, now this is something. Look at how well it preserved the specular highlights, and the shadows also move with the balls in the synthetic variations. Once again, the results are generally hit or miss, but if we look through a few results, we often find some gems in there. But this is nothing compared to what is to come. Let's call this first feature video synthesis, and we have four more really cool applications with somewhat flawed but sometimes amazing results. Now, check this out. Here comes number two, video analogies. This makes it possible for us to mix two videos that depict things that follow a similar logic. For instance, here only four of the 16 videos are real, the rest are all generated. Now, here comes feature number three, time retargeting. For instance, we can lengthen or shorten videos. Well, that sounds simple enough, no need for an AI for that, but here's the key. This does not mean just adding new frames to the end of the video. Look, they are different. Yes, the entirety of the video is getting redesigned here. We can use this to lengthen videos without really adding new content to them. Absolutely amazing. We can also use it to shorten a video. Now, of course, once again, this doesn't mean that it just chops off the end of the video. No, no, it means something much more challenging. Look, it makes sure that all of your killer moves make it into the video, but in a shorter amount of time. It makes the video tighter, if you will. Amazing. Feature number four, sketch to video. Here, we can take an input video, add a crude drawing, and the video will follow this drawing. The result is, of course, not perfect, but in return, this can handle many of these transitions. And now, feature number five, video inpainting on steroids. Previous techniques can help us delete part of an image or even a video and generate data in these holes that makes sense given their surroundings. But this one does something way better. Look, we can cut out different parts of the video and mark the missing region with a color, and then what happens? Oh, yes.
The blue regions will contain a player from the blue team, the white region, a player from the white team, or if we wish to get someone out of the way, we can do that too. Just mark them green. Here, the white regions will be inpainted with clouds and the blue ones with birds, loving it. But, wait a second, we noted that there are already good techniques out there for image inpainting. There are also good techniques out there for video time retargeting. So, what is so interesting here? Well, two things. One, here, all of these things are being done with just one technique. Normally, we would need a collection of different methods to perform all of these. But here, just one AI. Two, now hold on to your papers and look at this. Whoa! Previous techniques take forever to generate these results. 144p means an extremely crude, pixelated image, and even then they take from hours to, my goodness, even days. So, what about this new one? It can generate much higher resolution images, and in a matter of minutes. That is an incredible leap in just one paper. Now, clearly, most of these results aren't perfect. I would argue that they are not even close to perfect. But some of them already show a lot of promise. And, as always, dear fellow scholars, do not forget to apply the first law of papers. Which says, don't look at where we are, look at where we will be, two more papers down the line. And two more papers down the line, I am sure that not two, but five, or maybe even six out of these six videos will look significantly more real. This video has been supported by Weights and Biases. They have an amazing podcast by the name Gradient Descent, where they interview machine learning experts who discuss how they use learning-based algorithms to solve real world problems. They've discussed biology, teaching robots, machine learning in outer space, and a whole lot more. Perfect for a fellow scholar with an open mind. Make sure to visit them through wnb.me slash gd or just click the link in the video description. Our thanks to Weights and Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today, we are going to see how easily we can transfer our motion onto a virtual character, and even play virtual table tennis as if we were a character in a computer game. Hmm, this research work proposes to not use the industry-standard motion capture sensors to do this. Instead, they promise a neural network that can perform full-body motion capture from six inertial measurement units. These are essentially gyroscopes that report accelerations and orientations. And this is the key. This presents us with two huge advantages. Hold on to your papers for advantage number one, which is, no cameras are needed. Yes, that's right. Why is that great news? Well, a camera is a vision-based system, and if people are far away from the camera, it might have some trouble making out what they are doing, of course, because it can barely see them. But not with the inertial measurement units and this neural network. They can also be further away, maybe even a room or two away, and no problem. So good. And since it doesn't even need to see us, we can hide behind different objects, and look. It can still reconstruct where we are, loving it. Or we can get two people to play table tennis. We can only see the back of this player, and the occlusion situation is getting even worse as they turn and jump around a great deal. Now you made me curious, let's look at the reconstruction together. Wow, that is an amazing reconstruction. In the end, if the players agree that it was a good match, they can hug it out. Or, wait a second, maybe not so much. In any case, the system still works. And advantage number two, ah, of course, since it thinks in terms of orientations, it still doesn't need to see you. So, oh yes, it also works in the dark. Note that so do some infrared camera-based motion capture systems, but here no cameras are needed. And let's see that reconstruction. There is a little jitter in the movement, but otherwise, very cool. And if you have been holding onto your papers so far, now squeeze that paper, because I am going to tell you how quickly this runs. And that is 90 frames per second, easily in real time. Now, of course, not even this technique is perfect, I am sure you noticed that there is a bit of a delay in the movements, and often some jitter too. And, as always, do not think of this paper as the final destination. Think of it as an amazing step forward, and always, always apply the first law of papers. What is that? Well, just imagine how much this could improve just two more papers down the line. I am sure there will be no delays and no jitter. So, from now on, we are one step closer to being able to play virtual table tennis, or even work out with our friends in a virtual world. What a time to be alive! This video has been supported by Weights and Biases. Check out their recent offering, Fully Connected, a place where they bring machine learning practitioners together to share and discuss their ideas, learn from industry leaders, and even collaborate on projects together. You see, I get messages from you fellow scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is. Fully Connected is a great way to learn about the fundamentals, how to reproduce experiments, get your papers accepted to a conference, and more. Make sure to visit them through wnb.me slash papers, or just click the link in the video description.
Our thanks to weights and biases for their longstanding support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
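For the curious, here is a small, hypothetical sketch of what "full-body motion capture from six inertial measurement units" can look like as a model: a recurrent network that maps a window of per-frame IMU readings (orientation plus acceleration for each sensor) to joint rotations. The architecture, sizes, and rotation representation are assumptions for illustration, not the paper's actual network.

```python
import torch
import torch.nn as nn

class IMUPoseNet(nn.Module):
    """Hypothetical sketch: 6 IMUs -> full-body joint rotations.

    Per frame we feed 6 * (4 quaternion + 3 acceleration) = 42 numbers and
    predict rotations for 24 joints as 6D rotation features (24 * 6 = 144).
    The real paper's architecture and representation may differ.
    """
    def __init__(self, n_imus=6, n_joints=24, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(input_size=n_imus * 7, hidden_size=hidden,
                          num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_joints * 6)

    def forward(self, imu_window):            # (batch, frames, 42)
        features, _ = self.rnn(imu_window)    # (batch, frames, hidden)
        return self.head(features)            # (batch, frames, 144)

# Running at 90 fps simply means evaluating one such window per 1/90 s of sensor data.
model = IMUPoseNet()
dummy = torch.randn(1, 90, 6 * 7)             # one second of fake IMU readings
print(model(dummy).shape)                     # torch.Size([1, 90, 144])
```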
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. None of these faces are real, and today we are going to find out whether these synthetic humans can, in a way, pass for real humans, but not quite in the sense that you might think. Through the power of computer graphics algorithms, we are able to create virtual worlds and, of course, within those virtual worlds, virtual humans too. So here is a wacky idea. If we have all this virtual data, why not use it instead of real photos to train new neural networks? Hmm, wait a second. Maybe this idea is not so wacky after all, especially because we can generate as many virtual humans as we wish, and all this data is perfectly annotated. The location and shape of the eyebrows is known even when they are occluded, and we know the depth and geometry of every single hair strand of the beard. If done well, there will be no issues about the identity of the subjects or the distribution of the data. Also, we are not limited by our wardrobe or the environments we have access to. In this virtual world, we can do anything we wish. So good. And of course, here is the ultimate question that decides the fate of this project, and that question is, does this work? Now we can use all this data to train a neural network, and the crazy thing about this is that this neural network never saw a real human. So here is the ultimate test, videos of real humans. Now hold onto your papers, and let's see if the little AI can label the image and find the important landmarks. Wow! I can hardly believe my eyes. It not only does it for a still image, but for even a video, and it is so accurate from frame to frame that no flickering artifacts emerge. That is outstanding. And get this, the measurements say that it can stand up to other state-of-the-art detector neural networks that were trained on real human faces. And... Oh my! Are you seeing what I am seeing? Can this really be? If we use the same neural network to learn on real human faces, it won't do better at all. In fact, it will do worse than the same AI with the virtual data. That is insanity. The advantages of this practically infinitely flexible synthetic dataset show really well here. The paper also discusses in detail that this only holds if we use the synthetic dataset well and include different rotations and lighting environments for the same photos. Something that is not always so easy in real environments. Now, this test was called face parsing, and now comes landmark detection. This also works remarkably well, but wait a second, once again, are you seeing what I am seeing? The landmarks can rotate all they want, and it will know where they should be even if they are occluded by headphones, the hair, or even when they are not visible at all. Now of course, not even this technique is perfect, tracking the eyes correctly requires additional considerations and real data, but only for that, which is relatively easy to produce. Also, the simulator at this point can only generate the head and the neck regions and nothing else. But, you know the drill, a couple of papers down the line, and I am sure that this will be able to generate full human bodies. This episode has been supported by Lambda GPU Cloud. If you are looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure.
Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
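A minimal sketch of the training recipe described above: a landmark detector that only ever sees rendered, perfectly labeled synthetic faces. The renderer here is a random-data placeholder (render_synthetic_batch is made up), and the network is deliberately tiny; only the overall structure, training purely on synthetic samples with exact ground-truth landmarks, reflects the idea in the episode.

```python
import torch
import torch.nn as nn

def render_synthetic_batch(batch_size, n_landmarks=68, image_size=64):
    """Placeholder for a synthetic-face renderer: returns images plus
    perfectly known 2D landmark coordinates. Here it is just random data."""
    images = torch.rand(batch_size, 3, image_size, image_size)
    landmarks = torch.rand(batch_size, n_landmarks, 2)   # normalized (x, y)
    return images, landmarks

# A deliberately tiny landmark regressor; the paper's network is far larger.
detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 68 * 2),
)
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)

for step in range(100):                       # the network never sees a real photo
    images, landmarks = render_synthetic_batch(32)
    prediction = detector(images).view(32, 68, 2)
    loss = nn.functional.l1_loss(prediction, landmarks)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```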
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today, we are not going to create selfies, we are going to create nerfies instead. What are those? Well, nerfies are selfies from the future. The goal here is to take a selfie video and turn it into a portrait that we can rotate around in complete freedom. The technique appeared in November 2020, a little more than a year ago, and as you see, easily outperformed its predecessors. It shows a lot of strength in these examples and seems nearly invincible. However, there is a problem. What is the problem? Well, it still did not do all that well on moving things. Don't believe it? Let's try it out. This is the input video. Aha, there we go. As soon as there is a little movement in time, we see that this looks like a researcher who is about to have a powerful paper deadline experience. I wonder what the nerfie technique will do with this. Uh-oh, that's not optimal. And now, let's see if this new method called HyperNeRF is able to salvage the situation. Oh my, one moment perfectly frozen in time, and we can also move around with the camera. Kind of like the bullet time effect from The Matrix. Sensational. What is also sensational is that, of course, you seasoned fellow scholars immediately ask, okay, but which moment will get frozen in time? The one with the mouth closed or open? And HyperNeRF says, well, you tell me. Yes, we can even choose which moment we should freeze in time by exploring this thing that they call the hyperspace. Hence the name HyperNeRF. The process looks like this. So, if this can handle animations, well then, let's give it some really tough animations. This is one of my favorite examples, where we can even make a video of coffee being made. Yes, that is indeed the true paper deadline experience. And a key to creating these nerfies correctly is getting this depth map right. Here, the colors describe how far things are from the camera. That is challenging in and of itself, and now imagine that the camera is moving all the time. And not only that, but the subject of the scene is also moving all the time too. This is very challenging, and this technique does it just right. The previous nerfie technique from just a year ago had a great deal of trouble with this chocolate melting scene. Look, tons of artifacts and deformations as we move the camera. Now, hold on to your papers and let's see if the new method can do even this. Now that would be something. And wow, outstanding. I also loved how the authors went the extra mile with the presentation of their website, look, all the authors are there, animated with this very method, putting their money where their papers are, loving it. So with that, there you go. These are nerfies, selfies from the future. And finally, they really work. Such amazing improvements in approximately a year. The pace of progress in AI research is nothing short of amazing. What a time to be alive. This video has been supported by Weights and Biases. Look at this. They have a great community forum that aims to make you the best machine learning engineer you can be. You see, I always get messages from you fellow scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is. In this forum, you can share your projects, ask for advice, look for collaborators and more. Make sure to visit wnb.me slash paper forum and say hi, or just click the link in the video description.
Our thanks to weights and biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support. And I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today we are going to take an AI and use it to synthesize these beautiful, crisp movement types. And you will see in a moment that this can do much, much more. So, how does this process work? There are similar previous techniques that took a big soup of motion capture data and learned what they could from it. And they did it really well. For instance, one of these AIs was able to not only learn these movements, but even improve them, and even better, adapt them to different kinds of terrains. This other work used a small training set of general movements to reinvent a popular high-jump technique, the Fosbury flop, by itself. This allows the athlete to jump backward over the bar, thus lowering their center of gravity. And it could also do it on Mars. How cool is that? But this new paper takes a different vantage point. Instead of asking for more training videos, it seeks to settle with less. But first, let's see what it does. Yes, we see that this new AI can match reference movements well, but that's not all of it. Not even close. The hallmark of a good AI is not being restricted to just a few movements, but being able to synthesize a great variety of different motions. So, can it do that? Wow, that is a ton of different kinds of motions, and the AI always seems to match the reference motions really well across the board. Bravo. We'll talk more about what that means in a moment. And here comes the best part. It does not only generalize to a variety of motions, but to a variety of body types as well. And we can bring these body types to different virtual environments too. This really seems like the whole package. And it gets better, because we can control them in real time. My goodness. So, how does all this work? Let's see. Yes, here the green is the target movement we would like to achieve, and the yellow is the AI's result. Now, here the trails represent the past. So, how close are they? Well, of course, we don't know exactly yet. So, let's line them up. And now we're talking. They are almost the same. But wait, does this even make sense? Aren't we just inventing a copying machine here? What is so interesting about being able to copy an already existing movement? Well, no, no, no. We are not copying here, not even close. What this new work does instead is that we give it an initial pose and ask it to predict the future. In particular, we ask what is about to happen to this model. And the result is a mess of trails. So, what does this mess mean? Well, actually, the mess is great news. This means that the true physics results and the AI predictions line up so well that they almost completely cover each other. This is not the first technique to attempt this. What about previous methods? Can they also do this? Well, these are all doing pretty well. Maybe this new work is not that big of an improvement. Wait a second. Oh boy, one contestant is down. And now, two have failed. And I love how they still keep dancing while down. A plus for effort, little AIs, but the third one is still in the game. Careful... ouch. Yup, the new method is absolutely amazing. No question about it. And, of course, do not be fooled by these mannequins. These can be mapped to real characters in real video games too. So, this amazing new method is able to create higher quality animations, lets us grab a controller and play with them, and also requires a shorter training time.
Not only that, but the new method predicts more and hence relies much less on the motion dataset we feed it, and therefore it is also less sensitive to its flaws. I love this. A solid step towards democratizing the creation of superb computer animations. What a time to be alive. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000 and V100 instances. And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support. And I'll see you next time.
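To illustrate the "given an initial pose, predict the future" idea above, here is a toy sketch with a 1D point mass instead of a character: a stand-in learned predictor is rolled out alongside the true simulator, and the quality of the method is how small the gap between the two trajectories stays. Everything here, the dynamics, the "learned" model, and the control signal, is made up for illustration.

```python
import numpy as np

def true_physics(state, action, dt=1/60):
    """Ground-truth simulator for a 1D point mass: state = (position, velocity)."""
    position, velocity = state
    velocity = velocity + action * dt         # action is an acceleration
    position = position + velocity * dt
    return np.array([position, velocity])

def learned_world_model(state, action, dt=1/60, drag_error=0.995):
    """Stand-in for a trained predictor: close to the truth, but not exact."""
    position, velocity = state
    velocity = (velocity + action * dt) * drag_error
    position = position + velocity * dt
    return np.array([position, velocity])

state_true = state_pred = np.array([0.0, 0.0])
for step in range(120):                       # two seconds of rollout
    action = np.sin(step / 10.0)              # some arbitrary control signal
    state_true = true_physics(state_true, action)
    state_pred = learned_world_model(state_pred, action)

# If the trails "cover each other", this gap stays small over the whole rollout.
print("position gap after rollout:", abs(state_true[0] - state_pred[0]))
```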
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today we are going to absolutely destroy this virtual bunny, and then inflate this cat until it becomes a bit of a chunker. And I hope you like dumplings, because we'll also be able to start a cooking show in a virtual world. For instance, we can now squash and wrap and pinch and squeeze. And this is a computer graphics paper, so let's smash these things for good measure. There we go. Add some more, and the meal is now ready. Enjoy! This is all possible through this new paper that is capable of creating physics simulations with drastic geometry changes. And when I say drastic, I really mean it. And looking through the results, this paper feels like it can do absolutely anything. It promises a technique called injective deformation processing. Well, what does all this mean? It means great news. Finally, when we do these crazy experiments, things don't turn inside out, and the geometry does not overlap. Wanna see what those overlaps look like? Well, here it is. And luckily, we don't have to worry about this phenomenon with this new technique. Not only when inflating, but it also works correctly when we start deflating things. Now, talking about overlaps, let's see this animation sequence simulated with a previous method. Oh no, the belly is not moving, and my goodness, look at that. It gets even worse. What is even worse than a belly that does not move? Of course, intersection artifacts. Now, what you will see is not a resimulation of this experiment from scratch with the new method. No, no, even better. We give this flawed simulation to the new technique, and yes, it can even repair it. Wow, an absolute miracle. No more self-intersections, and finally, the belly is moving around in a realistic manner. And while we find out whether this armadillo simulated with the new method is dabbing, or if it is just shy, let's talk about how long we have to wait for a simulation like this. All-nighters? No, not at all. The font inflation examples take roughly half a second per frame, that is unbelievable, and it goes up to 12 minutes per frame, which is required for the larger deformation experiments. And repairing an already existing flawed simulation also takes a similar amount of time. So, what about this armadillo? Yes, that is definitely a shy armadillo. So, from now on, we can apply drastic geometry changes in our virtual worlds, and I'm sure that two more papers down the line, all this will be possible in real time. Real-time dumplings in a virtual world. Yes, please, sign me up. What a time to be alive. Weights and Biases provides tools to track your experiments in your deep learning projects. What you see here is their amazing sweeps feature, which helps you find and reproduce your best runs, and even better, what made this particular run the best. It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub, and more. And, the best part is that Weights and Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wnb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights and Biases for their long-standing support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today we are going to see how Nvidia's crazy new AI is able to understand basically any human movement. So, how is this even possible? And even more importantly, what is pose estimation? Well, simple, a video of people goes in, and the posture they are taking comes out. Now, you see here that previous techniques can already do this quite well. What's more, if we allow an AI to read the Wi-Fi signals bouncing around in the room, it can perform pose estimation even through walls. Kind of. Now, you may be wondering, what is pose estimation good for? By the end of this video, you will see that this can help us move around in virtual worlds, the metaverse, if you will. But, let's not rush there yet, because there are still several key challenges here. One, we need to be super accurate to even put a dent into this problem. Why? Well, because if we have a video and we are off by just a tiny bit from frame to frame, this kind of flickering may happen. That's a challenge. Two, foot sliding. Yes, you heard it right, previous methods suffer from this phenomenon. You can see it in action here. And also here too. So, why does this happen? It happens because the technique has no knowledge of the physics of real human movements. So, scientists at Nvidia, the University of Toronto and the Vector Institute fired up a collaboration, and when I first heard about their concept, I thought, you are doing what? But, check this out. First, they perform a regular pose estimation. Of course, this is no good, as it has the dreaded temporal inconsistency, or in other words, flickering. And in other cases, often, foot sliding too. Now, hold on to your papers because here comes the magic. They transfer the motion to a video game character and embed that character in a physics simulation. In this virtual world, the motion can be corrected to make sure it is physically correct. Now, remember, foot sliding happens because of the lack of knowledge of physics. So, perhaps this idea is not that crazy after all. Let's have a look. Now this will be quite a challenge, explosive sprinting motions. And... Whoa! This is amazing. This, dear fellow scholars, is superb pose estimation and tracking. And, how about this? A good tennis serve includes lots of dynamic motion. And, just look at how beautifully it reconstructs this move. Apparently, physics works. Now, the output needn't be stickmen. We can retarget these to proper textured virtual characters built from a triangle mesh. And, that is just one step away from us being able to appear in the metaverse. No head-mounted displays, no expensive studio, and no motion capture equipment is required. So, what is required? Actually, nothing. Just the raw input video of us. That is insanity. And, all this produces physically correct motion. So, that crazy idea about taking people and transforming them into video game characters is not so crazy after all. So, now we are one step closer to being able to work and even have some coffee together in a virtual world. What a time to be alive. This video has been supported by Weights and Biases. And, being a machine learning researcher means doing tons of experiments and, of course, creating tons of data. But, I am not looking for data, I am looking for insights. And, Weights and Biases helps with exactly that. They have tools for experiment tracking, dataset and model versioning, and even hyperparameter optimization.
No wonder this is the experiment tracking tool of choice of OpenAI, Toyota Research, Samsung, and many more prestigious labs. Make sure to use the link wnb.me slash paper intro, or just click the link in the video description. And, try this 10-minute example of Weights and Biases today to experience the wonderful feeling of training a neural network and being in control of your experiments. After you try it, you won't want to go back. Thanks for watching and for your generous support, and I'll see you next time.
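To give a feel for why a physics-aware stage can clean up foot sliding, here is a deliberately simplified sketch: whenever a reconstructed foot is close to the ground and nearly stationary, its horizontal position is pinned to where the contact began. The real method corrects the whole body inside a physics simulation; this toy post-process, with made-up thresholds, only illustrates the contact intuition.

```python
import numpy as np

def remove_foot_sliding(foot_positions, ground_height=0.02, speed_eps=0.05):
    """foot_positions: (frames, 3) raw pose-estimation output for one foot, z is up.
    If the foot is near the ground and barely moving, pin it in place."""
    corrected = foot_positions.copy()
    anchor = None
    for t in range(1, len(corrected)):
        speed = np.linalg.norm(corrected[t, :2] - corrected[t - 1, :2])
        in_contact = corrected[t, 2] < ground_height and speed < speed_eps
        if in_contact:
            if anchor is None:
                anchor = corrected[t, :2].copy()   # remember where contact began
            corrected[t, :2] = anchor              # no horizontal drift while planted
            corrected[t, 2] = 0.0
        else:
            anchor = None
    return corrected

frames = np.linspace(0, 1, 60)
# A "planted" foot that slowly drifts sideways due to estimation noise.
noisy_foot = np.stack([frames * 0.05, np.zeros(60), np.full(60, 0.01)], axis=1)
print("drift before:", noisy_foot[-1, 0], "after:", remove_foot_sliding(noisy_foot)[-1, 0])
```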
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. This is not some hot new Two Minute Papers merchandise. This is what a new cloth simulation paper is capable of doing today. So good! And today we are also going to see how to put the holes back into holey geometry, and do it quickly. But, wait a second, that is impossible. We know that yarn-based cloth simulations take forever to compute. Here is an example. We showcased this previous work approximately 60 episodes ago. And curiously, get this, some of the simulations here were nearly 60 times faster than the yarn-based reference simulation. That is amazing. However, we noted that even though this technique was a great leap forward, of course it wasn't perfect, there was a price to be paid for this amazing speed, which was, look, the pulling effects on individual yarns were neglected. That amazing holey geometry is lost. I noted that I'd love to get that back. You will see in a moment that maybe my wish comes true today. Hmm, so, quick rundown. We either go for a mesh-based cloth simulation. These are really fast, but no holey geometry. Or we choose the yarn-level simulations. These are the real deal. However, they take forever. Unfortunately, still, there seems to be no way out here. Now, let's have a look at this new technique. It promises to try to marry these two concepts. Or in other words, use a fast mesh simulator, and add the yarn-level cloth details on top of it. Now of course, that is easier said than done. So let's see how this new method can deal with this challenging problem. So this is the fast mesh simulation to start with. And now hold onto your papers and let's add those yarns. Oh my, that is beautiful. I absolutely love how at different points you can even see through these garments. Beautiful. So, how does it do all this magic? Well, look at this naive method. This neglects the proper tension between the yarns, and look at how beautifully they tighten as we add the new method on top of it. This technique can do it properly. And not only that, but it also simulates how the garment interacts with a virtual body in a simulated world. Once again, note how the previous naive method neglects the tightening of the yarns. So I am super excited. Now let's see how long we have to wait for this. Are we talking fast mesh simulation timings or slow yarn simulation timings? My goodness, look at that. The mesh part runs on your processor, while the yarn part of the simulation is implemented on the graphics card. And whoa, the whole thing runs in the order of milliseconds, easily in real time. Even if we have a super detailed garment with tens of millions of vertices in the simulation, the timings are in the order of tens of milliseconds at worst. That's also super fast. Wow. So, yes, part of this runs on our graphics card, and in real time. Here we have nearly a million vertices, not a problem. Here nearly two million vertices, two million points, if you will, and it runs like a dream. And it only starts to slow down when we go all the way to a super detailed piece of garment with 42 million vertices. And here there is so much detail that finally not the simulation technique is the bottleneck, but the light simulation algorithm is. Look, there are so many high-frequency details that we would need higher resolution videos or more advanced anti-aliasing techniques to resolve all of them. All this means that the simulation technique did its job really well. So, finally, yarn-level simulations in real time.
What a time to be alive. Huge congratulations to the entire team and the first author, Georg Sperl, who is still a PhD student making important contributions like this. And get this, Georg's short presentation was seen by... Oh my, 61 people. Views are not everything. Not even close. But once again, if we don't talk about this work here, I am worried that almost no one will. This is why Two Minute Papers exists. Subscribe if you wish to see more of these miracle papers. We have some great ones coming up. This video has been supported by Weights and Biases. They have an amazing podcast by the name Gradient Descent, where they interview machine learning experts who discuss how they use learning-based algorithms to solve real world problems. They've discussed biology, teaching robots, machine learning in outer space, and a whole lot more. Perfect for a fellow scholar with an open mind. Make sure to visit them through wnb.me slash gd or just click the link in the video description. Our thanks to Weights and Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
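A rough sketch of the two-layer idea described above: a cheap, coarse mesh solver drives the overall cloth motion each frame, and the millions of yarn vertices are then carried along by interpolating that mesh deformation, which is the embarrassingly parallel part that maps well to a graphics card. The function names, the placeholder dynamics, and the nearest-vertex binding are all simplifications, not the paper's actual formulation.

```python
import numpy as np

def step_coarse_mesh(mesh_vertices, dt=1/60, gravity=-9.81):
    """Stand-in for the fast mesh-based cloth solver (the CPU part)."""
    mesh_vertices = mesh_vertices.copy()
    mesh_vertices[:, 2] += gravity * dt * dt      # crude placeholder dynamics
    return mesh_vertices

def deform_yarns(yarn_points, rest_mesh, deformed_mesh, binding):
    """Carry the fine yarn geometry along with its mesh (the GPU-friendly part).
    binding[i] is the index of the mesh vertex that yarn point i follows."""
    offset = deformed_mesh[binding] - rest_mesh[binding]
    return yarn_points + offset                   # simplistic: rigid offset per point

rest_mesh = np.random.rand(1_000, 3)              # coarse cloth mesh
yarn_points = np.random.rand(2_000_000, 3)        # millions of yarn vertices
binding = np.random.randint(0, len(rest_mesh), size=len(yarn_points))

mesh = rest_mesh
for frame in range(3):
    mesh = step_coarse_mesh(mesh)                                # cheap, coarse
    yarns = deform_yarns(yarn_points, rest_mesh, mesh, binding)  # detailed, parallel
print(yarns.shape)
```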
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today is a glorious day, because we are going to witness the first tablecloth-pull simulation I have ever seen. Well, I hoped that it would go a little more gloriously than this. Maybe if we pull a bit quicker? Yes, there we go. And... we're good, loving it. Now, if you are a seasoned fellow scholar, you might remember from about 50 videos ago that we covered the predecessor of this paper called Incremental Potential Contact, IPC in short. So, what could it do? It could perform seriously impressive squishing experiments. And it also passed the tendril test, where we threw a squishy ball at a glass wall and watched this process from the other side. A beautiful and rare sight indeed. Unless you have a cat and a glass table at home, of course. Outstanding. So, I hear you asking, Karo, are you trying to say that this new paper tops all that? Yes, that is exactly what I'm saying. The tablecloth pulling is one thing, but it can do so much more. You can immediately start holding onto your papers and let's go. This new variant of IPC is capable of simulating super thin materials, and all this in a penetration-free manner. Now, why is that so interesting or difficult? Well, that is quite a challenge. Remember this earlier paper with the barbarian ship, tons of penetration artifacts. And that is not even a thin object, not nearly as thin as this stack would be. Let's see what a previous simulation method would do if these are 10 millimeters each. That looks reasonable. Now, let's cut the thickness of the sheets in half. Yes, some bumpy artifacts appear, and at one millimeter, my goodness, it's only getting worse. And when we plug the same thin sheets into the new simulator, all of them look good. And what's more, they can be simulated together with other elastic objects without any issues. And this was a low-stress simulation. If we use the previous technique for a higher-stress simulation, this starts out well until, uh-oh, the thickness of the cloth is seriously decreasing over time. That is not realistic, but if we plug the same scene into the new technique, now that is realistic. So, what is all this good for? Well, if we wish to simulate a ball of noodles, tons of thick objects, let's see if we can hope for an intersection-free simulation. Let's look under the hood, and there we go. All of the noodles are separated. But wait a second, I promised you thin objects. These are not thin. Yes, now these are thin. Still, no intersections. That is absolutely incredible. Other practical applications include simulating hair, braids in particular, and granular materials against a thin sheet work too. And if you have been holding onto your papers so far, now squeeze that paper, because the authors promise that we can even simulate this in-hand shuffling technique in a virtual world. Well, I will believe it when I see it. Let's see. My goodness, look at that. Love the attention to detail where the authors color-coded the left and right stack, so we can better see how they mix and whether they intersect. Spoiler alert, they don't. What a time to be alive. It can also simulate this piece of cloth with a ton of detail, and not only that, with large time steps, which means that we can advance the time after each simulation step in bigger packets, thereby speeding up the execution time of the method. I also love how we get a better view of the geometry changes as the other side of the cloth has a different color.
Once again, great attention to detail. Now, we are still in the minutes-per-frame region, and note that this runs on your processor. And therefore, if someone can implement this on the graphics card in a smart way, it could become close to real time in at most a couple of papers down the line. And this is a research paper that the authors give away to all of us free of charge. How cool is that? Thank you so much for creating these miracles and just giving them away for free. What a noble endeavor research is. PerceptiLabs is a visual API for TensorFlow, carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. And it even generates visualizations for all the model variables, and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers and start using their system for free today. Our thanks to PerceptiLabs for their support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
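For context on how IPC-style methods keep everything penetration-free: to the best of my understanding, the original incremental potential contact paper adds a smooth log-barrier energy to every close pair of surface primitives, so the energy grows without bound as the distance approaches zero and the solver can always keep contacts separated. Here is that barrier function as a small sketch; the thin-shell variant discussed in the episode builds on the same idea with an extra thickness offset, which is not shown here.

```python
import math

def ipc_barrier(d, d_hat):
    """Log-barrier energy used in incremental potential contact (IPC).

    d     : current (unsigned) distance between two surface primitives
    d_hat : activation distance; the barrier is zero for d >= d_hat and
            grows to infinity as d -> 0, so contacts can never interpenetrate.
    """
    if d >= d_hat:
        return 0.0
    return -((d - d_hat) ** 2) * math.log(d / d_hat)

for d in [1.0, 0.1, 0.01, 0.001, 1e-6]:
    print(f"d = {d:<8} barrier = {ipc_barrier(d, d_hat=0.1):.6g}")
```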
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today, we are going to take a bunch of celebrities and imagine what they look like as tiny little babies. And then we will also make them old. And at the end of this video, I'll also step up to the plate and become a baby myself. So, what is this black magic here? Well, what you see here is a bunch of synthetic humans created by a learning-based technique called StyleGAN 3, which appeared this year, in June 2021. It is a neural network-based learning algorithm that is capable of synthesizing these eye-poppingly detailed images of human beings that don't exist, and even animating them. Now, how does it do all this black magic? Well, it takes walks in a latent space. What is that? A latent space is a made-up place where we are trying to organize data in a way that similar things are close to each other. In our earlier work, we were looking to generate hundreds of variants of a material model to populate this scene. In this latent space, we can concoct all of these really cool digital material models. A link to this work is available in the video description. StyleGAN uses walks in a similar latent space to create these human faces and animate them. And now, hold on to your papers, because a latent space can represent not only materials or the head movement and smiles of people, but even better, age too. You remember these amazing transformations from the intro? So, how does it do that? Well, similarly to the material example, we can embed the source image into a latent space and take a path therein. It looks like this. Please remember this embedding step, because we are going to refer to it in a moment. And now comes the twist. The latent space for this new method is built such that when we take these walks, it disentangles age from other attributes. This means that only the age changes and nothing else changes. This is very challenging to pull off, because normally when we change our location in the latent space, not just one thing changes, everything changes. This was the case with the materials. But not with this method, which can take photos of well-known celebrities and make them look younger or older. I kind of want to do this to myself too. So, you know what? Now, it's my turn. This is what baby Karo might look like after reading baby papers. And this is Old Man Karo complaining that papers were way better back in his day. And this is supposedly baby Karo from a talk at a NATO conference. Look, apparently they let anybody in these days. Now, this is all well and good, but there is a price to be paid for this. So, what is the price? Let's find out together what that is. Here is the reference image of me. And here is how the transformations came out. Did you find the issue? Well, the issue is that I don't really look like this. Not only because the beard was synthesized onto my face by an earlier AI, but really, I can't find my exact image in this. Take another look. This is what the input image looked like. Can you find it in the output somewhere? Not really. Same with the conference image. This is the actual original image of me. And this is the output of the AI. So, I can't find myself. Why is that? Now, you remember I mentioned earlier that we embed the source image into the latent space. And this step is, unfortunately, imperfect. We start out from not exactly the same image, but only something similar to it. This is the price to be paid for these amazing results.
And with that, please remember to invoke the first law of papers, which says, do not look at where we are, look at where we will be, two more papers down the line. Now, even better news, as of the writing of this episode, you can try it yourself. Now, be warned that our club of fellow scholars is growing rapidly, and you all are always so curious that we usually go over and crash these websites upon the publishing of these episodes. If that happens, please be patient. Otherwise, if you tried it, please let me know in the comments how it went, or just tweet at me. I'd love to see some more baby scholars. What a time to be alive! This video has been supported by Weights and Biases. Check out their recent offering, Fully Connected, a place where they bring machine learning practitioners together to share and discuss their ideas, learn from industry leaders, and even collaborate on projects together. You see, I get messages from you fellow scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is. Fully Connected is a great way to learn about the fundamentals, how to reproduce experiments, get your papers accepted to a conference, and more. Make sure to visit them through wnb.me slash papers or just click the link in the video description. Our thanks to Weights and Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
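A toy sketch of the two steps discussed above: embedding a photo into latent space (an approximate optimization, which is exactly why the edited faces only resemble the input) and then walking along a single disentangled direction to change age. A random linear map stands in for the trained generator, and the "age direction" is just a random vector here; with a real StyleGAN both would come from training.

```python
import numpy as np

rng = np.random.default_rng(42)
latent_dim, image_dim = 512, 64

# A random linear map standing in for a trained generator G(w) -> image.
G = rng.normal(size=(image_dim, latent_dim))
age_direction = rng.normal(size=latent_dim)           # stand-in for the disentangled "age" axis
age_direction /= np.linalg.norm(age_direction)

def generate(w):
    return G @ w

def invert(target_image, steps=300, lr=5e-4):
    """Embed a photo into latent space by gradient descent on pixel error.
    This step is approximate, which is why the edited result only resembles you."""
    w = np.zeros(latent_dim)
    for _ in range(steps):
        grad = G.T @ (G @ w - target_image)           # gradient of 0.5 * ||G w - image||^2
        w -= lr * grad
    return w

real_photo = generate(rng.normal(size=latent_dim)) + rng.normal(scale=0.5, size=image_dim)
w_hat = invert(real_photo)
baby_version = generate(w_hat - 3.0 * age_direction)  # arbitrary step toward "younger"
old_version = generate(w_hat + 3.0 * age_direction)   # arbitrary step toward "older"
print("reconstruction error:", np.linalg.norm(generate(w_hat) - real_photo))
```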
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today we are going to simulate thousands of years of vegetation in a virtual world. And I am telling you, this paper is unbelievable. Now, normally, if we are building a virtual world, we don't really think about simulating a physics- and biology-based ecosystem. Let's be honest, it's more like, yeah, just throw in some trees, and we are done here. But, in reality, the kinds of vegetation we have in a virtual world should be at the very least a function of precipitation and temperature. And here, at this point, we know that this paper means business. You see, if there is no rain and it's super cold, we get a tundra. With no rain and high temperature, we get a desert. And if we keep the temperature high and add a ton of precipitation, we get a tropical rainforest. And this technique promises to be able to simulate these and everything in between. See these beautiful little worlds here? These are not illustrations. No, no, these are already the result of the simulation program. Nice. Now, let's run a tiny simulation with 400 years and a few hundred plants. Step number one, the first few decades are dominated by these shrubs blocking away the sunlight from everyone else. But, over time, step two, watch these resilient pine trees slowly overtake them and deny them their precious sunlight. And what happens as a result? Look, their downfall brings forth a complete ecosystem change. And then, step three, spruce trees start to appear. This changes the game. Why? Well, these are more shade tolerant, and let's see. Yes, they take over the ecosystem from the pine trees. A beautifully told story in such a little simulation. I absolutely love it. Of course, any self-respecting virtual world will also contain other objects, not just the vegetation, and the simulator says no problem. Just chuck them in there, and it will just react accordingly and grow around them. Now, let's see a mid-size simulation. Look at that. Imagine the previous story with the pine trees, but with not a few hundred, but a hundred thousand plants. This can simulate that too. And now comes the final boss, a large simulation. Half a million plants and more than a thousand years. Yes, really. Let's see the story here in images first, and then you will see the full simulation. Yes, over the first hundred years, fast growing shrubs dominate and start growing everywhere. And after a few hundred more years, the slower growing trees catch up and start overshadowing the shrubs at lower elevation levels. And then more kinds of trees appear at lower elevations, and slowly, over 1400 years, a beautiful mixed-age forest emerges. I shiver just thinking about the fact that through the power of computer graphics research works, we can simulate all this on a computer. What a time to be alive. And while we look at the full simulation, please note that there is more happening here, so make sure to have a look at the paper in the video description if you wish to know more details. Now, about the paper: here comes an additional interesting part. As of the making of this video, this paper has been cited 20 times. Yes, but this paper is from 2019, so it had years to soak up some citations, which didn't really happen. Now note that, one, citations are not everything, and two, 20 citations for a paper in the field of computer graphics is not bad at all.
But every time I see an amazing paper, I really wish that more people would hear about it, and I always find that almost nobody knows about them. And once again, this is why I started to make Two Minute Papers. Thank you so much for coming on this amazing journey with me. PerceptiLabs is a visual API for TensorFlow, carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. And it even generates visualizations for all the model variables, and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers and start using their system for free today. Thanks to PerceptiLabs for their support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
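The episode's core premise, that the vegetation type is at least a function of precipitation and temperature, can be captured in a very rough Whittaker-style lookup. The thresholds below are illustrative placeholders, not values from the paper; the full simulator of course models individual plants, light, and competition on top of this.

```python
def biome(mean_temperature_c, annual_precipitation_mm):
    """Very rough Whittaker-style biome lookup. Thresholds are illustrative
    placeholders, not values from the paper."""
    if annual_precipitation_mm < 250:
        return "tundra" if mean_temperature_c < 0 else "desert"
    if mean_temperature_c > 20 and annual_precipitation_mm > 2000:
        return "tropical rainforest"
    if mean_temperature_c < 5:
        return "boreal forest (taiga)"
    return "temperate forest / grassland"

for t, p in [(-10, 100), (30, 50), (26, 3000), (10, 800)]:
    print(f"{t:>4} C, {p:>5} mm  ->  {biome(t, p)}")
```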
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today we are going to see if a robot can learn to play table tennis. Spoiler alert, the answer is yes, quite well in fact. That is surprising, but what is even more surprising is how quickly it learned to do that. Recently, we have seen a growing number of techniques where robots learn in a computer simulation and then get deployed into the real world. Yes, that sounds like science fiction. So, does this work in practice? Well, I'll give you two recent examples, and you can decide for yourself. What you see here is example number one, where OpenAI's robot hand learned to dexterously rotate this Rubik's cube to a given target state. How did it do it? Yes, you guessed it right, it learned in a simulation. However, no simulation is as detailed as the real world, so they used a technique called automatic domain randomization, in which they create a large number of random environments, each of which is a little different. And the AI is meant to learn how to solve many different variants of the same problem. And the result? Did it learn general knowledge from that? Yes. What's more, this became not only a dexterous robot hand that can execute these rotations, but we could make up creative ways to torment this little machine, and it still stood its ground. Okay, so this works, but is this concept good enough for commercial applications? You bet. Example number two, Tesla uses no less than a simulated game world to train their self-driving cars. For instance, when we are in this synthetic video game, it is suddenly much easier to teach the algorithm safely. You can also make any scenario easier, harder, replace a car with a dog or a pack of dogs, and make many similar examples so that the AI can learn from these what-if situations as much as possible. Now, all that's great, but today we are going to see whether this concept can be generalized to playing table tennis. And I have to be honest, I am very enthused, but a little skeptical too. This task requires finesse, rapid movement, and predicting what is about to happen in the near future. It really is the whole package, isn't it? Now, let's enter the training simulation and see how it goes. First, we hit the ball over to its side, specify a desired return position, and ask it to practice returning the ball around this desired position. Then, after a quick retraining step against the ball throwing machine, we observe the first amazing thing. You see, it practices against sidespin and topspin balls. What are those? These are techniques where the players hit the ball in ways that make its trajectory much more difficult to predict. Okay, enough of this. Now, hold on to your papers and let's see how the final version of the AI fares against a player. And, whoa! It really made the transition into the real world. Look at that. This seems like it could go on forever. Let's watch for a few seconds. Yep, still going. Still going. But, we are not done yet, not even close. We said at the start of the video that this training is quick. How quick? Well, if you have been holding on to your papers, now squeeze that paper, because all the robot took was one and a half hours of training. And wait, there are two more mind-blowing numbers here. It can return 98% of the balls, and most of them are within 25 centimeters, or about 10 inches, of the desired spot. And, again, great news.
This is also one of those techniques that does not require Google or OpenAI-level resources to make something really amazing. And, you know, this is the way to make an excellent excuse to play table tennis during work hours. They really made it work. Huge congratulations to the team. Now, of course, not even this technique is perfect. We noted that it can handle sidespin and topspin balls, but it cannot deal with backspin balls yet, because, get this, quoting, it causes too much acceleration in a robot joint. Yes, a robot with joint pain. What a time to be alive. Now, one more thing, as of the making of this video, this was seen by a grand total of 54 people. Again, there is a real possibility that if we don't talk about this amazing work, no one will. And this is why I started Two Minute Papers. Thank you very much for coming on this journey with me. Please subscribe if you wish to see more of these. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
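Here is a small sketch of the domain randomization recipe mentioned above: every training episode runs in a slightly different virtual world, so the policy has to work across the whole range and therefore transfers better to reality. The parameter names and ranges are invented for illustration, and simulate_episode is a hypothetical placeholder, not a function from any paper's code.

```python
import random

def sample_randomized_sim_params(difficulty=1.0):
    """Each training episode gets a slightly different virtual table tennis world,
    so the policy must learn behavior that works across all of them.
    Names and ranges are illustrative, not taken from the paper."""
    return {
        "ball_restitution":    random.uniform(0.88 - 0.05 * difficulty, 0.92 + 0.05 * difficulty),
        "ball_mass_kg":        random.uniform(0.0026, 0.0028),
        "air_drag_coeff":      random.uniform(0.4, 0.5) * difficulty,
        "observation_delay_s": random.uniform(0.0, 0.02 * difficulty),
        "paddle_friction":     random.uniform(0.3, 0.6),
    }

for episode in range(3):
    params = sample_randomized_sim_params(difficulty=1.0 + 0.1 * episode)
    print(f"episode {episode}: {params}")
    # simulate_episode(policy, params)   # hypothetical training step in the simulator
```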
|
Dear Fellow Scholars, this is two-minute papers with Dr. Karojola Ifehir. Today, we are going to not only simulate fluids, but even better, we are going to absorb them with sponge-like porous materials too. This paper does not have the usual super-high resolution results that you see with many other simulation papers. However, I really wanted to show you what it can do through four lovely experiments. And at the end of the video, you will see how slowly or quickly it runs. Experiment number one, spongy dumbbells. One side is made of spongy material and as it starts absorbing the water, let's see if it sinks. Well, it slowly starts to descend as it gets heavier and then it gets so heavy that eventually it sinks the other half too. That is a good start. Now, experiment number two, absorption. Here, the green liquid is hard to absorb, therefore we expect it to pass through while the red liquid is easier to absorb and should get stuck in this perforated material. Let's see if that is indeed what happens. The green is coming through and where is all the red glue? Well, most of it is getting absorbed here. But wait, the paper promises to ensure that mass and momentum still gets transferred through the interactions properly. So, if the fluid volumes are simulated correctly, we expect a lot more green down there than red. Indeed we get it. Oh yes, checkmark. And if you think this was a great absorption simulation, wait until you see this. Experiment number three, absorption on steroids. This is a liquid mixture and these are porous materials, each of which will absorb exactly one component of the liquid and let through the rest. Let's see, the first obstacle absorbs the blue component quickly, the second retains all the green component and the red flows through. And look at how gracefully the red fluid feels the last punch. Lovely. Onwards to experiment number four, artistic control. Since we are building our own little virtual world, we make all the rules. So, let's see the first absorption case. Nothing too crazy here. But if this is not in line with our artistic vision, do not despair because this is our world, so we can play with these physical parameters. For instance, let's increase the absorption rate so the fluid enters the solid faster. Or we can increase the permeability. This is the ease of passage of the fluid through the material. Does it transfer quicker into the sponge? Yes, it does. And finally, we can speed up both how quickly the fluid enters the solid, how quickly it travels within, and we have also increased the amount of absorption. Let's see if we end up with a smaller remaining volume. Indeed we do. The beauty of building these virtual worlds is that we can have these simulations under our artistic control. So, how long do we have to wait for the sponges? Well, hold on to your papers, because we not only don't have to sit down and watch an episode of SpongeBob to get these done, but get this. The crazy three sponge experiment used about a quarter of a million particles for the fluid and another quarter-ish million particles for the solids and runs at approximately a second per frame. Yes, it runs interactively and all this can be done on a consumer graphics card. And yes, that's why the resolution of the simulations is not that high. The authors could have posted a much higher resolution simulation and kept the execution time in the minutes per frame domain, but no. They wanted to show us what this can do interactively. 
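(A minimal sketch of the absorption bookkeeping discussed above, assuming a toy setup: a batch of fluid particles and a single porous slab with an absorption rate and a capacity standing in for permeability and saturation. The real paper runs a proper two-way coupled particle simulation; this only illustrates how mass can move from fluid into a solid while the total stays conserved.)

```python
import numpy as np

# Toy 1D setup: fluid particles passing a porous slab.
# The constants and the update rule are illustrative only.
rng = np.random.default_rng(0)
n_fluid = 1000
fluid_mass = np.full(n_fluid, 1.0)          # mass carried by each fluid particle
absorbable = rng.random(n_fluid) < 0.5      # e.g. the "red" dye is absorbable, the "green" is not

absorption_rate = 0.4     # how quickly fluid enters the solid per contact step
solid_capacity = 300.0    # total mass the porous slab can hold (stand-in for saturation)
solid_stored = 0.0

for step in range(20):
    in_contact = rng.random(n_fluid) < 0.3  # pretend 30% of particles touch the slab this step
    can_absorb = in_contact & absorbable & (fluid_mass > 0)
    transfer = absorption_rate * fluid_mass * can_absorb
    # Respect the remaining capacity of the solid.
    room_left = max(solid_capacity - solid_stored, 0.0)
    scale = min(1.0, room_left / max(transfer.sum(), 1e-9))
    transfer *= scale
    fluid_mass -= transfer
    solid_stored += transfer.sum()

print(f"mass left in fluid: {fluid_mass.sum():.1f}, mass stored in slab: {solid_stored:.1f}")
# Total mass is conserved: fluid + stored equals the initial 1000, mirroring the
# mass and momentum bookkeeping the narration talks about.
```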
And in this case, it is important that you apply the first law of papers, which says that research is a process. Do not look at where we are, look at where we will be, two more papers down the line. So, there we go. I really wanted to show you this paper because, unfortunately, if we don't talk about it, almost no one will see it. And this is why Two Minute Papers exists. And if you wish to discuss this paper, make sure to drop by on our Discord server. The link is available in the video description. This video has been supported by Weights & Biases. Look at this. They have a great community forum that aims to make you the best machine learning engineer you can be. You see, I always get messages from you fellow scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is. In this forum, you can share your projects, ask for advice, look for collaborators, and more. Make sure to visit wandb.me slash paper forum and say hi, or just click the link in the video description. Thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karojona Ifehir. Today we are going to control the fate of liquids in virtual worlds. This footage is from one of my earlier papers where I attempted fluid control. This means that we are not only simulating the movement of a piece of fluid, but we wish to coerce it to flow into a prescribed shape. This was super challenging and I haven't really seen a satisfactory solution that I think artists could use in the industry yet. And now that we have this modern mural network-based algorithms, we are now able to solve problems that we never even dreamed of solving just a few years ago. For instance, they can already perform this kind of style transfer for smoke simulations, which is incredible. So, are you thinking what I am thinking? Can one of those maybe tackle fluid control tool? Well, that's a tough call. Just to showcase how difficult this problem is, if we wish to have any control over our fluid simulations, if we are a trained artist, we can sculpt the fluid directly ourselves. Of course, this requires a great deal of expertise and often hours of work. Can we do better? Well, yes. Kind of. We can use a particle system built into most modern 3D modeling programs with which we can try to guide these particles to a given direction. This took about 20 minutes and it still requires some artistic expertise. So, that's it then. No. Hold on to your papers and check this out. The preparation for this work takes place in virtual reality where we can make these sketches in 3D and look at that. The liquid magically takes the shape of our sketch. So, how long did this take? Well, not one hour and not even 20 minutes. It took one minute. One minute, now we're talking. And even better, we can embed this into a simulation and it will behave like a real piece of fluid should. So, what is all this good for? Well, my experience has been in computer graphics. Is that if we put a powerful tool like this into the hands of capable artists, they are going to create things that we never even thought of creating. With this, they can make a heart from wine or create a milky skirt, several variants even if we wish. Or a liquid butterfly. I am loving these solutions and don't forget all of these can be done embedded into a virtual world and simulated as a real liquid. Now, we talked about 3 solutions and how much time they take, but we didn't see what they look like. Clearly, it is hard to compare these mathematically, so this is going to be, of course, subjective. So, this took an hour. It looks very smooth and is perhaps the most beautiful of the 3 solutions. That is great. However, as a drawback, it does not look like a real world water splash. The particle system took 20 minutes and creates a more lifelike version of our letter, but the physics is still missing. This looks like a trail of particles, not like a physics system. And let's see the new method. This took only a minute and it finally looks like a real splash. Now, make no mistake, all 3 of these solutions can be excellent depending on our artistic vision. So, how does all this magic happen? What is the architecture of this neural network? Well, these behavior emerges not from one, but from the battle of 2 neural networks. The generator neural network creates new splashes and the discriminator finds out whether these splashes are real or fake. Over time, they challenge each other and they teach each other to do better. The technique also goes the extra mile beyond just sketching. 
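(Here is a stripped-down sketch of the generator-versus-discriminator training loop described above, written as a generic conditional GAN in PyTorch. The tensor sizes, the two tiny networks, and the random stand-in data are assumptions made for this example, not the paper's architecture or training data.)

```python
import torch
import torch.nn as nn

# Toy sizes -- purely illustrative, not the paper's representation.
SKETCH_DIM, NOISE_DIM, SPLASH_DIM = 64, 16, 256   # sketch features -> splash features

G = nn.Sequential(nn.Linear(SKETCH_DIM + NOISE_DIM, 128), nn.ReLU(),
                  nn.Linear(128, SPLASH_DIM))       # generator: sketch + noise -> splash
D = nn.Sequential(nn.Linear(SKETCH_DIM + SPLASH_DIM, 128), nn.ReLU(),
                  nn.Linear(128, 1))                # discriminator: (sketch, splash) -> real/fake score

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    # Stand-in data: in the real method these would come from recorded fluid simulations.
    sketch = torch.randn(32, SKETCH_DIM)
    real_splash = torch.randn(32, SPLASH_DIM)

    # --- discriminator tries to tell real splashes from generated ones ---
    noise = torch.randn(32, NOISE_DIM)
    fake_splash = G(torch.cat([sketch, noise], dim=1)).detach()
    d_loss = bce(D(torch.cat([sketch, real_splash], dim=1)), torch.ones(32, 1)) + \
             bce(D(torch.cat([sketch, fake_splash], dim=1)), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- generator tries to fool the discriminator ---
    noise = torch.randn(32, NOISE_DIM)
    fake_splash = G(torch.cat([sketch, noise], dim=1))
    g_loss = bce(D(torch.cat([sketch, fake_splash], dim=1)), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Over many such steps, the discriminator gets better at spotting fakes and the generator gets better at producing splashes it cannot distinguish from the recorded ones, which is exactly the "battle of two neural networks" the narration describes.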
Look, for instance, your brushstrokes can also describe velocities. With this, we can not only control the shape, but even the behavior of the fluid too. So, there we go. Finally, a learning-based technique gives us a proper solution for fluid control. And here comes the best part. It is not only quicker than previous solutions, but it can also be used by anyone. No artistic expertise is required. What a time to be alive. Weights & Biases provides tools to track your experiments in your deep learning projects. Using their system, you can create beautiful reports like this one to explain your findings to your colleagues better. It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub, and more. And the best part is that Weights & Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Jolnai-Fehir. Today, we are going to take a collection of photos like these, and magically create a video where we can fly through these photos. How is this even possible? Especially that the input is only a handful of photos. Well, we give it to a learning algorithm and ask it to synthesize a photorealistic video where we fly through the scene as we please. That sounds impossible. Especially that some information is given about the scene, but this is really not much. And everything in between these photos has to be synthesized here. Let's see how well this new method can perform that, but don't expect too much. And wow, it took a handful of photos and filled in the rest so well that we got a smooth and creamy video out of it. So the images certainly look good in isolation. Now let's compare it to the real-world images that we already have, but have hidden from the algorithm and hold on to your papers. Wow, this is breathtaking. It is not a great deal different, is it? Does this mean, yes, it means that it guesses what reality should look like almost perfectly. Now note that this mainly applies for looking at these images in isolation. As soon as we weave them together into a video and start flying through the scene, we will see some flickering artifacts, but that is to be expected. The AI has to create so much information from so little and tiny inaccuracies that appear in each image are different. And when played abruptly after each other, this introduces these artifacts. So which regions should we look at to find these flaws? Well, usually regions where we have very little information in our set of photos and a lot of variation when we move our head. But for instance, visibility around thin structures is still a challenge. But of course, you know the joke, how do you spot a two-minute paper's viewer? They are always looking behind thin fences. Shiny surfaces are a challenge too as they reflect their environment and change a lot as we move our head around. So how does it compare to previous methods? Well, it creates images that are sharper and more true to the real images. Look, what you see here is a very rare sight. Usually when we see a new technique like this emerge, it almost always does better on some data sets and worse on others. The comparisons are almost always a wash, but here not at all, not in the slightest. Look, here you see four previous techniques, four scenes, and three different ways of measuring the quality of the output images. And almost none of it matters because the new technique reliably outperforms all of them everywhere. Except here, in this one case, depending on how we measure how good a solution is. And even then, it's quite close. Absolutely amazing. Make sure to also have a look at the paper in the video description to see that it can also perform Filmic tone mapping, change the exposure of the output images, and more. So how did they pull this off? What hardware do we need to train such a neural network? Do we need the server warehouses of Google or OpenAI to make this happen? No, not at all. And here comes the best part. If you have been holding onto your paper so far, now squeeze that paper because all it takes is a consumer graphics card and 12 to 24 hours of training. Then after that, we can use the neural network for as long as we wish. 
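(The paper behind this footage is not named in the narration, but this family of view synthesis methods builds on the NeRF recipe: encode 3D points with a positional encoding, query a network for color and density, and alpha-composite the samples along each camera ray. Below is a minimal sketch of just that compositing step, with a dummy density field standing in for the trained network; it is a generic illustration, not the method shown in the video.)

```python
import numpy as np

def positional_encoding(x, n_freqs=6):
    """Standard NeRF-style encoding: sin/cos at increasing frequencies."""
    feats = [x]
    for i in range(n_freqs):
        feats += [np.sin((2.0 ** i) * x), np.cos((2.0 ** i) * x)]
    return np.concatenate(feats, axis=-1)

def composite(colors, densities, deltas):
    """Alpha-composite samples along a ray, front to back."""
    alpha = 1.0 - np.exp(-densities * deltas)                        # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))    # light surviving so far
    weights = alpha * trans
    return (weights[:, None] * colors).sum(axis=0)                   # final RGB for this pixel

# One ray with 64 samples; the "network" is replaced by dummy functions here.
t = np.linspace(0.0, 4.0, 64)
points = np.stack([t, np.zeros_like(t), np.zeros_like(t)], axis=-1)    # points along the ray
feats = positional_encoding(points)            # what the real MLP would consume (unused in this toy)
densities = np.exp(-(t - 2.0) ** 2 * 8.0) * 5.0                        # fake "object" around t = 2
colors = np.tile(np.array([0.8, 0.3, 0.2]), (64, 1))                   # fake constant color
deltas = np.full(64, t[1] - t[0])
print("pixel color:", composite(colors, densities, deltas))
```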
So, we are recreating reality from a handful of photos with a neural network that some people today can train at home themselves. The pace of progress in AI research is absolutely amazing. What a time to be alive! This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to grow people out of noise, of all things. So, I hear you asking, what is going on here? Well, what this work performs is something that we call super-resolution. What is that? Simple: the "enhance" thing. Have a look at this technique from last year. In goes a coarse image or video, and this AI-based method is tasked with this. Yes, this is not science fiction, this is super-resolution, which means that the AI synthesized crisp details onto the image. Now, fast forward a year later, and let's see what this new paper from scientists at Google Brain is capable of. First, a hallmark of a good technique is when we can give it a really coarse input, and it can still do something with it. In this case, this image will be 64x64 pixels, which is almost nothing, I'm afraid, so let's see how it fares. This will not be easy. And, well, the initial results are not good. But, don't put too much stake in the initial results, because this work iteratively refines this noise, which means that you should hold onto your papers and... Oh, yes, it means that it improves over time. It's getting there. Whoa, still going, and... Wow, I can hardly believe what has happened here. In each case, in goes a really coarse input image where we get very little information; look, the eye color is often given by only a couple of pixels, and we get a really crisp and believable output. What's more, it can even deal with glasses too. Now, of course, this is not the first paper on super-resolution; what's more, it is not even the hundredth paper performing super-resolution. So, comparing to previous works is vital here. We will compare this to previous methods in two different ways. One, of course, we are going to look. The previous regression-based methods perform reasonably well; however, if we take a closer look, we see that the images are a little blurry, high-frequency details are missing. And now, let's see if the new method can do any better. Well, this looks great, but we are fellow scholars here, we know that we can only evaluate this result in the presence of the true image. Now, let's see. Nice, we would have to zoom in real close to find out that the two images are not the same. Fantastic! Now, while we are looking at these very convincing, high-resolution outputs, please note that we are only really scratching the surface here. The heart and soul of a good super-resolution paper is proper evaluation and user studies, and the paper contains a ton more details on that. For instance, this part of the study shows how likely people were to confuse the synthesized images with real ones. Previous methods, especially PULSE, which is an amazing technique, reached about 33%, which means that most of the time, people found out the trick, but... Whoa! Look here, the new method is almost at the 50% mark. This is the very first time that I see a super-resolution technique where people can barely tell that these images are synthetic. We are getting one step closer to this technique getting deployed in real-world products. It could improve the quality of your Zoom meetings, video games, online images, and much, much more. Now, note that not even this one is perfect. Look, as we increase the resolution of the output image, the users are more likely to find out that these are synthetic images. But still, for now, this is an amazing leap forward in just one paper.
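(The "iteratively refines this noise" part can be sketched as a loop that starts from pure noise and repeatedly asks a denoiser to pull the estimate toward something consistent with the low-resolution input. The `fake_denoiser` below is a placeholder for the trained network, and its blending rule is made up for this example; it only illustrates the shape of the refinement loop, not the actual diffusion-style update.)

```python
import numpy as np

rng = np.random.default_rng(0)

def upsample_nearest(img, factor):
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def fake_denoiser(noisy, lowres_up, step, total):
    """Placeholder for the trained neural network.
    It just pulls the estimate toward the upsampled low-res image,
    more strongly in later steps -- enough to show the refinement loop."""
    blend = (step + 1) / total
    return (1 - blend) * noisy + blend * lowres_up

lowres = rng.random((16, 16, 3))            # the coarse input, shrunk further for this toy
lowres_up = upsample_nearest(lowres, 4)     # the higher-resolution "canvas" to refine on

estimate = rng.standard_normal(lowres_up.shape)   # start from pure noise, as in the video
T = 10
for step in range(T):
    estimate = fake_denoiser(estimate, lowres_up, step, T)
    # In the real method, each step also re-injects a controlled amount of noise
    # and the denoiser is a large network conditioned on the low-res image.

print("final estimate range:", estimate.min(), estimate.max())
```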
I can hardly believe that we can take this image and make it into this image using a learning-based method. What a time to be alive! This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time!
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to learn how to take any shape in the real world, make a digital copy of it, and place it into our virtual world. This can be done through something that we call differentiable rendering. What is that? Well, simple: we take a photograph and find a photorealistic material model that we can put in our light simulation program that matches it. What this means is that now we can essentially put this real material into a virtual world. This work did very well with materials, but it did not capture the geometry. This other work is from Wenzel Jakob's group, and it jointly found the geometry and material properties. Seeing these images gradually morph into the right solution is an absolutely beautiful sight, but as you see, high-frequency details were not as good. You see, here these details are gone. So, where does this put us? If we need only the materials, we can do really well, but then no geometry. If we wish to get the geometry and the materials, we can use this, but we lose a lot of detail. There seems to be no way out. Is there a solution for this? Well, Wenzel Jakob's amazing light transport simulation group is back with guns blazing. Let's see what they were up to. First, let's try to reproduce a 3D geometry from a bunch of triangles. Let's see and... Ouch! Not good, right? So, what is the problem here? The problem is that we have tons of self-intersections, leading to a piece of tangled mesh geometry. This is incorrect. Not what we are looking for. Now, let's try to improve this by applying a step that we call regularization. This guides the potential solutions towards smoother results. Sounds good, hoping that this will do the trick. Let's have a look together. So, what do you think? Better, but there are some details that are lost, and my main issue is that the whole geometry is in fluctuation. Is that a problem? Yes, it is. Why? Because this jumpy behavior means that it has many competing solutions that it can't choose from. Essentially, the algorithm says maybe this, or not, perhaps this instead? No, not this. How about this? And it just keeps going on forever. It doesn't really know what makes a good reconstruction. Now, hold on to your papers and let's see how the new method does with these examples. Oh, my! This converges somewhere, which means that at the end of this process, it settles on something. Finally, this one knows what good is. Fantastic. But now, these geometries were not that difficult. Let's give it a real challenge. Here comes the dragon. Can it deal with that? Just look at how beautifully it grows out of this block. Yes, but look, the details are not quite there. So, are we done? That's it? No, not even close. Do not despair for a second. This new paper also proposes an isotropic remeshing step, which does this. On the count of 1, 2, 3. Boom! We are done. Whoa! So good! And get this. The solution is not restricted to only geometry. It also works on textures. Now, make no mistake. This feature is not super useful in its current state, but it may pave the way to even more sophisticated methods two more papers down the line. So, importing real geometry into our virtual worlds? Yes, please. This episode has been supported by Cohere AI. Cohere builds large language models and makes them available through an API so businesses can add advanced language understanding to their system or app quickly with just one line of code.
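(To make the regularization idea concrete, here is a toy inverse problem in PyTorch. The real system differentiates through a light transport simulation; here the image loss is replaced by a simple distance to the observed shape, which is an assumption made purely so the sketch stays self-contained. What carries over is the structure: a data term plus a Laplacian smoothness term that fights the tangled, jumpy solutions shown above.)

```python
import torch

# Toy setup: recover a smooth 2D loop of vertices from a noisy starting guess.
n = 64
angles = torch.linspace(0.0, 2.0 * torch.pi, n + 1)[:-1]
target = torch.stack([torch.cos(angles), torch.sin(angles)], dim=1)   # the shape we "observe"
verts = (target + 0.3 * torch.randn(n, 2)).requires_grad_(True)       # tangled starting guess

def laplacian_energy(v):
    # Uniform Laplacian on a closed loop: each vertex vs. the mean of its two neighbors.
    neighbor_mean = 0.5 * (torch.roll(v, 1, dims=0) + torch.roll(v, -1, dims=0))
    return ((v - neighbor_mean) ** 2).sum()

opt = torch.optim.Adam([verts], lr=0.05)
for it in range(300):
    data_loss = ((verts - target) ** 2).sum()   # stand-in for the differentiable rendering loss
    reg_loss = laplacian_energy(verts)          # the smoothing term that fights tangling/jumpiness
    loss = data_loss + 2.0 * reg_loss
    opt.zero_grad(); loss.backward(); opt.step()
print("final loss:", float(loss))
```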
You can use your own data, whether it's text from customer service requests, legal contracts, or social media posts, to create your own custom models to understand text, or even generate it. For instance, it can be used to automatically determine whether your messages are about your business hours, returns, or shipping, or it can be used to generate a list of possible sentences you can use for your product descriptions. Make sure to go to cohere.ai slash papers or click the link in the video description and give it a try today. It's super easy to use. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zona Ifehir. Today, we are going to have a look at beautiful simulations from a quick paper. But wait, how can a research paper be quick? Well, it is quick for two key reasons. Reason number one, look at this complex soft body simulation. This is not a jumpsuit, this showcases the geometry of the outer tissue of this elephant and is made of 80,000 elements. And now, hold onto your papers away with the geometry and feast your eyes upon this beautiful simulation. My goodness, tons of stretching, moving and deformation. Wow! So, how long do we have to wait for a result like this? All nighters, right? Well, about that quick part I just mentioned, it runs very, very quickly. Eight milliseconds per frame. Yes, that means that it runs easily in real time on a modern graphics card. And this work has some other aspect that is also quick, which we will discuss in a moment. But first, let's see some of the additional advantages it has compared to previous methods. For instance, if you think this was a stretchy simulation, no no, this is a stretchy simulation. Look, this is a dragon. Well, it doesn't look like a dragon, does it? Why is that? Well, it has been compressed and scrambled into a tiny plane. But if we let go of the forces, ah, there it is. It was able to regain its original shape. And the algorithm can withstand even this sort of torture test, which is absolutely amazing. One more key advantage is the lack of volume dissipation. Yes, believe it or not, many previous simulation methods struggle with things disappearing over time. Don't believe it. Let me show you this experiment with GUI dragons and balls. When using a traditional technique, whoa, this guy is gone. So, let's see what a previous method would do in this case. We start out with this block and after a fair bit of stretching, wait a second. Are you trying to tell me that this has the same amount of volume as this? No sir, this is volume dissipation at its finest. So, can the new method be so quick and still retain the entirety of the volume? Yes sir, loving it. Let's see another example of volume preservation. Okay, I am loving this. These transformations are not reasonable. This is indeed a very challenging test. Can it withstand all this? Keep your eyes on the volume of the cubes, which change a little. It's not perfect, but considering the crazy things we are doing to it, this is very respectable. And in the end, look, when we let go, we get most of it back. I'll freeze the appropriate frame for you. So, second quick aspect, what is it? Yes, it is quick to run, but that's not all. It is also quick to implement. For reference, if you wish to implement one of our earlier papers on material synthesis, this is the number of variables you have to remember. And this is the pseudo code for the algorithm itself. What is that? Well, this shows what steps we need to take to implement our technique in a computer program. I don't consider this to be too complex, but now compare it to these simulation algorithms pseudo code. Whoa! Much simpler. I would wager that if everything goes well, a competent computer graphics research scientist could implement this in a day. And that is a rare sight for a modern simulation algorithm, and that is excellent. I think you are going to hear from this technique a great deal more. Who wrote it? Of course, Miles McLean and Matias Müller, two excellent research scientists at Nvidia. Congratulations! 
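(Part of why this family of solvers is so short to implement is that position-based methods boil down to: predict positions, project constraints a few times, then recompute velocities. The sketch below is a generic position-based dynamics step on a hanging chain, not the exact algorithm from the paper, and all the constants are made up for the example.)

```python
import numpy as np

# A tiny position-based dynamics loop: a chain of particles connected by distance constraints.
n = 10
pos = np.stack([np.linspace(0.0, 1.0, n), np.zeros(n)], axis=1)  # particles along a line
vel = np.zeros_like(pos)
rest_len = np.linalg.norm(pos[1] - pos[0])
gravity = np.array([0.0, -9.81])
dt, substeps, iters = 1.0 / 60.0, 8, 4

for frame in range(120):
    h = dt / substeps
    for _ in range(substeps):
        prev = pos.copy()
        vel += gravity * h
        pos += vel * h
        pos[0] = [0.0, 0.0]                    # pin the first particle
        for _ in range(iters):                 # project the distance constraints
            for i in range(n - 1):
                d = pos[i + 1] - pos[i]
                dist = np.linalg.norm(d) + 1e-9
                corr = 0.5 * (dist - rest_len) * d / dist
                if i > 0:
                    pos[i] += corr
                pos[i + 1] -= corr
        pos[0] = [0.0, 0.0]
        vel = (pos - prev) / h                 # recover velocities from the position change

print("tip of the chain ends up at:", pos[-1])
```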
And with this kind of progress, just imagine what we will be able to do two more papers down the line. What a time to be alive! This video has been supported by Weights & Biases. Being a machine learning researcher means doing tons of experiments and, of course, creating tons of data. But I am not looking for data, I am looking for insights. And Weights & Biases helps with exactly that. They have tools for experiment tracking, data set and model versioning, and even hyperparameter optimization. No wonder this is the experiment tracking tool of choice for OpenAI, Toyota Research, Samsung, and many more prestigious labs. Make sure to use the link wandb.me slash paper-intro, or just click the link in the video description, and try this 10-minute example of Weights & Biases today to experience the wonderful feeling of training a neural network and being in control of your experiments. After you try it, you won't want to go back. Thanks for watching and for your generous support, and I'll see you next time!
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to create beautiful, virtual clothes and marvel at how quickly we can simulate them. And the kicker is that these simulations are able to create details that are so tiny they are smaller than a millimeter. And when I first saw the title of this paper, I had two thoughts. First, I asked: sub-millimeter level, is this really necessary? Well, here is a dress rendered at the level of millimeters. And here is what it would look like with this new simulation technique. Hmm, so many crisp details suddenly appeared. Okay, you got me. I am now converted. So let's proceed to the second issue. Here I also said I will believe this kind of quality in a simulation when I see it. So how does this work? Well, we can give it a piece of coarse input geometry, and this new technique synthesizes and simulates additional wrinkles on it. For instance, here we can add these vertical shirring patterns to it. And not only that, but we still have collision detection, so it can interact with other objects, and suddenly the whole piece looks beautifully lifelike. And as you see here, the direction of these patterns can be chosen by us, and it gets better, because it can handle other kinds of patterns too. I am a light transport researcher by trade, and I am very pleased by how beautifully these specular highlights are playing with the light. So good. Now let's see it in practice. On the left you see the coarse geometry input, and on the right you see the magical new clothes this new method can create from them. And yes, this is it. Finally. I always wondered why virtual characters with simulated clothes looked a little flat for many years now. These are the details that were missing. Just look at the difference this makes. Loving it. But how does this stack up against previous methods? Well, here is a previous technique from last year, and here is the new one. Okay, how do we know which one is really better? Well, we do it in terms of mathematics, and when looking at the relative errors from a reference simulation, the new one is more accurate. Great. But hold on to your papers, because here comes the best part. Whoa! It is not only more accurate, but it is blistering fast, easily 8 times faster than the previous method. So where does this put us in terms of total execution time? At approximately one second per frame, often even less. What? One second per image for a crisp, sub-millimeter-level cloth simulation that is also quite accurate. Wow, the pace of progress in computer graphics research is absolutely amazing. And just imagine what we will be able to do two more papers down the line. This might run in real time easily. Now, drawbacks. Well, not really a drawback, but there are cases where the sub-millimeter-level simulation results materialize in slightly crisper creases, but not much more. I really had to go hunting for differences in this one. If you have spotted stark differences between the two here, please let me know in the comments. And here comes one more amazing thing. This paper was written by Huamin Wang, and that's it. A single-author paper that has been accepted to the SIGGRAPH conference, which is perhaps the most prestigious conference in computer graphics. That is a rare sight indeed. You see, sometimes even a single researcher can make all the difference. Huge congratulations. This episode has been supported by Lambda GPU Cloud.
If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold on to your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zona Ifehir. Today we are going to do this... and this... and this on a budget. Today, through the power of computer graphics research works, we can simulate all these amazing elastic interactions. If we are very patient, that is, because they take forever to compute. But, if we wish to run these simulations quicker, what we can do is increase something that we call the time step size. Usually, this means that the simulation takes less time, but is also less accurate. Let's see this phenomenon through a previous method from just a year ago. Here, we set the time step size relatively small and drop an elastic barbarian ship onto these rods. This is a challenging scene because the ship is made out of half a million tiny elements and we have to simulate their interactions with the scene. How does this perform? Uh-oh. This isn't good. Did you see the issues? Issue number one is that the simulation is unstable. Look, things remain in motion when they shouldn't. And two, this is also troubling. Penetrations. Now, let's increase the time step size. What do we expect to happen? Well, now we advance the simulation in bigger chunks. So, we should expect to miss even more interactions in between these bigger steps. And, whoa! Sure enough, even more instability, even more penetration. So, what is the solution? Well, let's have a look at this new method and see if it can deal with this difficult scene. Now, hold on to your papers and... Wow! I am loving this. Issue number one, things coming to rest is solved. And issue number two, no penetrations. That is amazing. Now, what is so interesting here? Well, what you see here should not be possible at all because this new technique computes a reduced simulation instead. This is a simulation on a budget. And not only that, but let's increase the time step size a little. This means that we can advance the time in bigger chunks when computing the simulation at the cost of potentially missing important interactions between these steps. In short, expect a bad simulation now, like with the previous one. And... Wow! This is amazing. It still looks fine. But, we don't know that for sure, because we haven't seen the reference simulation yet. So, you know what's coming? Oh yes! Let's compare it to the reference simulation that takes forever to compute. This looks great. And... Let's see... This looks great too. They don't look the same, but if I were asked which one the reference is and which is the cheaper reduced simulation, I am not sure if I would be able to tell. Are you able to tell? Well, be careful with your answer because I have swapped the two. In reality, this is the reference. And this is the reduced simulation. Were you able to tell? Let me know in the comments below. And that is exactly the point. All this means that we got away with only computing the simulation in bigger steps. So, why is that good? Well, of course, because we got through it quicker. OK, but how much quicker? 110 times quicker. What? The two are close to equivalent, but this is more than 100 times quicker. Sign me up right away. Note that this is still not real time, but we are firmly in the second's per frame domain. So, we don't need an all-nighter for such a simulation. Just a coffee break. Now, note that this particular scene is really suited for the new technique. 
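(The time step trade-off in the narration can be seen on a single stiff spring: an explicit integrator blows up at a large time step, while an implicit one stays stable. This toy is only meant to illustrate that trade-off; the paper's reduced, penetration-free simulation is a far more sophisticated solution to the same underlying problem, and the constants below are made up for the example.)

```python
import numpy as np

# One stiff damped spring, integrated with a deliberately large time step.
k, c, m = 500.0, 2.0, 1.0      # stiffness, damping, mass
dt = 0.05                      # large time step
x, v = 1.0, 0.0                # explicit Euler state
xi, vi = 1.0, 0.0              # implicit Euler state

for step in range(200):
    # Explicit (forward) Euler: uses forces at the current state.
    a = (-k * x - c * v) / m
    x, v = x + dt * v, v + dt * a

    # Implicit (backward) Euler: solve for the new velocity, here in closed form.
    vi = (vi - dt * k / m * xi) / (1.0 + dt * c / m + dt * dt * k / m)
    xi = xi + dt * vi

print(f"explicit Euler position: {x:.3e}  (blown up)")
print(f"implicit Euler position: {xi:.3e}  (settled near rest)")
```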
Other scenes typically aren't 100 times faster; the worst-case scenario is when we throw around a bunch of fur balls, but even that is at least 10 to 15 times faster. What does that mean? Well, an all-nighter simulation can be done, maybe not during a coffee break, but during a quick little nap. Yes, we can rest like this tiny dinosaur for a while, and by the time we wake up, the simulation is done, and we can count on it being close to the real deal. So good. Just make sure to keep the friction high while resting here, or otherwise, this happens. So, from now on, we get better simulations up to 100 times faster. What a time to be alive! This video has been supported by Weights & Biases. They have an amazing podcast by the name Gradient Dissent, where they interview machine learning experts who discuss how they use learning-based algorithms to solve real-world problems. They've discussed biology, teaching robots, machine learning in outer space, and a whole lot more. Perfect for a fellow scholar with an open mind. Make sure to visit them through wandb.me slash gd or just click the link in the video description. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Dominic Papers with Dr. Karo Zsolnai-Fehir. Today we are going to engage in the favorite pastimes of the computer graphics researcher, which is, well, this. And this. And believe it or not, all of this is simulated through a learning-based technique. Earlier, we marveled at a paper that showed that an AI can indeed learn to perform a fluid simulation. And just one more paper down the line, the simulations it was able to perform, extended to other fields like structural mechanics, incompressible fluid dynamics, and more. And even better, it could even simulate shapes and geometries that it had never seen before. So today, the question is not whether an AI can learn physics, the question is, how well can an AI learn physics? Let's try to answer that question by having a look at our first experiment. Here is a traditional handcrafted technique and a new neural network-based physics simulator. Both are doing fine, so nothing to see here. Whoa! What happened? Well, dear fellow scholars, this is when a simulation blows up. But the new one is still running, even when some traditional simulators blow up. That is excellent. But we don't have to bend over backwards to find other situations where the new technique is better than the previous ones. You see the reference simulation here, and it is all well and good that the new method does not blow up. But how accurate is it on this challenging scene? Let's have a look. The reference shows a large amount of bending where the head is roughly in line with the knees. Let's memorize that. Head in line with the knees. Got it. Now, let's see how the previous methods were able to deal with this challenging simulation. When simulating a system of a smaller size, well, none of these are too promising. When we crank up the simulation domain size, the physical model derivative, PMD in short, does pretty well. So, what about the new method? Both bend quite well. Not quite perfect. Remember, the head would have to go down to be almost in line with the knees. But, amazing progress nonetheless. This was a really challenging scene, and in other cases, the new method is able to match the reference simulator perfectly. So far, this sounds pretty good, but PMD seems to be a contender, and that Dear Fellow Scholars is a paper from 2005. From 16 years ago. So, why showcase this new work? Well, we have forgotten about one important thing. And here comes the key. The new simulation technique runs from 30 to almost 60 times faster than previous methods. How is that even possible? Well, this is a neural network-based technique. And training a neural network typically takes a long time, but we only need to do this once, and when we are done, querying the neural network typically can be done very quickly. Does this mean? Yes, yes it does. All this runs in real time for this dinosaur, bunny, and armadillo scenes, all of which are built from about 10,000 triangles. And we can play with them by using our mouse on our home computer. The cactus and herbal scenes require simulating, not tens, but hundreds of thousands of triangles. So, this took a bit longer as they are running between 1 and a half and 2 and a half frames per second. So, this is not only more accurate than previous techniques, not only more resilient than the previous techniques, but is also 30 to 60 times faster at the same time. Wow! 
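(Conceptually, a learned simulator of this kind maps the recent states and external forces to the next state with a neural network that was trained, once and offline, on data from a conventional simulator. The sketch below uses made-up sizes and synthetic stand-in data; it only shows the split between the expensive training phase and the cheap per-frame query that makes real-time playback possible, not the actual architecture from this paper.)

```python
import torch
import torch.nn as nn

STATE, FORCE = 32, 8   # toy sizes for a reduced deformation state and an external force
net = nn.Sequential(nn.Linear(2 * STATE + FORCE, 256), nn.ReLU(),
                    nn.Linear(256, 256), nn.ReLU(),
                    nn.Linear(256, STATE))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def fake_ground_truth(q_prev, q_curr, f):
    # Placeholder for data exported from a conventional (slow) simulator.
    return 1.8 * q_curr - 0.8 * q_prev + 0.01 * f.sum(dim=1, keepdim=True)

for step in range(500):                        # "training once" -- the expensive part
    q_prev, q_curr = torch.randn(64, STATE), torch.randn(64, STATE)
    f = torch.randn(64, FORCE)
    target = fake_ground_truth(q_prev, q_curr, f)
    pred = net(torch.cat([q_prev, q_curr, f], dim=1))
    loss = ((pred - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():                          # "querying" -- the cheap, per-frame part
    q_next = net(torch.cat([torch.randn(1, STATE), torch.randn(1, STATE),
                            torch.randn(1, FORCE)], dim=1))
print("predicted next state shape:", tuple(q_next.shape), "final training loss:", float(loss))
```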
And just think about the fact that just a year ago, an AI could only perform low-resolution fluid simulations, then a few months ago, more kinds of simulations, and then today, just one more paper down the line, simulations of this complexity. Just imagine what we will be able to do just two more papers down the line. What a time to be alive! PerceptiLabs is a visual API for TensorFlow carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables, and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers to easily install the free local version of their system today. Our thanks to PerceptiLabs for their support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today we are going to see that some computer graphics simulation techniques are so accurate they can come out to the real world and even teach us something new. And it can do this too. And this too. Today, computer graphics research techniques are capable of conjuring up these beautiful virtual worlds where we can engage in the favorite pastime of the computer graphics researcher which is destroying them in a spectacular manner. Here you see Joshua Wolper's paper who is a returning guest in this series. He was a PhD student at the time and his first work we showcase was about breaking bread and visualizing all the damage that takes place during this process. His later work was about enriching our simulations with anisotropic damage and elasticity. So what does that mean exactly? This means that it supports more extreme topological changes in these virtual objects. And in the meantime he has graduated. So congratulations on all the amazing works Dr. Joshua Wolper. And let's see what he has been up to since he leveled up. Note that all of these previous works are about simulating, fracturing and damage. I wonder what else could all this knowledge be applied for? Hmm, how about simulating glacier fracture? Yes, really. But before we start, why would we do that? Because a technique like this could help assess and identify potential hazards ahead of time and get this maybe even mitigate them. Who knows maybe we could even go full-hurry seldom and predict potential hazards before they happen? Let's see how. To start out we need three things. First we need to simulate ice fracturing. Here is a related earlier work. However, this is on snow. Ice is different. That is going to be a challenge. Two, we need to simulate the ocean. And three, simulate how the two react to each other. Wow, that is going to be quite a challenge because capturing all of these really accurately requires multiple different algorithms. You may remember from this previous work how difficult it is to marry two simulation algorithms. Believe it or not, this is not one but two simulations, one inside the box and one outside. To make all this happen plenty of work had to be done in the transition zones. So this one for ice fractures might be even more challenging. And you may rest assured that we will not let this paper go until we see my favorite thing in all simulation research, which is of course comparing the simulation results to real world footage. For instance, the results would have to agree with this earlier lab experiment by Heller and colleagues that measures how the ocean reacts to a huge block of ice falling into it. Now, hold on for a second. We can't just say that it falls into the ocean. There are multiple kinds of falling into the ocean. For instance, it can either happen due to gravity or to buoyancy or capsizing. So we have two questions. Question number one, does this matter? Well, let's have a look. Oh yes, it does matter a great deal. The generated waves look quite different. Now, here comes the most exciting part. Question number two, do Dr. Wolper simulations agree with this real lab experiment? To start out, we wish to see three experiments. One for gravity, the color coding goes from colder to warmer colors as the velocity of the waves increases. We also have one simulation for buoyancy, and one for capsizing. 
We could say they look excellent, but we can't say that, because we don't yet know how this experiment relates to the lab experiment. Before we compare the two, let's also add one more variable. Theory. We expect that the simulations match the theory nearly perfectly, and more or less match the lab experiment. Why only more or less, why not perfectly? Because it is hard to reproduce the exact forces, geometries and materials that were used in the experiment. Now, let's see: the solid lines follow the dashed lines very well. This means that the simulation follows the theory nearly perfectly. For the simulation, this plot is a little easier to read and shows that the lab experiment is within the error limits of the simulation. Now, at this point, yes, it is justified to say that this is excellent work. Now, let's ramp up the complexity of these simulations and hopefully give it a hard time. Look, now we're talking. Real icebergs, real calving. The paper also shows plots that compare this experiment to the theoretical results and found good agreement there too. Very good. Now, if it can deal with this, hold on to your papers and let's bring forth the final boss. Eqip Sermia. Well, what is that? This was a real glacier fracturing event in Greenland that involved 800,000 metric tons of ice. And at this point, I said, I am out. There are just too many variables, too many unknowns, too complex a situation to get meaningful results. 800,000 metric tons, you can't possibly reproduce this with a simulation. Well, if you have been holding on to your paper so far, now squeeze that paper and watch this. This is the reproduction, and even better, we have measured data about wave amplitudes, average wave speed, and iceberg sizes involved in this event, and get this. This simulation is able to reproduce all of these accurately. Wow! And we are still not done yet. It can also produce full 3D simulations, which require the interplay of tens of millions of particles and can create beautiful footage like this. This not only looks beautiful, but it is useful, too. Look, we can even assemble a scene that reenacts what would happen if we were sitting in a boat nearby. Spoiler alert, it's not fun. So, there we go. Some of these computer graphics simulations are so accurate they can come out to the real world and even teach us new things. Reading papers makes me very, very happy and this was no exception. I had a fantastic time reading this paper. If you wish to have a great time, too, make sure to check it out in the video description. PerceptiLabs is a visual API for TensorFlow carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers to easily install the free local version of their system today. Our thanks to PerceptiLabs for their support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to find out whether a machine can really think like we humans think. The answer is yes, and no. Let me try to explain. Obviously, there are many differences between how we think, but first, let's try to argue for the similarities. First, this neural network looks at an image and tries to decide whether it depicts a dog or not. What it does is that it slices up the image into small pieces and keeps a score on what it had seen in these snippets. Floppy ears, black snout, fur, okay, we're good. We can conclude that we have a dog over here. We humans would also have a hard time identifying a dog without these landmarks, so plus one for thinking the same way. Second, this is DeepMind's deep reinforcement learning algorithm. It looks at the screen much like a human would and tries to learn what the controls do and what the game is about as the game is running. And much like a human, first it has no idea what is going on and loses all of its lives almost immediately. But, over time, it gets a bit of a feel for the game. Improvements, good. But, if we wait for longer, it sharpens its skill set so much that, look, it found out that the best way to beat the game is to dig a tunnel through the blocks and just kick back and enjoy the show. Human-like, excellent. Again, plus one for thinking the same way. Now, let's look at this new permutation-invariant neural network and see what the name means, what it can do, and how it relates to our thinking. Experiment number one, permutations. A permutation means shuffling things around. And we will add shuffling into this cart-pole balancing experiment. Here, the learning algorithm does not look at the pixels of the game, but instead takes a look at numbers, for instance, angles, velocity and position. And, as you see, with those, it learned to balance the pole super quickly. The permutation part means that we shuffle this information every now and then. That will surely confuse it, so let's try that. There we go, and... Nice! Didn't even lose it. Shuffle again. It lost it due to a sudden change in the incoming data, but, look, it recovered rapidly. And, can it keep it upright? Yes, it can. So, is this a plus one or a minus one? Is this human thinking or robot thinking? Well, over time, humans can get used to input information switching around too. But, not this quickly. So, this one is debatable. However, I guarantee that the next experiment will not be debatable at all. Now, experiment number two, reshuffling on steroids. We already learned that some amount of reshuffling is okay. So, now, let's have our little AI play Pong, but with a twist, because this time, the reshuffling is getting real. Yes, we now broke up the screen into small little blocks, and have reshuffled it to the point that it is impossible to read. But, you know what? Let's make it even worse. Instead of just reshuffling, we will reshuffle the reshuffling. What does that mean? We can rearrange these tiles every few seconds. A true nightmare situation for even an established algorithm, and especially when we are learning the game. Okay, this is nonsense, right? There is no way anyone can meaningfully play the game from this noise, right? And now, hold on to your papers, because the learning algorithm still works fine. Just fine. Not only on Pong, but on a racing game too. Whoa! A big minus one for human thinking. But, if it works fine, you know exactly what needs to be done.
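(The reason shuffling barely matters can be seen in a few lines: every observation component goes through the same shared "sensory" network, and the per-component outputs are pooled with an order-independent, attention-weighted sum. The sketch below is a stripped-down illustration of that idea, not the paper's exact architecture, and the sizes are made up for the example.)

```python
import torch
import torch.nn as nn

FEAT = 16
sensory = nn.Sequential(nn.Linear(1, FEAT), nn.Tanh())   # the SAME network for every component
score = nn.Linear(FEAT, 1)                                # produces attention weights

def permutation_invariant_encode(obs):
    # obs: (n_components,) -- e.g. the shuffled cart-pole readings
    h = sensory(obs.unsqueeze(1))                  # (n, FEAT), shared weights for every component
    w = torch.softmax(score(h), dim=0)             # (n, 1), order-independent attention weights
    return (w * h).sum(dim=0)                      # pooled code: identical under any shuffle

obs = torch.randn(5)
perm = torch.randperm(5)
code_a = permutation_invariant_encode(obs)
code_b = permutation_invariant_encode(obs[perm])   # the same observations, shuffled
print("max difference after shuffling:", float((code_a - code_b).abs().max()))  # ~0
```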
Yes, let's make it even harder. Experiment number three, stolen blocks. Yes, let's keep reshuffling, change the reshuffling over time, and also steal 70% of the data. And... Wow! It is still fine. It only sees 30% of the game, all jumbled up, and it still plays just fine. I cannot believe what I am seeing here. Another minus one. This does not seem to think like a human would. So all that is absolutely amazing. But, what is it looking at? Aha! See the white blocks? It is looking at the sides of the road, likely to know what the curvature is, and how to drive it. And, look, only occasionally does it peep at the green patches too. So, does this mean what I think it means? Experiment number four. If you have been holding onto your paper so far, now squeeze that paper: we keep the shuffling, and let's shovel in some additional useless complexity, which will take the form of this background. And... My goodness! It still works just fine, and the minus ones just keep on coming. So, this was quite a ride. But, what is the conclusion here? Well, learning algorithms show some ways in which they think like we think, but the answer is no, do not think of a neural network or a reinforcement learner as a digital copy of the brain. Not even close. Now, even better, this is not just a fantastic thought experiment, all this has utility. For instance, in his lecture, one of the authors, David Ha, notes that humans can also be given upside-down goggles, or bicycles where the left and right directions are flipped. And, if they do, it takes a great deal of time for the human to adapt. For the neural network, no issues whatsoever. What a time to be alive! This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And, hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to see an incredible piece of progress in research works on making our photos come to life. How do we do that? Well, of course, through view synthesis. To be more exact, we do that through this amazing technique that is referred to as a NeRF variant, which means that it is a learning-based algorithm that tries to reproduce real-world scenes from only a few views. In goes a few photos of a scene, and it has to be able to synthesize new photorealistic images in between these photos. This is view synthesis in short. As you see here, it can be done quite well with the previous method. So, our question number one is, why bother publishing a new research paper on this? And question number two, there are plenty of view synthesis papers sloshing around, so why choose this paper? Well, this new technique is called DONeRF, NeRF via depth oracles. One of the key contributions here is that it is better at predicting how far things are from our camera, and it also takes less time to evaluate than its predecessors. It still makes thin structures a little murky, look at the tree here, but otherwise the rest seems close to the reference. So, is there value in this? Let's look at the results and see for ourselves. We will compare to the original NeRF technique first, and the results are comparable. What is going on? Is this not supposed to be better? Do these depth oracles help at all? Why is this just comparable to NeRF? Now, hold on to your papers, because here comes the key. The output of the two techniques may be comparable, but the input isn't. What does that mean? It means that the new technique was given 50 times less information. Whoa! 50 times less. That's barely anything. And when I read the paper, this was the point where I immediately went from a little disappointed to stunned. 50 times less information, and it can still create comparable videos. Yes, the answer is yes, the depth oracles really work. And it does not end there, there's more. It was also compared against local light field fusion, which is from two years ago, and the results are much cleaner. And it is also compared against... What? This is the neural basis expansion technique, NeX in short. We just showcased it approximately two months ago, and not only did an excellent follow-up paper appear in the same year, just a few months apart, but it already compares to the previous work, and it outperforms it handily. My goodness, the pace of progress in machine learning research never disappoints. And here comes the best part. If you have been holding onto your paper so far, now squeeze that paper, because the new technique not only requires 50 times less information, no, no, it is also nearly 50 times cheaper and faster to train the new neural network and to create the new images. So, let's pop the question. Is it real time? Yes, all this runs in real time. Look, if we wish to walk around in a photorealistic virtual scene, normally we would have to write a light simulation program and compute every single image separately, where each image would take several minutes to finish. Now we just need to shoot a few rays, and the neural network will try to understand the data, and give us the rest instantly. What a time to be alive! And interestingly, look, the first author of this paper is Thomas Neff, who wrote this variant on NeRF. Nomen est omen, I guess. Congratulations!
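(The core trick can be sketched very simply: instead of evaluating the network at a dense set of samples along every ray, a small depth oracle predicts roughly where the surface is, and only a handful of samples are placed around that depth. The oracle below is just a placeholder value, not a trained network, and the sample counts are illustrative.)

```python
import numpy as np

near, far = 0.5, 8.0

def dense_samples(n=128):
    # Classic NeRF-style sampling: evaluate the network many times along the ray.
    return np.linspace(near, far, n)

def oracle_guided_samples(predicted_depth, n=8, spread=0.3):
    # A few samples packed around the predicted surface depth.
    return np.clip(np.linspace(predicted_depth - spread,
                               predicted_depth + spread, n), near, far)

predicted_depth = 3.2   # what the depth oracle network would output for this ray
d = dense_samples()
s = oracle_guided_samples(predicted_depth)
print(f"dense sampling: {d.size} network evaluations per ray")
print(f"oracle-guided sampling: {s.size} network evaluations per ray "
      f"({d.size / s.size:.0f}x fewer), placed at {np.round(s, 2)}")
```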
And if you wish to discuss this paper, make sure to drop by on our Discord server. The link is available in the video description. This episode has been supported by Weights & Biases. In this post, they show you how to use their tool to check and visualize what your neural network is learning, and even more importantly, a case study on how to find bugs in your system and fix them. During my PhD studies, I trained a ton of neural networks, which were used in our experiments. However, over time, there was just too much data in our repositories, and what I am looking for is not data, but insight. And that's exactly how Weights & Biases helps you, by organizing your experiments. It is used by more than 200 companies and research institutions, including OpenAI, Toyota Research, GitHub, and more. And get this, Weights & Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support, and for helping us make better videos for you. Thanks for watching, and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today we are going to put a fluid simulation in another fluid simulation, and with that create beautiful videos like this one. But how? And more importantly, why? Well, let's take two fluid simulation techniques. One will be the fluid implicit particle method, FLIP in short, and the other will be the boundary element method, BEM. So, why do we need two, or perhaps even more methods? Why not just one? Well, first, if we wish to simulate turbulence and high-frequency splashes near moving objects, FLIP is the answer. It is great at exactly that. However, it is not great at simulating big volumes of water. No matter, because there are other methods to handle that, for instance, the boundary element method that we just mentioned, BEM in short. It is great in these cases because the BEM variant that's been used in this paper simulates only the surface of the liquid, and for a large ocean, the surface is much, much smaller than the volume. Let's have a look at an example. Here is a pure FLIP simulation. This should be great for small splashes. Yes, that is indeed true, but look, the waves then disappear quickly. Let's look at what BEM does with this scene. Yes, the details are lacking, but the waves are lovely. Now, we have a good feel of the limitations of these techniques: small splashes, FLIP; oceans, BEM. But here is the problem. What if we have a scene where we have both? Which one should we use? This new technique says, well, use both. What is this insanity? Now, hold on to your papers and look. This is the result of using the two simulation techniques together. Now, if you look carefully, yes, you guessed it right. Within the box, there is a FLIP simulation, and outside of the boxes, there is a BEM simulation. And the two are fused together in a way that the new method really takes the best of both worlds. Just look at all that detail. What a beautiful simulation. My goodness. Now, this is not nearly as easy as slapping together two simulation domains. Look, there is plenty of work to be done in the transition zone. And also, how accurate is this? Here is the reference simulation for the water droplet scene from earlier. This simulation would take forever for a big scene, and is here for us to know how it should look. And let's see how close the new method is to it. Whoa! Now we're talking. Now, worry not about the seams. They are there for us to see the internal workings of the algorithm. The final composition will look like this. However, not even this technique is perfect, and here is a potential limitation. Look, here is the new method compared to the reference footage. The waves in the wake of the ship are simulated really well, and so are the waves further away at the same time. That's amazing. However, what we don't get is, look, crisp details in the BEM regions. Those are gone. But just compare the results to this technique from a few years ago and get a feel of how far we have come just a couple of papers down the line. The pace of progress in computer graphics research is through the roof, loving it. What a time to be alive. And let's see, yes, the first author is Libo Huang. Again, if you are a seasoned fellow scholar, you may remember our video on his first paper on ferrofluids. And this is his third one. This man writes nothing but masterpieces.
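As an aside, here is a toy Python sketch of the domain-coupling idea described above: one detailed solver handles the splashy region, a cheaper surface-only solver handles the vast ocean, and the two are blended in a transition zone. Everything here is a placeholder, including the solver functions and the linear blend; the paper's actual coupling scheme is far more involved.

```python
# Toy sketch of coupling two fluid solvers over one scene (illustrative only;
# flip_step, bem_step and the blending weight are placeholders, not the
# authors' actual scheme).

def flip_step(state):
    # Detailed particle-based solve for splashes near the boat (placeholder).
    return state

def bem_step(state):
    # Surface-only solve for the large open ocean (placeholder).
    return state

def blend(inner, outer, w):
    # Linear blend in the transition zone between the two domains.
    return {key: w * inner[key] + (1.0 - w) * outer[key] for key in inner}

def coupled_step(inner_state, outer_state, transition_weight=0.5):
    inner_state = flip_step(inner_state)   # small, splashy region
    outer_state = bem_step(outer_state)    # vast, smooth ocean
    seam = blend(inner_state, outer_state, transition_weight)
    return inner_state, outer_state, seam

inner = {"height": 0.1}
outer = {"height": 0.0}
print(coupled_step(inner, outer))
```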
As a result, this paper has been accepted to the SIGGRAPH Asia conference, and being a first author there is perhaps the computer graphics equivalent of winning the Olympic gold medal. It is also beautifully written, so make sure to check it out in the video description. Huge congratulations on this amazing work. I cannot believe that we are still progressing so quickly year after year. PerceptiLabs is a visual API for TensorFlow carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers to easily install the free local version of their system today. Our thanks to PerceptiLabs for their support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today we are going to build the best virtual toaster that you have ever seen. And it's going to be so easy that it hardly seems possible. All this technique asks for is our input geometry to be a collection of parametric shapes. I'll tell you in a moment what that is, but for now let's see the toaster. Hmm, this looks fine, but what if we feel that it is not quite tall enough? Well, with this technique, look, it can change the height of it, which is fantastic, but something else also happens. Look, it also understands the body's relation to the other objects that are connected to it. We can also change the location of the handle, the slits can be adjusted symmetrically, and when we move the toaster, it understands that it moves together with the handles. This is super useful. For instance, have a look at this training example where, if we change the wheels, it also understands that not only the wheels, but the wheel wells have to change as well. This concept also works really well on this curtain. And all this means that we can not only dream up and execute these changes ourselves without having to ask a trained artist, but we can also do it super quickly and efficiently. Loving it. Just to demonstrate how profound and non-trivial this understanding of interrelations is, here is an example of a complex object. Without this technique, if we grab one thing, exactly that one thing moves, which is represented by one of these sliders changing here. However, if we would grab this contraption in the real world, not only one thing would move, nearly every part would move at the same time. So, does this new method know that? Oh wow, it does! Look at that! And at the same time, not one, but many sliders are dancing around beautifully. Now, I mentioned that the requirement was that the input object has to be a parametric shape. This is something that can be generated from intuitive parameters. For instance, we can generate a circle if we say what the radius of the circle should be. The radius would be the parameter here, and the resulting circle is hence a parametric shape. In many domains, this is standard procedure, for instance, many computer-aided design systems work with parametric objects. But we are not done yet, not even close. It also understands how the brush size that we use relates to our thinking. Don't believe it? Let's have a look together. Right after we click, it detects that we have a small brush size, and therefore infers that we probably wish to do something with the handle, and there we go! That is really cool! And now, let's increase the brush size, and click nearby, and bam! There we go! Now it knows that we wish to interact with the drawer. Same with the doors. And hold on to your papers, because to demonstrate the utility of their technique, the authors also made a scene just for us. Look! Nice! This is no less than a Two Minute Papers branded chronometer. And here we can change the proportions, the dial, the hands, whatever we wish, and it is so easy and showcases the utility of the method so well. Now, I know what you are thinking, let's see it ticking! And will it be two minutes? Well, close enough. Certainly much closer to two minutes than the length of these videos, that is for sure. Thank you so much to the authors for taking time out of their busy day just to make this.
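To make the parametric shape idea mentioned above concrete, here is a tiny Python example: the radius is the only parameter, and the whole shape is regenerated from it. Function and variable names are just for illustration.

```python
import math

def parametric_circle(radius, n_points=8):
    # The radius is the parameter; the point list is the resulting shape.
    return [(radius * math.cos(2 * math.pi * i / n_points),
             radius * math.sin(2 * math.pi * i / n_points))
            for i in range(n_points)]

# Changing one parameter regenerates the whole shape consistently, which is
# what lets a tool propagate an edit like "make the toaster a bit taller"
# to everything that depends on it.
print(parametric_circle(1.0)[:2])
print(parametric_circle(2.0)[:2])
```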
So, this is a truly wonderful tool, because even a novice artist, without 3D modeling expertise, can apply meaningful changes to complex pieces of geometry. No trained artist is required. What a time to be alive! Now, this didn't quite fit anywhere in this video, but I really wanted to show you this heartwarming message from Mark Chen, a research scientist at OpenAI. This really showcases one of the best parts of my job, and that is when the authors of the paper come in and enjoy the results with you fellow scholars. Loving it! Thank you so much again! Also, make sure to check out Yannic's channel for cool, in-depth videos on machine learning works. The link is available in the video description. This video has been supported by weights and biases. Check out the recent offering, Fully Connected, a place where they bring machine learning practitioners together to share and discuss their ideas, learn from industry leaders, and even collaborate on projects together. You see, I get messages from you fellow scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is! Fully Connected is a great way to learn about the fundamentals, how to reproduce experiments, get your papers accepted to a conference, and more. Make sure to visit them through wandb.me slash papers, or just click the link in the video description. Our thanks to weights and biases for their longstanding support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time!
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today, we are going to see if a virtual AI character can learn or perhaps even invent these amazing signature moves. And this is a paper that was written by Sebastian Starke and his colleagues. He is a recurring scientist on this series; for instance, earlier he wrote this magnificent paper about dribbling AI characters. Look, the key challenge here was that we were given only three hours of unstructured motion capture data. That is next to nothing, and from this next to nothing, it not only learned these motions really well, but it could weave them together even when a specific movement combination was not present in this training data. But as these motions are created by human animators, they may show at least three problems. One, the training data may contain poses that don't quite adhere to the physics of a real human character. Two, it is possible that the upper body does something that makes sense, the lower body also does something that makes sense, but the whole thing put together does not make too much sense anymore. Or three, we may have these foot sliding artifacts that you see here. These are more common than you might first think. Here is an example of it from a previous work. And look, nearly all of the previous methods struggle with it. Now, this new work uses 20 hours of unstructured training data. Remember, the previous one only used three, so we rightfully expect that by using more information it can also learn more. But the previous work was already amazing, so what more can we really expect this new one to do? Well, it can not only learn these motions and weave them together like previous works, but hold on to your papers, because it can now also come up with novel moves as well. Wow! This includes new attacking sequences and combining already existing attacks with novel footwork patterns, and it does all this spectacularly well. For instance, if we show it how to have its guard up and how to throw a punch, what will it learn? Get this, it will keep its guard up while throwing that punch. And it not only does that in a realistic, fluid movement pattern, but it also found out about something that has strategic value. Same with evading an attack with some head movement and counterattacking. Loving it. But how easy is it to use this? Do we need to be an AI scientist to be able to invoke these amazing motions? Well, if you have been holding on to your papers so far, now squeeze that paper and look here. Wow! You don't need to be an AI scientist to play with this, not at all. All you need is a controller to invoke these beautiful motions, and all this runs in real time. My goodness! For instance, you can crouch down and evade a potential attack by controlling the right stick and launch a punch in the meantime. And remember, not only do both halves have to make sense separately, the body motion has to make sense as a whole. And it really does. Look at that! And here comes the best part, you can even assemble your own signature attacks, for instance, perform that surprise spinning backfist, or an amazing spin kick. And yes, you can even go full Karate Kid with that crane kick. And as a cherry on top, the characters can also react to the strikes, clinch, or even try a takedown. So with that, there we go. We are, again, one step closer to having access to super realistic motion techniques for virtual characters, and all we need for this is a controller. And remember, all this already runs in real time.
Another amazing SIGGRAPH paper from Sebastian Starke. And get this, he is currently a fourth-year PhD student, and has already made profound contributions to the industry. Huge congratulations on this amazing achievement. What a time to be alive! PerceptiLabs is a visual API for TensorFlow carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables, and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers to easily install the free local version of their system today. Our thanks to PerceptiLabs for their support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today we are going to see how an AI can learn crazy stunts from just one video clip. And if even that's not enough, it can do even more. This agent is embedded in a physics simulation and first it looks at a piece of reference motion like this one. And then, after looking, it can reproduce it. That is already pretty cool, but it doesn't stop there. I think you know what's coming? Yes, not only learning, but improving the original motion. Look, it can refine this motion a bit, and then a bit more, and then some more. And this just keeps on going until... Wait a second, hold on to your papers because this looks impossible. Are you trying to tell me that it improved the move so much that it can jump through this? Yes, yes it does. Here is the first reproduction of the jump motion and the improved version side by side. Whoa! The difference speaks for itself. Absolutely amazing. We can also give it this reference clip to teach it to jump from one box to another. This isn't too difficult. And now comes one of my favorites from the paper, and that is testing how much it can improve upon this technique. Let's give it a try. It also learned how to perform a shorter jump, a longer jump, and now, oh yes, the final boss. Wow, it could even pull off this super long jump. It seems that this superbot can do absolutely anything. Well, almost. And it can not only learn these amazing moves, but it can also weave them together so well that we can build a cool little playground and it gets through it with ease. Well, most of it anyway. So, at this point, I was wondering how general the knowledge is that it learns from these example clips. A good sign of an intelligent actor is that things can change a little and it can adapt to that. Now, it clearly can deal with a changing environment, that is fantastic, but do you know what else it can deal with? And now, if you have been holding onto your papers, squeeze that paper because it can also deal with changing body proportions. Yes, really. We can put it in a different body and it will still work. This chap is cursed with this crazy configuration and can still pull off a cartwheel. If you haven't been exercising lately, what's your excuse now? We can also ask it to perform the same task with more or less energy, or to even apply just a tiny bit of force for a punch, or to go full Mike Tyson on the opponent. So, how is all this wizardry possible? Well, one of the key contributions of this work is that the authors devised a method to search the space of motions efficiently. Since it does it in a continuous reinforcement learning environment, this is super challenging. At the risk of simplifying the solution, their method solves this by running both an exploration phase to find new ways of pulling off a move, and, shown with blue, an exploitation phase where, when it has found something that seems to work, it keeps refining it. Similar endeavors are also referred to as the exploration-exploitation problem, and the authors proposed a really cool new way of handling it. Now, there are plenty more contributions in the paper, so make sure to have a look at it in the video description. Especially given that this is a fantastic paper and the presentation is second to none. I am sure that the authors could have worked half as much on this project and this paper would still have been accepted, but they still decided to put in that extra mile.
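Here is a toy Python sketch of the exploration-versus-exploitation idea mentioned above. It is deliberately simplistic: the paper works with full-body motions in a continuous reinforcement learning setting, while this sketch optimizes a single made-up scalar (a jump length) with a made-up reward, just to show the alternation between trying something new and refining what already works.

```python
import random

def reward(jump_length):
    # Hypothetical reward: longer jumps score higher, up to a physical limit.
    return jump_length if jump_length <= 3.0 else 0.0

best = 1.0  # the jump as learned from the reference clip
for step in range(200):
    if random.random() < 0.3:                   # explore: try something new
        candidate = best + random.uniform(-0.5, 0.5)
    else:                                       # exploit: refine what works
        candidate = best + random.uniform(-0.05, 0.05)
    if reward(candidate) > reward(best):
        best = candidate

print(f"refined jump length: {best:.2f}")
```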
And I am honored to be able to celebrate their amazing work together with you fellow scholars. And for now, an AI agent can look at a single clip of a motion and can not only perform it, but it can make it better, pull it off in different environments, and it can even be put in a different body and still do it well. What a time to be alive! This video has been supported by weights and biases. Being a machine learning researcher means doing tons of experiments and, of course, creating tons of data. But I am not looking for data, I am looking for insights. And weights and biases helps with exactly that. They have tools for experiment tracking, data set and model versioning, and even hyperparameter optimization. No wonder this is the experiment tracking tool of choice of OpenAI, Toyota Research, Samsung, and many more prestigious labs. Make sure to use the link wandb.me slash paper-intro or just click the link in the video description and try this 10-minute example of weights and biases today to experience the wonderful feeling of training a neural network and being in control of your experiments. After you try it, you won't want to go back. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today we are going to see how crazy good NVIDIA's new system is at simplifying virtual objects. These objects are used to create photorealistic footage for feature-length movies, virtual worlds and more. But here comes the problem. Sometimes these geometries are so detailed, they are prohibitively expensive to store and render efficiently. Here are some examples from one of our papers that were quite challenging to iterate on and render. This took several minutes to render and always ate all the memory in my computer. So, what can we do if we would still like to get crisp, high-quality geometry, but cheaper and quicker? I'll show you in a moment. This is part of a super complex scene. Get this. It is so complex that it takes nearly 100GB of storage space to render just one image of this and is typically used for benchmarking rendering algorithms. This is the Nürburgring of light transport algorithms, if you will. Well, hold on to your papers because I said that I'll show you in a moment what we can do to get all this at a more affordable cost, but in fact, you are looking at the results of the new method right now. Yes, parts of this image are the original geometry and other parts have already been simplified. So, which is which? Do you see the difference? Please stop the video and let me know in the comments below. I'll wait. Thank you. So, let's see together. Yes, this is the original geometry that requires over 5 billion triangles. And this is the simplified one, which... What? Can this really be? This uses less than 1% of the number of triangles compared to this. In fact, it's less than half a percent. That is insanity. This really means that about every 200 triangles are replaced with just one triangle and it still looks mostly the same. That sounds flat out impossible to me. Wow! So, how does this witchcraft even work? Well, now you see, this is the power of differentiable rendering. The problem formulation is as follows. We tell the algorithm that here are the results that you need to get, find the geometry and material properties that resulted in this. It runs all this by means of optimization, which means that it will have a really crude initial guess that doesn't even seem to resemble the target geometry. But then, over time, it starts refining it and it gets closer and closer to the reference. This process is truly a sight to behold. Look at how beautifully it is approximating the target geometry. This looks very close and is much cheaper to store and render. I loved this example too. Previously, this differentiable rendering concept has been used to take a photograph and find a photorealistic material model that matches it, which we can then put into our simulation program. This work did very well with materials, but it did not capture the geometry. This other work did something similar to this new paper, which means that it jointly found the geometry and material properties. But, as you see, high-frequency details were not as good as with this one. You see here, these details are gone. And now, just two years and one paper later, we can get a piece of geometry that is so detailed that it needs billions of triangles, and it can be simplified 200 to 1. Now, if even that is not enough, admittedly, it is still a little rudimentary, but it even works for animated characters. I wonder where we will be two more papers down the line from here. And for now, wow!
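As a side note, here is a tiny Python sketch of the optimization loop behind differentiable rendering as described above: start from a crude guess, render it, compare against the reference, and follow the gradient. The real system optimizes mesh and material parameters through an actual differentiable renderer; here the "renderer" is just a placeholder so the loop stays a few lines, and the three "pixels" are made up.

```python
import numpy as np

target = np.array([0.2, 0.8, 0.5])           # reference image (3 toy pixels)

def render(params):
    # Placeholder for a differentiable renderer.
    return np.clip(params, 0.0, 1.0)

params = np.random.rand(3)                    # crude initial guess
learning_rate = 0.1
for iteration in range(200):
    image = render(params)
    grad = 2.0 * (image - target)             # gradient of the L2 image loss
    params -= learning_rate * grad            # refine the guess a little

print(np.round(render(params), 3))            # now very close to the reference
```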
Scientists at NVIDIA knocked it out of the park with this one. Huge congratulations to the team! What a time to be alive! So, there you go! This was quite a ride and I hope you enjoyed it at least half as much as I did. And if you enjoyed it at least as much as I did, and are thinking that this light transport thing is pretty cool and you would like to learn more about it, I held a master-level course on this topic at the Technical University of Vienna. Since I was always teaching it to a handful of motivated students, I thought that the teachings shouldn't only be available for the privileged few who can afford a college education. No, no. The teachings should be available for everyone. Free education for everyone, that's what I want. So, the course is available free of charge for everyone, no strings attached, so make sure to click the link in the video description to get started. We write a full light simulation program from scratch there and learn about physics, the world around us, and more. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. I have to say we haven't had a simulation paper in a while, so today's episode is going to be my way of medicating myself. You are more than welcome to watch the process. And this paper is about performing collision detection. You see, when we write a simple space game, detecting whether a collision has happened or not is mostly a trivial endeavor. However, now, instead, let's look at the kind of simulation complexity that you are expecting from a Two Minute Papers video. First, let's try to screw this bolt in using an industry-standard simulation system. And it is... stuck. Hmm... Why? Because here, we would need to simulate in detail not only whether two things collide, they collide all the time, but we need to check for and simulate friction too. Let's see what this new simulation method does with the same scene. And... Oh, yes! This one isn't screwing with us and does the job perfectly. Excellent! However, this was not nearly the most complex thing it can do. Let's try some crazy geometry with crazy movements and tons of friction. There we go. This one will do. Welcome to the expanding lock box experiment. So, what is this? Look, as we turn the key, the locking pins retract, and the bottom is now allowed to fall. This scene contains tens to hundreds of thousands of contacts, and yet it still works perfectly. Beautiful! I love this one because with this simulation, we can test intricate mechanisms for robotics and more before committing to manufacturing anything. And, unlike with previous methods, we don't need to worry whether the simulation is correct or not, and we can be sure that if we 3D print this, it will behave exactly this way. So good! Also, here come some of my favorite experiments from the paper. For instance, it can also simulate a piston attached to a rotating disk, smooth motion on one wheel leading to intermittent motion on the other one. And, if you feel the urge to build a virtual bike, don't worry for a second, because your chain and sprocket mechanisms will work exactly as you expect them to. Loving it! Now, interestingly, look here. The time step size used with the new technique is a hundred times bigger, which is great. We can advance the time in bigger pieces when computing the simulation. That is good news indeed. However, every time we do so, we still have to compute a great deal more. The resulting computation time is still at least a hundred times slower than previous methods. However, those methods don't count, at least not on these scenes, because they produced incorrect results. Looked at another way, this is the fastest simulator that actually works. Still, it is not that slow. The one with the intermittent motion takes less than a second per time step, which likely means a few seconds per frame, while the bolt-screwing scene is likely in the minutes-per-frame domain. Very impressive! And, if you're a seasoned fellow scholar, you know what's coming. This is where we invoke the First Law of Papers, which says that research is a process. Do not look at where we are, look at where we will be two more papers down the line. And, two more papers down the line, I am sure even the more complex simulations will be done in a matter of seconds. What a time to be alive!
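A quick back-of-envelope check of the trade-off described above, using only the rough factors quoted in the episode (not measured numbers): if the new method advances time in steps that are 100 times bigger, it needs roughly 100 times fewer steps, so for the total runtime to still come out about 100 times slower, each individual step must cost on the order of 10,000 times more work.

```python
# Back-of-envelope check of the trade-off described above (the factors are the
# rough numbers quoted in the episode, not measurements).

step_size_ratio  = 100    # time steps are ~100x bigger, so ~100x fewer steps
total_time_ratio = 100    # ...yet the whole simulation is still ~100x slower

# Fewer steps but a slower total means each step is roughly this much pricier:
per_step_cost_ratio = step_size_ratio * total_time_ratio
print(per_step_cost_ratio)   # ~10,000x more work per time step
```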
This episode has been supported by weights and biases. In this post, they show you how to use their tool to visualize confusion matrices and find out where your neural network made mistakes, and what exactly those mistakes were. Weights and biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs, such as OpenAI, Toyota Research, GitHub, and more. And the best part is that weights and biases is free for all individuals, academics, and open-source projects. It really is as good as it gets. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to weights and biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zona Ifehir. Today we are going to look at a paper with two twists. You know what? I'll give you a twist number one right away. This human isn't here. This human isn't here either. And neither is this human here. Now you're probably asking Karo what are you even talking about? Now hold on to your papers because I am talking about this. Look, this is the geometry of this virtual human inside a virtual world. Whoa! Yes, all of these people are in a synthetic video and in a virtual environment that can be changed with a simple click. And more importantly, as we change the lighting or the environment, it also simulates the effect of that environment on the character making it look like they are really there. So, that sounds good, but how do we take a human and make a digital copy of them? Well, first we place them in a capture system that contains hundreds of LED lights and an elaborate sensor for capturing depth information. Why do we need these? Well, all this gives the system plenty of data on how the skin, hair and the clothes reflect light. And, at this point, we know everything we need to know and can now proceed and place our virtual copy in a computer game or even a telepresence meeting. Now, this is already amazing, but two things really stick out here. One, you will see when you look at this previous competing work. This had a really smooth output geometry, which means that only few high frequency details were retained. This other work was better at retaining the details, but look, tons of artifacts appear when the model is moving. And, what does the new one look like? Is it any better? Let's have a look. Oh my, we get tons of fine details and the movements have improved significantly. Not perfect by any means, look here, but still, this is an amazing leap forward. Two, the other remarkable thing here is that the results are so realistic that objects in the virtual scene can cast a shadow on our model. What a time to be alive. And now, yes, you remember that I promised two twists. So, where is twist number two? Well, it has been here all along for the entirety of this video. Have you noticed? Look, all this is from 2019, from two years ago. Two years is a long time in machine learning and computer graphics research, and I cannot wait to see how it will be improved two more papers down the line. If you are excited too, make sure to subscribe and hit the bell icon to not miss it when it appears. This video has been supported by weights and biases. They have an amazing podcast by the name, Gradient Descent, where they interview machine learning experts who discuss how they use learning based algorithms to solve real world problems. They have discussed biology, teaching robots, machine learning in outer space, and a whole lot more. Perfect for a fellow scholar with an open mind. Make sure to visit them through wmb.me slash gd or just click the link in the video description. Our thanks to weights and biases for their long standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Testing modern computer games by using an AI is getting more and more popular these days. This earlier work showcased how we can use an automated agent to test the integrity of the game by finding spots where we can get stuck. And when we fixed the problem, we could easily ask the agent to check whether the fix really worked. In this case, it did. And this new work also uses learning algorithms to test our levels. Now, this chap has been trained on a fixed level, mastered it, and let's see if it has managed to obtain general knowledge from it. How? Well, by testing how it performs on a different level. It is very confident, good, but... oh oh, as you see, it is confidently incorrect. So, is it possible to train an agent to be able to beat these levels more reliably? Well, how about creating a more elaborate curriculum for them to learn on? Yes, let's do that. But with a twist. In this work, the authors chose not to feed the AI a fixed set of levels. No, no, they created another AI that builds the levels for the player AI. So, both the builder and the player are learning algorithms who are tasked to succeed together in getting the agent to the finish line. They have to collaborate to succeed. Building the level means choosing the appropriate distance, height, angle, and size for these blocks. Let's see them playing together on an easy level. Okay, so far so good. But let's not let them build a little cartel where only easy levels are being generated so that they get a higher score. I want to see a challenge. To do that, let's force the builder AI to use a larger average distance between the blocks, thereby creating levels of a prescribed difficulty. And with that, let's ramp up the difficulty a little. Things get a little more interesting here because... whoa! Do you see what I see here? Look! It even found a shortcut to the end of the level. And let's see the harder levels together. While many of these chaps failed, some of them are still able to succeed. Very cool. Let's compare the performance of the new technique with the previous fixed-track agent. This is the chap that learned by mastering only a fixed track. And this one learned in the wilderness. Neither of them has seen these levels before. So, who is going to fare better? Of course, the wilderness guy, the one described in the new technique. Excellent! So, all this sounds great, but I hear you asking the key question here. What do we use this for? Well, one, the player AI can test the levels that we are building for our game and give us feedback on whether it is possible to finish, whether it is too hard or too easy, and more. This can be a godsend when updating some levels because the agent will almost immediately tell us whether it has gotten easier or harder, or if we have broken the level. No human testing is required. Now, hold on to your papers because the thing runs so quickly that we can even refine a level in real time. Loving it. Or, two, the builder can also be given to a human player who might enjoy a level being built in real time in front of them. And here comes the best part. The whole concept generalizes well for other kinds of games, too. Look, the builder can build racetracks and the player can try to drive through them.
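Here is a toy Python sketch of the builder/player loop described above. Both "agents" are trivial stand-ins (the real paper trains neural policies for both), and the prescribed-difficulty knob is modeled as a minimum gap size, purely for illustration.

```python
import random

def build_level(min_gap):
    # Builder proposes block gaps at or above a prescribed difficulty.
    return [min_gap + random.random() * 0.3 for _ in range(10)]

def play_level(level, skill):
    # Player clears the level only if it can jump every gap.
    return all(gap <= skill for gap in level)

skill = 1.0
for difficulty in [0.5, 0.8, 1.1, 1.4]:
    level = build_level(difficulty)
    if play_level(level, skill):
        print(f"difficulty {difficulty}: cleared")
    else:
        skill += 0.5          # stand-in for training on the failed levels
        print(f"difficulty {difficulty}: failed, practicing...")
```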
So, do these great results also generalize to the racing game? Let's see what the numbers say. The agent that trained on a fixed track can succeed on an easy level about 75% of the time, while the newly proposed agent can do it with nearly a 100% chance. A bit of an improvement. OK, now look at this. The fixed-track agent can only beat a hard level about 2 times out of 10, while the new agent can do it about 6 times out of 10. That is quite a bit of an improvement. Now, note that in a research paper, choosing a proper baseline to compare to is always a crucial question. I would like to note that the baseline here is not the state of the art, and with that, it is a little easier to make the new solution pop. No matter, the results are still good, but I think this is worth a note. So, from now on, whenever we create a new level in a computer game, we can have hundreds of competent AI players testing it in real time. So good. What a time to be alive! This episode has been supported by weights and biases. In this post, they show you how to use their tool to interpret the results of your neural networks. For instance, they tell you how to check whether your neural network has even looked at the dog in an image before classifying it as a dog. If you work with learning algorithms on a regular basis, make sure to check out weights and biases. Their system is designed to help you organize your experiments, and it is so good it could shave off weeks or even months of work from your projects, and it is completely free for all individuals, academics, and open source projects. This really is as good as it gets, and it is hardly a surprise that they are now used by over 200 companies and research institutions. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to weights and biases for their longstanding support, and for helping us make better videos for you. Thanks for watching, and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today we are going to see if an AI can become a good software engineer. Spoiler alert, the answer is yes, kind of. Let me explain. Just one year ago, scientists at OpenAI published a technique by the name GPT-3 and it is an AI that was unleashed to read the internet with the sole task of finishing your sentences. So, what happened then? Well, now we know that of course it learned whatever it needed to learn to perform the sentence completion properly. And to do this, it would need to learn English by itself, and that's exactly what it did. It also learned about a lot of topics to be able to discuss them well. We gave it a try and I was somewhat surprised when I saw that it was able to continue a two minute paper script, even though it seems to have turned into a history lesson. It also learned how to generate properly formatted plots from a tiny prompt written in plain English, not just one kind, many kinds. And remember, this happened just about a year ago, and this AI was pretty good at many things. But soon after a newer work was published by the name Image GPT. What did this do? Well, this was a GPT variant that could not finish your sentences, but your images. Yes, really. The problem statement is simple, we give it an incomplete image, and we ask the AI to fill in the missing pixels. Have a look at this water droplet example. We humans know that since we see the remnants of some ripples over there too, there must be a splash, but does the AI know? Oh yes, yes it does. Amazing. And this is the true image for reference. So, what did they come out with now? Well, the previous GPT was pretty good at many things, and this new work OpenAI Codex is a GPT language model that was fine tuned to be excellent at one thing. And that is writing computer programs or finishing your code. Sounds good. Let's give it a try. First, please write a program that says Hello World 5 times. It can do that. And we can also ask it to create a graphical user interface for it. No coding skills required. That's not bad by any means, but this is OpenAI we are talking about, so I am sure it can do even better. Let's try something a tiny bit more challenging. For instance, writing a simple space game. First, we get an image of a spaceship that we like, then instruct the algorithm to resize and crop it. And here comes one of my favorites, start animating it. Look, it immediately wrote the appropriate code where it will travel with a prescribed speed. And yes, it should get flipped as soon as it hits the wall. Looks good. But will it work? Let's see. It does. And all this from a written English description. Outstanding. Of course, this is still not quite the physics simulation that you all see and love around here, but I'll take it. But this is still not a game, so please add a moving asteroid, check for collisions and infuse the game with a scoring system. And there we go. So how long did all this take? And now hold on to your papers because this game was written in approximately 9 minutes. No coding knowledge is required. Wow. What a time to be alive. Now, in these 9-ish minutes, most of the time was not spent by the AI thinking, but the human typing. So still, the human is the bottleneck. But today, with all the amazing voice recognition systems that we have, we don't even need to type these instructions. Just say what you want and it will be able to do it. So, what else can it do? 
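Before answering that, here is a minimal Python sketch of the kind of program a plain-English request like the "say Hello World 5 times and give it a graphical user interface" example above might yield. This is illustrative output only, not what Codex actually generated in the demo.

```python
import tkinter as tk

# Minimal sketch of a program matching a prompt like
# "say Hello World 5 times and add a graphical user interface"
# (illustrative only, not the actual Codex output from the demo).

root = tk.Tk()
root.title("Hello")
for _ in range(5):
    tk.Label(root, text="Hello World").pack()
root.mainloop()
```

The point is less the code itself than the workflow: a short natural-language prompt in, a small working program out, with the human mostly spending time describing what they want.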
For instance, it can also deal with similar requests to what software engineers are asked in interviews. And I have to say the results indicate that this AI would get hired to some places. But that's not all. It can also nail the first grade math test. An AI. Food for thought. Now, this open AI codex work has been out there for a few days now, and I decided not to cover it immediately, but wait a little and see where the users take it. This is, of course, not great for views, but no matter we are not maximizing views, we are maximizing meaning. In return, there are some examples out there in the world. Let's look at three of them. One, it can be asked to explain a piece of code, even if it is written in assembly. Two, it can create a pong game in 30 seconds. Remember, this used to be a blockbuster Atari game, and now an AI can write it in half a minute. And yes, again, most of the half minute is taken by waiting for the human for instructions. Wow! It can also create a plugin for Blender, an amazing, free 3D modeler program. These things used to take several hours of work at the very least. And with that, I feel that what I said for GPT3 rings even more true today. I am replacing GPT3 with codex and quoting. The main point is that working with codex is a really peculiar process, where we know that a vast body of knowledge lies within, but it only emerges if we can bring it out with properly written prompts. It almost feels like a new kind of programming that is open to everyone, even people without any programming or technical knowledge. If a computer is a bicycle for the mind, then codex is a fighter jet. And all this progress in just one year. I cannot wait to see where you follow scholars will take it, and what open AI has in mind for just one more paper down the line. And until then, software coding might soon be a thing anyone can do. What a time to be alive! PerceptiLabs is a visual API for TensorFlow, carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables, and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers to easily install the free local version of their system today. Our thanks to perceptilabs for their support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today, we are going to see how Tesla uses no less than a simulated game world to train their self-driving cars. And more. In their AI Day presentation video, they really put up a clinic of recent AI research results and how they apply them to develop self-driving cars. And, of course, there is plenty of coverage of the event, but as always, we are going to look at it from a different angle. We are doing it paper style. Why? Because after nearly every Two Minute Papers episode, where we showcase an amazing paper, I get a question saying something like, okay, but when do I get to see or use this in the real world? And rightfully so, that is a good question. And in this presentation, you will see that these papers that you see here get transferred into real-world products so fast it really makes my head spin. Let's see this effect demonstrated by looking through their system. Now, first, their cars have many cameras, no depth information, just the pixels from these cameras, and one of their goals is to create this vector space view that you see here. That is almost like a map or a video game version of the real roads and objects around us. That is a very difficult problem. Why is that? Because the car has many cameras. Is that a problem? Yes, kind of. I'll explain in a moment. You see, there is a bottom layer that processes the raw sensor data from the cameras mounted on the vehicle. So, here, in go the raw pixels, and out comes more useful, high-level information that can be used to determine whether this clump of pixels is a car or a traffic light. Then, in the upper layers, this data can be used for more specific tasks, for instance, trying to estimate where the lanes and curbs are. So, what papers are used to accomplish this? Looking through the architecture diagrams, we see transformer neural networks, BiFPNs, and RegNet. All papers from the last few years; for instance, RegNet is a neural network variant that is great at extracting useful information from the raw sensor data. And that is a paper from 2020, from just one year ago, already actively used in training self-driving cars. That is unreal. Now, we mentioned that having many cameras is a bit of a problem. Why is that? Isn't that supposed to be a good thing? Well, look, each of the cameras only sees parts of the truck. So, how do we know where exactly it is and how long it is? We need to know all this information to be able to accurately put the truck into the vector space view. What we need for this is a technique that can fuse information from many cameras together intelligently. Note that this is devilishly difficult due to each of the cameras having a different calibration, location, view direction, and other properties. So, who is to tell that the point here corresponds to which point in a different camera view? And this is accomplished through, yes, a transformer neural network, a paper from 2017. So, does this multi-camera technique work? Does this improve anything? Well, let's see. Oh, yes, the yellow predictions here are from the previous single-camera network, and as you see, unfortunately, things flicker in and out of existence. Why is that? It is because a passing car is leaving the view of one of the cameras, and as it enters the view of the next one, they don't have this correspondence technique that would say where it is exactly.
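Here is a toy PyTorch sketch of the camera-fusion idea described above: a set of queries, one per cell of the top-down "vector space" grid, attends over feature tokens coming from all cameras, so each top-down cell can pull information from whichever camera happens to see that spot. The shapes, token counts, and the single attention layer are made up for illustration; this is the general cross-attention pattern, not Tesla's actual architecture.

```python
import torch
import torch.nn as nn

n_cameras, tokens_per_cam, feat_dim = 8, 64, 128
bev_tokens = 256                                   # cells of the top-down grid

# Per-camera image features act as keys and values; learned top-down grid
# queries pull information from all cameras at once.
image_feats = torch.randn(n_cameras * tokens_per_cam, 1, feat_dim)
bev_queries = torch.randn(bev_tokens, 1, feat_dim)

attention = nn.MultiheadAttention(embed_dim=feat_dim, num_heads=8)
fused, _ = attention(bev_queries, image_feats, image_feats)

print(fused.shape)   # (256, 1, 128): one fused feature per top-down grid cell
```

With that sketch in mind, back to whether fusing the cameras actually helps.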
And, look, the blue objects show the prediction of the multi-camera network that can do that, and things aren't perfect, but they are significantly better than the single camera network. That is great. However, we are still not taking into consideration time. Why is that important? Let's have a look at two examples. One, if we are only looking at still images and not taking into consideration how they change over time, how do we know if this car is stationary? Is it about to park somewhere or is it speeding? Also, two, this car is now occluded, but we saw it a second ago, so we should know what it is up to. That sounds great, and what else can we do if our self-driving system has a concept of time? Well, much like humans do, we can make predictions. These predictions can take place both in terms of mapping what is likely to come, an intersection around about, and so on. But, perhaps even more importantly, we can also make predictions about vehicle behavior. Let's see how that works. The green lines show how far away the next vehicle is, and how fast it is going. The green line tells us the real true information about it. Do you see the green? No, that's right, it is barely visible because it is occluded by a blue line, which is the prediction of the new video network. That means that its predictions are barely off from the real velocities and distances, which is absolutely amazing. And as you see with orange, the old network that was based on single images is off by quite a bit. So now, a single car can make a rough map of its environment wherever it drives, and they can also stage the readings of multiple cars together into an even more accurate map. Putting this all together, these cars have a proper understanding of their environment, and this makes navigation much easier. Look at those crisp, temporally stable labelling. It has very little flickering. Still not perfect by any means, but this is remarkable progress in so little time. And we are at the point where predicting the behaviors of other vehicles and pedestrians can also lead to better decision making. But we are not done yet. Not even close. Look, the sad truth of driving is that unexpected things happen. For instance, this truck makes it very difficult for us to see, and the self-driving system does not have a lot of data to deal with that. So, what is a possible solution to that? There are two solutions. One is fetching more training data. One car can submit an unexpected event and request that the entire Tesla fleet sends over if they have encountered something similar. Since there are so many of these cars on the streets, tens of thousands of similar examples can be fetched from them and added to the training data to improve the entire fleet. That is mind-blowing. One car encounters a difficult situation, and then every car can learn from it. How cool is that? That sounds great. So, what is the second solution? Not fetching more training data, but creating more training data. What? Just make stuff up? Yes, that's exactly right. And if you think that is ridiculous, and are asking, how could that possibly work? Well, hold on to your papers because it does work. You are looking at it right now. Yes, this is a photorealistic simulation that teaches self-driving cars to handle difficult corner cases better. In the real world, we can learn from things that already happened, but in a simulation, we can make anything happen. 
This concept really works, and one of my favorite examples is OpenAI's robot hand that we have showcased earlier in this series. This also learns the rotation techniques in a simulation, and it does it so well that the software can be uploaded to a real robot hand, and it will work in real situations too. And now, the same concept for self-driving cars, loving it. With these simulations, we can even teach these cars about cases that would otherwise be impossible, or unsafe to test. For instance, in this system, the car can safely learn what it should do if it sees people and dogs running on the highway. A capable artist can also create miles and miles of these virtual locations within a day of work. This simulation technique is truly a treasure trove of data because it can also be procedurally generated, and the moment the self-driving system makes an incorrect decision, a Tesla employee can immediately create an endless set of similar situations to teach it. Now, I don't know if you remember, we talked about a fantastic paper, a couple months ago, that looked at real-world videos, then took video footage from a game, and improved it to look like the real world. Convert video games to reality if you will. This had an interesting limitation. For instance, since the AI was trained on the beautiful lush hills of Germany and Austria, it hasn't really seen the dry hills of LA. So, what does it do with them? Look, it redrewed the hills the only way it saw hills exist, which is covered with trees. So, what does this have to do with Tesla's self-driving cars? Well, if you have been holding onto your paper so far, now squeeze that paper because they went the other way around. Yes, that's right. They take video footage of a real unexpected event where the self-driving system failed, use the automatic labeler used for the vector space view, and what do they make out of it? A video game version. Holy matter of papers. And in this video game, it is suddenly much easier to teach the algorithm safely. You can also make it easier, harder, replace a car with a dog or a pack of dogs, and make many similar examples so that the AI can learn from this what if situations as much as possible. So, there you go. Full-tech transfer into a real AI system in just a year or two. So, yes, the papers you see here are for real, as real as it gets. And, yes, the robot is not real, just a silly joke. For now. And two more things that make all this even more mind-blowing. One, remember, they don't showcase the latest and greatest that they have, just imagine that everything you heard today is old news compared to the tech they have now. And two, we have only looked at just one side of what is going on. For instance, we haven't even talked about their amazing dojo chip. And if all this comes to fruition, we will be able to travel cheaper, more relaxed, and also, perhaps most importantly, safer. I can't wait. I really cannot wait. What a time to be alive. This video has been supported by weights and biases. Check out the recent offering fully connected, a place where they bring machine learning practitioners together to share and discuss their ideas, learn from industry leaders, and even collaborate on projects together. You see, I get messages from you fellow scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is. Fully connected is a great way to learn about the fundamentals, how to reproduce experiments, get your papers accepted to a conference, and more. 
Make sure to visit them through wnb.me slash papers or just click the link in the video description. Our thanks to weights and biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zona Ifehir. Today, we are going to see how we can use our hands, but not our fingers to mingle with objects in virtual worlds. The promise of virtual reality VR is indeed truly incredible. If one day it comes to fruition, doctors could be trained to perform surgery in a virtual environment, expose astronauts to virtual zero-gravity simulations, work together with telepresence applications, you name it. The dream is getting closer and closer, but something is still missing. For instance, this previous work uses a learning-based algorithm to teach a head-mounted camera to tell the orientation of our hands at all times. One more paper down the line, this technique appeared that can deal with examples with challenging hand-hand interactions, deformations, lots of self-contact, and self-occlusion. This was absolutely amazing because these are not gloves. No, no. This is the reconstruction of the hand by the algorithm. Absolutely amazing. However, it is slow, and mingling with other objects is still quite limited. So, what is missing? What is left to be done here? Let's have a look at today's paper and find out together. This is its output. Yes, mingling that looks very natural. But, what is so interesting here? The interesting part is that it has realistic finger movements. Well, that means that it just reads the data from the sensors on the fingers, right? Now, hold on to your papers and we'll find out once we look at the input. Oh my, is this really true? No sensors on the fingers anywhere. What kind of black magic is this? And with that, we can now make the most important observation in the paper, and that is that it reads information from only the wrist and the objects in the hand. Look, the sensors are on the gloves, but none are on the fingers. Once again, the sensors have no idea what we are doing with our fingers. It only reads the movement of our wrist and the object, and all the finger movement is synthesized by it automatically. Whoa! And with this, we can not only have a virtual version of our hand, but we can also manipulate virtual objects with very few sensor readings. The rest is up to the AI to synthesize. This means that we can have a drink with a friend online, use a virtual hammer, too, depending on our mood, fix, or destroy virtual objects. This is very challenging because the finger movements have to follow the geometry of the object. Look, here, the same hand is holding different objects, and the AI knows how to synthesize the appropriate finger movements for both of them. This is especially apparent when we change the scale of the object. You see, the small one requires small and precise finger movements to turn around. These are motions that need to be completely resynthesized for bigger objects. So cool. And now comes the key. So does this only work on objects that it has been trained on? No, not at all. For instance, the method has not seen this kind of teapot before, and still it knows how to use its handle, and to hold it from the bottom, too, even if both of these parts look different. Be careful, though, who knows, maybe virtual teapots can get hot, too. Once more, it handles the independent movement of the left and right hands. Now, how fast is all this? Can we have coffee together in virtual reality? Yes, absolutely. All this runs in close to real time. 
There is a tiny bit of delay, but a result like this is already amazing, and this is typically the kind of thing that can be fixed one more paper down the line. However, not even this technique is perfect: it might still miss small features on an object, for instance, a very thin handle might confuse it. Or, if it has an inaccurate reading of the hand pose and distances, this might happen. But for now, having a virtual coffee together? Yes, please, sign me up. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today we are going to make some crazy, synthetic music videos. In machine learning research, view synthesis papers are all the rage these days. These techniques are also referred to as NeRF variants; NeRF is a learning-based algorithm that tries to reproduce real-world scenes from only a few views. It is very challenging. Look, in goes a bunch of photos of a scene, and the method has to be able to synthesize new photorealistic images between these photos. But this is not the only paper in this area. Researchers are very aware of the potential here, and thus, a great number of NeRF variants are appearing every month. For instance, here is a recent one that extends the original technique to handle shiny and reflective objects better. So, what else is there to do here? Well, look here. This new one demands not a bunch of photos from just one camera, no, no, but from 16 different cameras. That's a big ask. But, in return, the method now has tons of information about the geometry and the movement of these test subjects, so is it intelligent enough to make something useful out of it? Now, believe it or not, this, in return, can not only help us look around in the scene, but even edit it in three new ways. For instance, one, we can change the scale of these subjects, add and remove them from the scene, and even copy-paste them. Excellent for creating music videos. Well, talking about music videos, do you know what is even more excellent for those? Two, retiming movements. That is also possible. This can, for instance, improve an OK dancing performance into an excellent one. And, three, because now we are in charge of the final footage, if the original footage is shaky, well, we can choose to eliminate that camera shake. Game changer. It's not quite the hardware requirement where you just whip out your smartphone and start nerfing and editing, but for what it can do, it really does not ask for a lot, especially given that, if we wish to, we can even remove some of these cameras and still expect reasonable results. We lose roughly a decibel of signal per camera. Here is what that looks like. Not too shabby. And, all this progress just one more paper down the line. And, I like the idea behind this paper a great deal, because typically, what we are looking for in a follow-up paper is trying to achieve similar results while asking for less data from the user. This paper goes in the exact opposite direction and asks what amazing things could be done if we had more data instead. Loving it. And with that, not only view synthesis, but even scene editing is possible. What a time to be alive. This video has been supported by Weights & Biases. Being a machine learning researcher means doing tons of experiments and, of course, creating tons of data. But, I am not looking for data, I am looking for insights. And Weights & Biases helps with exactly that. They have tools for experiment tracking, dataset and model versioning, and even hyperparameter optimization. No wonder this is the experiment tracking tool of choice of OpenAI, Toyota Research, Samsung, and many more prestigious labs. Make sure to use the link wandb.me slash paper intro, or just click the link in the video description, and try this 10-minute example of Weights & Biases today to experience the wonderful feeling of training a neural network and being in control of your experiments. After you try it, you won't want to go back.
Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today we are going to see an AI learn boxing and even mimic gorillas during this process. Now, in an earlier work, we saw a few examples of AI agents playing two-player sports. For instance, this is the You Shall Not Pass game, where the red agent is trying to hold back the blue character and not let it cross the line. Here you see two regular AIs duking it out; sometimes the red wins, sometimes the blue is able to get through. Nothing too crazy here. Until this happens. Look, what is happening? It seems that this agent started to do nothing and still won. Not only that, but it suddenly started winning almost all the games. How is this even possible? Well, what the agent did is perhaps the AI equivalent of hypnotizing the opponent, if you will. The more rigorous term for this is that it induces off-distribution activations in its opponent. This adversarial agent is really doing nothing, but that's not enough. It is doing nothing in a way that reprograms its opponent to make mistakes and behave close to a completely randomly acting agent. Now, this new paper showcases AI agents that can learn boxing. The AI is asked to control these joint-actuated characters which are embedded in a physics simulation. Well, that is quite a challenge. Look, for quite a while, after 130 million steps of training, it cannot even hold it together. And yes, these folks collapse, but this is not the good kind of hypnotic adversarial collapsing. I am afraid this is just passing out without any particular benefits. That was quite a bit of training, and all this for nearly nothing. Right? Well, maybe. Let's see what they did after 200 million training steps. Look, they can not only hold it together, but they have a little footwork going on and can circle each other and try to take the middle of the ring. Improvements. Good. But this is not dancing practice. This is boxing. I would like to see some boxing today, and it doesn't seem to happen. Until we wait for a little longer, which is 250 million training steps. Now, is this boxing? Not quite. This is more like two drunkards trying to duke it out when neither of them knows how to throw a real punch, but their gloves are starting to touch the opponent, and they start getting rewards for it. What does that mean for an intelligent agent? Well, it means that over time it will learn to do that a little better. And hold on to your papers and see what they do after 420 million steps. Oh, wow. Look at that. I am seeing some punches, and not only that, but I also see some body and head movement to evade the punches. Very cool. And if we keep going for longer, whoa, these guys can fight. They now learned to perform feints, jabs, and have some proper knockout power too. And if you have been holding onto your papers, now squeeze that paper because all they looked at before starting the training was 90 seconds of motion capture data. And this is a general framework that works for fencing as well. Look, the agents learn to lunge, deflect, evade attacks, and more. Absolutely amazing. What a time to be alive. So this was approximately a billion training steps, right? So how long did that take to compute? It took approximately a week. And you know what's coming? Of course, we invoke the first law of papers, which says that research is a process. Do not look at where we are. Look at where we will be two more papers down the line. And two more papers down the line, I bet this will be possible in a matter of hours.
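The part about the gloves touching the opponent and the rewards is worth making concrete. A sparse, contact-based reward can be as simple as a distance check, along the lines of the toy sketch below. To be clear, this is a made-up illustration of the general idea, not the reward function from the paper.

import numpy as np

def boxing_reward(own_glove, opponent_torso, opponent_glove, own_torso,
                  hit_radius=0.15):
    """Toy reward: +1 for landing a glove on the opponent, -1 for taking a hit.
    All inputs are 3D positions given as numpy arrays."""
    reward = 0.0
    if np.linalg.norm(own_glove - opponent_torso) < hit_radius:
        reward += 1.0   # our punch connected
    if np.linalg.norm(opponent_glove - own_torso) < hit_radius:
        reward -= 1.0   # we got hit
    return reward

# Example: our glove is 10 cm from their torso, their glove is far from us.
print(boxing_reward(np.array([0.0, 1.4, 0.1]), np.array([0.0, 1.4, 0.2]),
                    np.array([1.0, 1.4, 0.0]), np.array([1.0, 1.4, 1.0])))

With a signal this sparse, it is no wonder that proper boxing only emerges after hundreds of millions of steps, and that the early attempts look the way they do.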
And this is the part with the gorillas. It is also interesting that even though there were plenty of reasons to, the researchers didn't quit after 130 million steps. They just kept on going and eventually succeeded, especially in the presence of not-so-trivial training curves, where the blocking of the other player can worsen the performance and it is often not easy to tell where we are. That is a great life lesson right there. PerceptiLabs is a visual API for TensorFlow carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers to easily install the free local version of their system today. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today, we are going to see magical things that open up when we are able to automatically find the foreground and the background of a video. Let's see why that matters. This new technique leans on a previous method to find the boy and the dog. Let's call this level 1 segmentation. So far so good, but this is not the state of the art. Yet, now comes level 2. It also found the shadow of the boy and the shadow of the dog. Now we are talking, but it doesn't stop there. It gets even better. Level 3, this is where things get out of hand. Look, the dog is occluding the boy's shadow, and it is able to deal with that too. So if we can identify all of the effects that are attached to the boy and the dog, what can we do with all this information? Well, for instance, we can even remove them from the video. Nothing to see here. Now, a common problem is that the silhouette of the subject still remains in the final footage, so let's take a close look together. I don't see anything at all. Wow! Do you? Let me know in the comments below. Just to showcase how good this removal is, here is a good technique from just one year ago. Do you see it? This requires the shadows to be found manually, so we have to work with that. And still, in the outputs you can see the silhouette we mentioned. And how much better is the new method? Well, it finds the shadows automatically, that is already mind-blowing, and the outputs are... yes, much cleaner. Not perfect, there is still some silhouette action, but if I were not actively looking for it, I might not have noticed it. It can also remove people from this trampoline scene, and not only the bodies, but it also removes their effect on the trampolines as well. Wow! And as this method can perform all this reliably, it opens up the possibility for new, magical effects. For instance, we can duplicate this test subject and even fade it in and out. Note that it has found its shadows as well. Excellent! So, it can find not only the shape of the boy and the dog; it also knows that it's not enough to just find their silhouettes, it has to find the additional effects they have on the footage too. For instance, their shadows. That is wonderful, and what is even more wonderful is that this was only one of the simpler things it could do. Those are not the only potential correlated effects. Look, a previous method was able to find this one here, but it's not enough to remove it because it also has additional effects on the scene. What are those? Well, look, it has reflections, and it creates ripples too. This is so much more difficult than just finding shadows. And now, let's see the new method. And it knows about the reflections and ripples, finds both of them, and gives us this beautifully clean result. Nothing to see here. And also, look at this elephant. Removing just the silhouette of the elephant is not enough, it also has to find all the dust around it, and it gets worse, the dust is changing rapidly over time. And believe it or not, wow, it can find the dust too and remove the elephant. Again, nothing to see here. And if you think that this dust was the new algorithm at its best, then have a look at this drifting car. Previous method? Yes, that is the car, but you know what I want. I want the smoke gone too. So that's probably impossible, right? Well, let's have a look. Wow, I can't believe it. It grabbed and removed the car and the smoke together, and once again, nothing to see here.
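Under the hood, the output of a system like this is essentially a small stack of video layers: a clean background plus one color-and-opacity layer per subject that also carries its shadows, reflections, or dust. Once those layers exist, removing or duplicating a subject is just compositing. Here is a generic, minimal compositing sketch to show that last step; producing the layers automatically is the hard part that the paper solves, and none of this is the authors' code.

import numpy as np

def composite(layers):
    """Standard back-to-front 'over' compositing.
    layers: list of (rgb, alpha) pairs ordered from back to front,
    with rgb of shape (H, W, 3) and alpha of shape (H, W, 1)."""
    out = np.zeros_like(layers[0][0])
    for rgb, alpha in layers:
        out = rgb * alpha + out * (1.0 - alpha)
    return out

# Toy 2x2 example: a flat background layer plus one small subject layer.
background = (np.full((2, 2, 3), 0.8), np.ones((2, 2, 1)))
subject_alpha = np.zeros((2, 2, 1))
subject_alpha[0, 0, 0] = 1.0              # the subject covers one pixel
subject = (np.full((2, 2, 3), 0.1), subject_alpha)

with_subject = composite([background, subject])    # subject visible
without_subject = composite([background])          # subject and its effects gone
print(with_subject[0, 0], without_subject[0, 0])

Drop a layer and the subject disappears together with everything attached to it; keep it twice, offset in time, and we get the duplication and fading tricks from a moment ago.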
So what are those more magical things that this opens up? Watch carefully. It can make the colors pop here. And remember, it can find the reflections of the flamingo, so it keeps not only the flamingo, but the reflection of the flamingo in color as well. Absolutely amazing. And if we can find the background of the video, we can even change the background. This works even in the presence of a moving camera, which is a challenging problem. Now of course, not even this technique is perfect. Look here. The reflections are copied off of the previous scene, and it shows on the new one. So, what do you think? What would you use this technique for? Let me know in the comments, or if you wish to discuss similar topics with other fellow scholars in a warm and welcoming environment, make sure to join our Discord channel. Also, I would like to send a big thank you to the mods and everyone who helps running this community. The link to the server is available in the video description. You are invited. What you see here is a report on this exact paper we have talked about, which was made by Weights & Biases. I put a link to it in the description. Make sure to have a look. I think it helps you understand this paper better. If you work with learning algorithms on a regular basis, make sure to check out Weights & Biases. Their system is designed to help you organize your experiments, and it is so good it could shave off weeks or even months of work from your projects, and is completely free for all individuals, academics and open source projects. This really is as good as it gets, and it is hardly a surprise that they are now used by over 200 companies and research institutions. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today, we are going to see how an AI can win a complex game that it has never seen before. Zero prior training on that game. Yes, really. Now, before that, for context, have a look at this related work from 2019, where scientists at OpenAI built a super fun hide-and-seek game for their AI agents to play. And boy, did they do some crazy stuff. Now, these agents learned from previous experiences, and to the surprise of no one, for the first few million rounds we start out with pandemonium. Everyone just running around aimlessly. Then, over time, the hiders learned to lock out the seekers by blocking the doors off with these boxes and started winning consistently. I think the coolest part about this is that the map was deliberately designed by the OpenAI scientists in a way that the hiders can only succeed through collaboration. But then, something happened. Did you notice this pointy, door-stop-shaped object? Are you thinking what I am thinking? Well, probably. And not only that, but later, the AI also discovered that it can be pushed near a wall and be used as a ramp and... Tada! Got him! Then, it was up to the hiders again to invent something new. So, did they do that? Can this crazy strategy be defeated? Well, check this out. These resourceful little critters learned that since there is a little time at the start of the game when the seekers are frozen, apparently, during this time, they cannot see them, so why not just sneak out and steal the ramp and lock it away from them? Absolutely incredible. Look at those happy eyes as they are carrying that ramp. But, today is not 2019, it is 2021, so I wonder what scientists at the other amazing AI lab, DeepMind, have been up to? Can this paper be topped? Well, believe it or not, they have managed to create something that is perhaps even crazier than this. This new paper proposes that these AI agents look at the screen just like a human would and engage in open-ended learning where the tasks are always changing. What does this mean? Well, it means that these agents are not preparing for an exam. They are preparing for life. Hopefully, they learn more general concepts and, as a result, maybe excel at a variety of different tasks. Even better, these scientists at DeepMind claim that their AI agents not only excel at a variety of tasks, but they excel at new ones they have never seen before. Those are big words, so let's see the results. The red agent here is the hider and the blue is the seeker. They both understand their roles, the red agent is running, and the blue is seeking. Look, its viewing direction is shown with this lightsaber-looking line pointing at the red agent. No wonder it is running away. And look, it manages to get some distance from the seeker and finds a new, previously unexplored part of the map and hides there. Excellent. And you would think that the Star Wars references end here? No, not even close. Look, in a more advanced variant of the game, this green seeker lost the two other hiders, and what does he do? Ah, yes, of course, grabs his lightsaber and takes the high ground. Then, it spots the red agent and starts chasing it, and all this without ever having played this game before. That is excellent. In this cooperative game, the agents are asked to get as close to the purple pyramid as they can. Of course, to achieve that, they need to build a ramp, which they successfully realize. Excellent. But it gets better.
Now, note that we did not say that the task is to build a ramp, the task is to get as close to the purple pyramid as we can. Does that mean... yes, yes it does. Great job bending the rules, little AI. In this game, the agent is asked to stop the purple ball from touching the red floor. At first, it tries its best to block the rolling of the ball with its body. Then, look, it realizes that it is much better to just push it against the wall. And it gets even better, look, it learned that it is best to just chuck the ball behind this slab. It is completely right, this needs no further energy expenditure, and the ball never touches the red floor again. Great. And finally, in this king of the hill game, the goal is to take the white floor and get the other agent out of there. As they are playing this game for the first time, they have no idea where the white floor is. As soon as the blue agent finds it, it stays there, so far so good. But this is not a cooperative game. We have an opponent here, look, boom, a quite potent opponent indeed who can take the blue agent out, and it understands that it has to camp in there and defend the region. Again, awesome. So, the goal here is not to be an expert in one game, but to be a journeyman in many games. And these agents are working really well at a variety of games without ever having played them. So, in summary: OpenAI's agent is an expert in a narrower domain, DeepMind's agent is a journeyman in a broader domain. Two different kinds of intelligence, both doing amazing things. Loving it. What a time to be alive. Scientists at DeepMind have knocked it out of the park with this one. They have also published AlphaFold this year, a huge breakthrough that makes an AI predict protein structures. Now, I saw some of you asking why we didn't cover it. Is it not an important work? Well, quite the opposite. I am spellbound by it, and I think that paper is a great gift to humanity. I try my best to educate myself on this topic; however, I don't feel that I am qualified to speak about it. Not yet, anyway. So, I think it is best to let the experts who know more about this take the stage. This is, of course, bad for views, but no matter, we are not maximizing views here, we are maximizing meaning. This episode has been supported by Lambda GPU Cloud. If you are looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support, and for helping us make better videos for you. Thanks for watching, and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today, we are going to see that virtual bones make everything better. This new paper is about setting up bones and joints for our virtual characters to be able to compute deformations. Deformations are at the heart of computer animation. Look, all of these sequences require a carefully designed technique that can move these joints around and simulate their effect on the entirety of the body. Things move around, stretch and bulge. But, there is a problem. What's the problem? Well, even with state-of-the-art deformation techniques, sometimes this happens. Did you catch it? There is the problem. Look, the hip region unfortunately bulges inward. Is this specific to this technique? No, no, not in the slightest. Pretty much all of the previous techniques showcase that to some extent. This is perhaps the most intense case of this inward bulging. So, let's have a taste of the new method. How does it deal with a case like this? Perfectly. That's how. Loving it. Now, hold onto your papers because it works by creating something that the authors refer to as virtual bones. Let's look under the hood and locate them. There they are. These red dots showcase these virtual bones. We can set them up as a parameter, and the algorithm distributes them automatically. Here we have a hundred of them, but we can go to 200, or, if we so desire, we can request even 500 of them. So, what difference does this make? With a hundred virtual bones, let's see. Yes. Here you see that the cooler colors like blue showcase the regions that are deforming accurately, and the warmer colors, for instance red, showcase the problematic regions where the technique did not perform well. The red part means that these deformations can be off by about 2 centimeters, or about three-quarters of an inch. I would say that is great news because even with only a hundred virtual bones we get an acceptable animation. However, the technique is still somewhat inaccurate around the knee and the hips. If you are one of our really precise fellow scholars and feel that even that tiny mismatch is too much, we can raise the number of virtual bones to 500, and let's see. There we go. Still some imperfections around the knees, but the rest is accurate to a small fraction of an inch. Excellent. The hips and knees seem to be a common theme. Look, they show up in this example too. And as in the previous case, even the hundred virtual bone animation is acceptable, and most of the problems can be remedied by adding 500 of them. Still some issues around the elbows. So far we have looked at the new solution and marked the good and bad regions with heat maps. So now how about looking at the reference footage and the new technique side by side? Why? Because we'll find out whether it is just good at fooling the human eye or whether it really matches up. Let's have a look together. This is linear blend skinning, the state of the art method. For now we can accept this as the reference. Note that setting this up is expensive both in terms of computation, and it also requires a skilled artist to place these helper joints correctly. This looks great. So how does the new method with the virtual bones look under the hood? These correspond to those. So why do all this? Because the new method can be computed much, much cheaper. So let's see what the results look like. Mmm, yeah. Very close to the reference results. Absolutely amazing. Now let's run a torture test that would make any computer graphics researcher blush.
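A quick aside for those wondering what linear blend skinning actually computes: every vertex of the mesh is deformed by a weighted blend of a few bone transformations. The virtual bones of this paper plug into this same kind of machinery, just with automatically placed bones and learned weights. Here is a generic, minimal sketch of plain linear blend skinning, not the authors' code.

import numpy as np

def linear_blend_skinning(rest_vertices, bone_transforms, weights):
    """rest_vertices: (V, 3), bone_transforms: (B, 4, 4) homogeneous matrices,
    weights: (V, B) skinning weights that sum to one per vertex."""
    V = rest_vertices.shape[0]
    homo = np.concatenate([rest_vertices, np.ones((V, 1))], axis=1)  # (V, 4)
    # Transform every vertex by every bone, then blend with the weights.
    per_bone = np.einsum('bij,vj->vbi', bone_transforms, homo)       # (V, B, 4)
    blended = np.einsum('vb,vbi->vi', weights, per_bone)             # (V, 4)
    return blended[:, :3]

# Two bones: the identity and a translation by one unit along x.
T = np.eye(4)
T[0, 3] = 1.0
bones = np.stack([np.eye(4), T])
verts = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0]])
w = np.array([[1.0, 0.0], [0.5, 0.5]])   # the second vertex follows both bones
print(linear_blend_skinning(verts, bones, w))  # second vertex moves by 0.5 in x

The reason the classic setup needs skilled artists is that those weights and helper joints have to be authored by hand for every character, and that is exactly the part the new method automates. Now, back to that torture test.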
Oh my, there are so many characters here animated at the same time. So how long do we have to wait for these accurate simulations? Minutes to hours? Let me know your guess in the comments below. I'll wait. Thank you. Now hold on to your papers, because all this takes about 5 milliseconds per frame. 5 milliseconds. That seems to be well over 100 characters rocking out, and the new technique doesn't even break a sweat. So I hope that with this, computer animations are going to become a lot more realistic in the near future. What a time to be alive. Also make sure to have a look at the paper in the video description. I loved the beautiful mathematics and the clarity in there. It clearly states the contributions in a bulleted list, which is a more and more common occurrence, and that's good. But look, it even provides an image of these contributions right there, making it even clearer to the reader. Generally, details like this show that the authors went out of their way and spent a great deal of extra time writing a crystal clear paper. It takes much, much more time than many may imagine, so I would like to send a big thank you to the authors for that. Way to go. This episode has been supported by Weights & Biases. In this post, they show you how to use Transformers from the Hugging Face library and how to track your model performance. During my PhD studies, I trained a ton of neural networks which were used in our experiments. However, over time, there was just too much data in our repositories, and what I am looking for is not data, but insight. And that's exactly how Weights & Biases helps you, by organizing your experiments. It is used by more than 200 companies and research institutions, including OpenAI, Toyota Research, GitHub, and more. And get this, Weights & Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for their long standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today, we are going to simulate these absolutely beautiful thin film structures. You see, computer graphics researchers have been writing physics simulation programs for decades now, and the pace of progress in this research area is absolutely stunning. Here are three examples of where we are at the moment. One, this work was able to create a breathtaking honey-coiling simulation. I find it absolutely amazing that through the power of computer graphics research, all this was possible four years ago. And the realistic simulation works just kept coming in. Two, this work appeared just one year ago and could simulate not only a piece of viscous fluid, but also deal with glugging and coalescing bubbles. And three, this particular one is blazing fast. So much so that it can simulate this dam break scene at about five frames per second, not seconds per frame, while it can run this water drop scene at about seven frames per second. Remember, this simulates quantities like velocity, pressure, and more for several million particles, this quickly. Very impressive. So, are we done here? Is there anything else left to be done in fluid simulation research? Well, hold on to your papers and check this out. This new paper can simulate thin film phenomena. What does that mean? Well, four things. First, here is a beautiful, oscillating soap bubble. Yes, its color varies as a function of the evolving film thickness, but that's not all. Let's poke it and then... Did you see that? It can even simulate it bursting into tiny, sparkly droplets. Phew! One more time. Loving it. Second, it can simulate one of my favorites, the Rayleigh-Taylor instability. The upper half of the thin film has a larger density, while the lower half carries a larger volume. Essentially, this is the phenomenon when two fluids of different densities meet. And, what is the result? Turbulence. First, the interface between the two is well defined, but over time, it slowly disintegrates into this beautiful, swirly pattern. Oh, yeah! Look! And it just keeps on going and going. Third, ah, yes! The catenoid experiment. What is that? This is a surface tension driven deformation experiment, where the film is trying to shrink as we move the two rims away from each other, forming this catenoid surface. Of course, we won't stop there. What happens when we keep moving them away? What do you think? Please stop the video and let me know in the comments below. I'll wait a little. Thank you. Now then, the membrane keeps shrinking until, yes, it finally collapses into a small droplet. The authors also went the extra mile and did the most difficult thing for any physics simulation paper, comparing the results to reality. So, is this just good enough to fool the untrained human eye, or is this the real deal? Well, look at this. This is an actual photograph of the catenoid experiment. And this is the simulation. Dear Fellow Scholars, that is a clean simulation right there. And fourth, a thin film within a square is subjected to a gravitational pull that is changing over time. And the result is more swirly patterns. So, how quickly can we perform all this? Disregard the FPS column, that is the inverse of the time-step size and is mainly information for fellow researchers; for now, gaze upon the time per frame column, and my goodness, this is blazing fast too. It takes less than a second per frame for the catenoid experiment, this is one of the cheaper ones. And all this on a laptop. Wow!
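By the way, the reason the bubble's color changes with the film thickness is thin-film interference: light reflected off the front and the back surface of the film interferes, and which wavelengths survive depends on how thick the film is at that point. Here is a deliberately simplified sketch of that relationship for a soap film in air at normal incidence; a real renderer, including the one behind these results, uses full Fresnel terms and proper spectra, so treat this only as a back-of-the-envelope illustration.

import numpy as np

def soap_film_rgb(thickness_nm, n=1.33):
    """Rough thin-film interference color for a soap film in air at normal
    incidence. Due to the half-wave phase shift at the first interface, the
    reflected intensity per wavelength is proportional to
    sin^2(2 * pi * n * d / lambda)."""
    wavelengths = np.array([650.0, 510.0, 440.0])   # rough R, G, B in nm
    return np.sin(2.0 * np.pi * n * thickness_nm / wavelengths) ** 2

for d in [100, 300, 500, 700]:   # film thickness in nanometres
    print(d, np.round(soap_film_rgb(d), 2))

As the simulated film drains and its thickness evolves, these per-wavelength intensities shift, and that is the shimmering play of color we see on the bubble.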
Now, the most expensive experiment in this paper was the Rayleigh-Taylor instability. This took about 13 seconds per frame. This is not bad at all, we can get a proper simulation of this quality within an hour or so. However, note that the authors also used a big honking machine to compute this scene. And remember, this paper is not about optimization, but about making the impossible possible. And it is doing all that swiftly. Huge congratulations to the authors. What a time to be alive! PerceptiLabs is a visual API for TensorFlow carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers to easily install the free local version of their system today. Thanks to PerceptiLabs for their support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today we are going to take a bad, choppy video and make beautiful, smooth, and creamy footage out of it. With today's camera and graphics technology, we can create videos with 60 frames per second. Those are really smooth. I also make each of these videos using 60 frames per second. However, it almost always happens that I encounter paper videos that have only 24 to 30 frames per second. In this case, I put them in my video editor that has a 60 FPS timeline, where half or even more of these frames will not provide any new information. That is neither smooth nor creamy. And it gets worse. Look, as we try to slow down the videos for some nice slow motion action, this ratio becomes even worse, creating an extremely choppy output video because we have huge gaps between these frames. So, does this mean that there is nothing we can do and we have to put up with this choppy footage? No, not at all. Look at this technique from 2019 that we covered in an earlier video. The results truly speak for themselves. In goes a choppy video, and out comes a smooth and creamy result. So good. But wait, it is not 2019, it is 2021, and we always say that two more papers down the line and it will be improved significantly. From this example, it seems that we are done here, we don't need any new papers. Is that so? Well, let's see what we have only one more paper down the line. Now, look, it promises that it can deal with 10-to-1 or even 20-to-1 ratios, which means that for every single image in the video, it creates 10 or 20 new ones, and supposedly we shouldn't notice that. Well, those are big words, so I will believe it when I see it. Let's have a look together. Holy mother of papers, this can really pull this off, and it seems nearly perfect. Wow! It also knocked it out of the park with this one, and all this improvement in just one more paper down the line. The pace of progress in machine learning research is absolutely amazing. But we are experienced fellow scholars over here, so we will immediately ask, is this really better than the previous 2019 paper? Let's compare them. Can we have side by side comparisons? Of course we can. You know how much I love fluid simulations? Well, these are not simulations, but a real piece of fluid, and in this one there is no contest. The new one understands the flow so much better, while the previous method sometimes even seems to propagate the waves backwards in time. A big check mark for the new one. In this case, the previous method assumes linear motion when it shouldn't, thereby introducing a ton of artifacts. The new one isn't perfect either, but it performs significantly better. Don't worry, we will talk about linear motion some more in a moment. So, how does all this wizardry happen? One of the key contributions of the paper is that it can find out when to use the easy way and the hard way. What are those? The easy way is using already existing information in the video and computing in-between states for a movement. That is all well and good if we have simple linear motion in our video. But look, the easy way fails here. Why is that? It fails because we have a difficult situation where reflections off of this object rapidly change, and it reflects something. We have to know what that something is. So look, this is not even close to the true image, which means that here we can't just reuse the information in the video, this requires introducing new information.
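To see what the easy way amounts to in its very simplest form, here is a hypothetical sketch: blend two neighboring frames, pretending that everything moves linearly between them. A serious method would first warp the frames toward the middle using optical flow; this deliberately crude, flow-free blend is only a stand-in, not the paper's method, and it is exactly the kind of thing that breaks on non-linear motion and changing reflections.

import numpy as np

def naive_interpolate(frame_a, frame_b, t=0.5):
    """Crude 'easy way': a linear blend of two frames at time t in [0, 1].
    This only looks right when pixels barely move between the two frames."""
    return (1.0 - t) * frame_a + t * frame_b

def upsample(frames, factor=10):
    """Insert factor-1 new frames between every pair for a 10-to-1 ratio."""
    out = []
    for a, b in zip(frames[:-1], frames[1:]):
        for k in range(factor):
            out.append(naive_interpolate(a, b, k / factor))
    out.append(frames[-1])
    return out

frames = [np.zeros((2, 2, 3)), np.ones((2, 2, 3))]
print(len(upsample(frames)))   # 11 frames from the original 2

When that linearity assumption breaks, the missing content has to be synthesized from scratch instead.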
Yes, that is the hard way, and this excels when new information has to be synthesized. Let's see how well it does. My goodness, look at that, it matches the true reference image almost perfectly. And also, look, the face of the human did not require synthesizing a great deal of new information, it did not change over time, so we can easily refer to the previous frame for it, hence the easy way did better here. Did you notice? That is fantastic because the two are complementary. Both techniques work well, but they work well elsewhere. They need each other. So yes, you guessed right, to tie it all together, there is an attention-based averaging step that helps us decide when to use the easy and the hard ways. Now, this is a good paper, so it tells us how these individual techniques contribute to the final image. Using only the easy way can give us about 26 decibels, which would not beat the previous methods in this area. However, look, by adding the hard way, we get a premium quality result that is already super competitive, and if we add the step that helps us decide when to use the easy and hard ways, we get an extra decibel. I will happily take it, thank you very much. And if we put it all together, oh yes, we get a technique that really outpaces the competition. Excellent. So, in the near future, perhaps we will be able to record a choppy video of a family festivity and have a chance at making this choppy video enjoyable, or maybe even create slow motion videos with a regular camera. No slow motion camera is required. What a time to be alive! This episode has been supported by Lambda GPU Cloud. If you are looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today, we are going to synthesize beautiful and photorealistic shiny objects. This amazing new technique is a NeRF variant, which means that it is a learning-based algorithm that tries to reproduce real-world scenes from only a few views. In goes a few photos of a scene, and it has to be able to synthesize new photorealistic images in between these photos. This is view synthesis, in short. As you see here, it can be done quite well with the previous method. So, our question number one is, why bother publishing a new research paper on this? And question number two, there are plenty of view synthesis papers sloshing around, so why choose this paper? Well, first, when we tested the original NeRF method, I noted that thin structures are still quite a challenge. Look, we have some issues here. Okay, that is a good opening for a potential follow-up paper. And in a moment, we'll see how the new method handles them. Can it do any better? In the green close-up, you see that these structures are blurry and lack definition, and, uh-oh, it gets even worse. The red example shows that these structures are completely missing at places. And given that we have zoomed in quite far, and this previous technique is from just one year ago, I wonder how much of an improvement can we expect in just one year? Let's have a look. Wow, that is outstanding. Both problems got solved just like that. But it does more. Way more. This new method also boasts being able to measure reflectance coefficients for every single pixel. That sounds great, but what does that mean? What this should mean is that the reflections and specular highlights in its outputs are supposed to be much better. These are view-dependent effects, so they are quite easy to find. What you need to look for is things that change when we move the camera around. Let's test it. Remember, the input is just a sparse bunch of photos. Here they are, and the AI is supposed to fill in the missing data and produce a smooth video with, hopefully, high quality specular highlights. These are especially difficult to get right, because they change a great deal when we move the camera just a little. Yet, look, the AI can still deal with them really well. So, yes, this looks good. But we are experienced fellow scholars over here, so we'll put this method to the test. Let's try a challenging test for rendering reflections by using this piece of ancient technology called a CD. Not the best for data storage these days, but fantastic for testing rendering algorithms. So, I would like to see two things. One is the environment reflected in the silvery regions. Did we get it? Yes, checkmark. And two, I would like to see the rainbow changing and bending as we move the camera around. Let's see. Oh, look at how beautifully it has done it. I love it. Checkmark. These specular objects are not just a CD thing. As you see here, they are absolutely everywhere around us. Therefore, it is important to get them right for view synthesis. And I must say, these results are very close to getting good enough where we can put on a VR headset and venture into what is essentially a bunch of photos, not even a video, and make it feel like we are really in the room. Absolutely amazing. When I read the paper, I thought, well, all that's great, but we probably need to wait forever to get results like this.
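For a feel of why specular highlights are view-dependent in these models, here is a toy sketch of the kind of query a NeRF-style method answers: the color at a 3D point depends not only on the point itself, but also on the direction we look at it from. The little function below uses made-up random weights and is only a stand-in, not the actual network from the paper.

import numpy as np

rng = np.random.default_rng(1)
W_in = rng.normal(0.0, 0.5, (6, 64))    # 3D position + 3D view direction in
W_out = rng.normal(0.0, 0.5, (64, 3))   # RGB out

def radiance(position, view_direction):
    """Toy view-dependent radiance query: the same point can return a
    different color depending on where the camera looks from."""
    d = view_direction / np.linalg.norm(view_direction)
    x = np.concatenate([position, d])
    h = np.maximum(x @ W_in, 0.0)              # one ReLU layer as a placeholder
    return 1.0 / (1.0 + np.exp(-(h @ W_out)))  # squash to [0, 1] RGB

p = np.array([0.1, 0.2, 0.3])
print(radiance(p, np.array([0.0, 0.0, 1.0])))   # viewed from the front
print(radiance(p, np.array([1.0, 0.0, 0.0])))   # viewed from the side

A real system evaluates queries like this along many camera rays for every single pixel, and also has to learn proper reflectance terms, which is why I assumed it must be terribly slow.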
And now, hold on to your papers, because this technique is not only much better in terms of quality, no, no, it is more than a thousand times faster. A thousand times, in just one year. I tried to hold on to my papers, but I have to admit that I have failed, and now they are flying about. But that's not all. Not even close. Look, it can also decouple the camera movements from this view-dependent, specular effect. So, what does that mean? It is like a thought experiment where we keep the camera stationary and let the shiny things change as if we moved around. The paper also contains source code and a web demo that you can try yourself right now. It reveals that we still have some more to go until the true real images can be reproduced, but my goodness, this is so much progress in just one paper. Absolutely mind-blowing. Now, this technique also has its limitations beyond not being as good as the real photos, for instance, the reproduction of refracted thin structures. As you see, not so good. But, just think about it. The fact that we need to make up these crazy scenes to be able to give it trouble is a true testament to how good this technique is. And all this improvement in just one year. What a time to be alive. This video has been supported by Weights & Biases. Check out their recent offering, Fully Connected, a place where they bring machine learning practitioners together to share and discuss their ideas, learn from industry leaders, and even collaborate on projects together. You see, I get messages from you fellow scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is. Fully Connected is a great way to learn about the fundamentals, how to reproduce experiments, get your papers accepted to a conference, and more. Make sure to visit them through wandb.me slash papers or just click the link in the video description. Thanks to Weights & Biases for their long standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. We had a crazy simulation experiment in a previous episode that we called Ghosts and Chains. What is behind this interesting name? Here, you see Houdini's Vellum, the industry standard simulator for cloth and a number of other kinds of simulations. It is an absolutely amazing tool, but wait a second. Look, things are a little too busy here, and it is because artificial ghost forces appeared in the simulation, even on a simple test case with 35 chain links. And we discussed a new method and wondered whether it could deal with them. The answer was a resounding yes, no ghost forces. Not only that, but it could deal with even longer chains, so let's try 100 links. That works too. Now, we always say that two more papers down the line and the technique will be improved significantly. So here is the Two Minute Papers moment of truth. This is just one more paper down the line. And what are we simulating today? The physics of chains? No! Give me a break. This new technique helps us compute the physics of chain mail. Now we're talking. We quickly set the simulation frequency to 480 Hz, which means that we subdivide one second into 480 time instants and compute the interactions of all of these chain links. It looks something like this. This computes relatively quickly and looks great too. Until we use this new technique to verify this method and see this huge red blob. What does this mean? It means that over 4000 links were destroyed during this simulation. So this new method helps us verify an already existing simulation. You experienced fellow scholars know that this is going to be a ton of fun. Finding flaws in already existing simulations? Sign me up right now. Now, let's go and more than double the frequency of the simulation to 1200 Hz, which means we get more detailed computations. Does it make a difference? Let's see if it's any better. Well, it looks different, and it looks great, but we are scholars here, just looking is not enough to tell if everything is in order. So what does the new technique say? Uh oh, still not there, look, 16 links are still gone. Okay, now let's go all out and double the simulation frequency again to 2400 Hz. And now the number of violations is, let's see, finally, zero. The new technique is able to tell us that this is the correct version of the simulation. Excellent. Let's see our next opponent. Well, this simulation looks fine until we take a closer look, and yes, there are quite a few failed links here. And the fixed simulation. So, does the verification step take as long as the simulation? If so, does this make sense at all? Well, get this, this verification step takes only a fraction of a second for one frame. Let's do two more and then see if we can deal with the final boss simulation. Now we start twisting and some links are already breaking, but otherwise not too bad. And now, oh my, look at that. The simulation broke down completely. In this case, we already know that the verification also takes very little time. So next time, we don't have to pray that the simulation does not break, we only have to simulate up until this point. And when it unveils the first breakages, we can decide to just abort and retry with a more detailed simulation. This saves us a ton of time and computation. So good. I love it. It also helps with these knitted cloth simulations, where otherwise it is very difficult for the naked eye to tell if something went wrong. But, not for this method.
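To get a feel for what a verification pass does, here is a deliberately simplified sketch: it walks over every pair of links that are supposed to stay connected and flags the ones that have drifted farther apart than their geometry allows. The real paper performs proper geometric tests on the links themselves; this center-distance check is only a hypothetical stand-in.

import numpy as np

def count_failed_links(centers, connected_pairs, max_separation):
    """centers: (N, 3) chain link center positions for one frame.
    connected_pairs: list of (i, j) index pairs that should stay interlinked.
    A pair whose centers drift beyond max_separation is counted as failed."""
    failed = 0
    for i, j in connected_pairs:
        if np.linalg.norm(centers[i] - centers[j]) > max_separation:
            failed += 1
    return failed

# Three links in a row; the last one has popped far away from its neighbor.
centers = np.array([[0.0, 0.0, 0.0], [0.9, 0.0, 0.0], [5.0, 0.0, 0.0]])
pairs = [(0, 1), (1, 2)]
print(count_failed_links(centers, pairs, max_separation=1.2))   # prints 1

If a check like this is cheap enough to run on every frame, we can catch a doomed simulation early instead of discovering the damage only at the very end.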
And now, let's try the final boss. A huge simulation with a thousand rubber bands. Let's see. Yes, the technique revealed that this has tons of failed links right from the start. And let's verify the more detailed version of the same simulation. Finally, this one worked perfectly. And not only that, but it also introduced my favorite guy, who wants no part of this and is just sliding away. So from now on, with this tool, we can not only strike a better balance of computation time and quality for linkage-based simulations, but if something is about to break, we don't just have to pray for the simulation to finish successfully, we will know in advance if there are any issues. What a time to be alive. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Two years ago, in 2019, scientists at OpenAI published an outstanding paper that showcased a robot arm that could dexterously manipulate a Rubik's cube. And how good were the results? Well, nothing short of spectacular. It could not only manipulate and solve the cube, but they could even hamstring the hand in many different ways, and it would still be able to do well. And I am telling you, scientists at OpenAI got very creative in tormenting this little hand. They added a rubber glove, tied multiple fingers together, threw a blanket on it, and pushed it around with a plush giraffe and a pen. It still worked, but probably had nightmares about that day in the lab. Robot nightmares, if you will. And now, let's see what is going on over at DeepMind's side and have a look at this work on manipulating objects. In this example, a human starts operating this contraption. And after that step is done, we leave it alone, and our first question is, how do we tell this robot arm what it should do? It doesn't yet know what the task is, but we can tell it through this reward sketching step. Essentially, this works like a video game. Not in the classical setting, but the other way around. Here we are not playing the video game, but we are the video game. And the robot is the character that plays the video game, in which we can provide it feedback and tell it when it is doing well or not. But then, from these rewards, it can learn what the task is. So, all that sounds great, but what do we get for all this work? Four things. One, we can instruct it to learn to lift deformables. This includes a piece of cloth, a rope, a soft ball, and more. Two, much like OpenAI's robot arm, it is robust against perturbations. In other words, it can recover from our evil machinations. Look, we can get creative here too. For instance, we can reorganize the objects on the table, nudge an already grabbed object out of its gripper, or simply just undo the stacks that are already done. This diligent little AI is not fazed, it just keeps on trying and trying, and eventually it succeeds. A great life lesson right there. And three, it generalizes to new object types well, and does not get confused by different object colors and geometries. And now, hold on to your papers for number four, because here comes one of the most frustrating tasks for any intelligent being, something that not many humans can perform, inserting a USB key correctly on the first try. Can it do that? Well, does this count as a first try? I don't know for sure, but dear fellow scholars, the singularity is officially getting very, very close, especially given that we can even move the machine around a little, and it will still find the correct port. If only this machine could pronounce my name correctly, then we would conclude that we have reached the singularity. But wait, we noted that first a human starts controlling the arm. How does the fully trained AI compare to this human's work? And here comes the best part. It learned to perform these tasks faster. Look, by the six second mark, the human started grabbing the green block, but by this time, the AI is already mid-air with it, and by nine seconds, it is done while the human is still at work. Excellent. And we get an 80% success rate on this task with only 8 hours of training. That is within a single working day.
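To make the reward sketching idea a bit more concrete, the human essentially annotates how well a handful of recorded frames are doing, and a small reward model is then fitted to those annotations so it can label all of the remaining frames automatically. Below is a toy sketch of that fitting step with made-up features and values; DeepMind's actual pipeline is, of course, far richer than a two-feature linear model.

import numpy as np

# Hypothetical features describing a frame, e.g. gripper-to-object distance
# and object height; the human sketches a reward for a few frames only.
frames = np.array([[0.40, 0.00],
                   [0.20, 0.00],
                   [0.05, 0.02],
                   [0.05, 0.15]])
sketched_reward = np.array([0.0, 0.2, 0.6, 1.0])   # human-drawn annotations

# Fit a tiny linear reward model to the sketches with least squares.
X = np.concatenate([frames, np.ones((len(frames), 1))], axis=1)
w, *_ = np.linalg.lstsq(X, sketched_reward, rcond=None)

def predicted_reward(frame_features):
    """Label any new frame with a reward, no human needed anymore."""
    return float(np.append(frame_features, 1.0) @ w)

print(predicted_reward(np.array([0.30, 0.00])))   # mid-reach: low reward
print(predicted_reward(np.array([0.05, 0.20])))   # object lifted: high reward

Once such a reward model exists, the agent can be trained against it on all of the recorded experience, without a human having to watch every single frame.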
One of the key ideas here, and the part that I liked best, is that we can essentially reprogram the AI with the reward sketching step. And I wonder what else this could be used for if we did that. Do you have some ideas? Let me know in the comments below. This episode has been supported by Weights & Biases. In this post, they show you how to use their tool to draw bounding boxes for object detection and, even more importantly, how to debug them. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that Weights & Biases is free for all individuals, academics, and open source projects. It really is as good as it gets. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today, we will see how a small change to an already existing learning-based technique can result in a huge difference in its results. This is StyleGAN2, a technique that appeared in December 2019. It is a neural network-based learning algorithm that is capable of synthesizing these eye-poppingly detailed images of human beings that don't even exist. This is all synthetic. It also supports a cool feature where we can give it a photo, then it embeds this image into a latent space, and in this space we can easily apply modifications to it. Okay, but what is this latent space thing? A latent space is a made-up place where we are trying to organize data in a way that similar things are close to each other. In our earlier work, we were looking to generate hundreds of variants of a material model to populate this scene. In this latent space, we can concoct all of these really cool digital material models. A link to this work is available in the video description. StyleGAN2 uses walks in a similar latent space to create these human faces and animate them. So, let's see that. When we take a walk in the internal latent space of this technique, we can generate animations. Let's see how StyleGAN2 does this. It is a true miracle that a computer can generate images like this. However, wait a second. Look closely. Did you notice it? Something is not right here. Don't despair if not, it is hard to pin down what the exact problem is, but it is easy to see that there is some sort of flickering going on. So, what is the issue? Well, the issue is that there are landmarks, for instance, the beard, which don't really or just barely move, and essentially the face is being generated under it with these constraints. The authors refer to this problem as texture sticking. The AI suffers from a sticky beard, if you will. Imagine saying that 20 years ago to someone, you would end up in a madhouse. Now, this new paper from scientists at Nvidia promises a tiny, but important architectural change. And we will see if this issue, which seems like quite a limitation, can be solved with it or not. And now, hold on to your papers and let's see the new method. Holy Mother of Papers! Do you see what I see here? The sticky beards are a thing of the past, and facial landmarks are allowed to fly about freely. And not only that, but the results are much smoother and more consistent, to the point that it can not only generate photorealistic images of virtual humans. Come on, that is so 2020. This generates photorealistic videos of virtual humans. So, I wonder, did the new technique also inherit the generality of StyleGAN2? Let's see, we know that it works on real humans, and now paintings and art pieces. Yes, excellent! And of course, cats and other animals as well. The small change that creates these beautiful results is what we call an equivariant filter design; essentially, this ensures that fine details move together in the inner thinking of the neural network. This is an excellent lesson on how a small and carefully designed architectural change can have a huge effect on the results. If we look under the hood, we see that the inner representation of the new method is completely different from its predecessor. You see, the features are indeed allowed to fly about, and the new method even seems to have invented a coordinate system of sorts to be able to move these things around. What an incredible idea!
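And what does taking a walk in the latent space mean in practice? Pick two latent vectors, slide between them, and decode every intermediate point with the generator to get one frame of animation each. Here is a minimal, generic sketch of that idea; the toy generator below is a made-up stand-in so the snippet runs, not the released network.

import numpy as np

def latent_walk(generator, z_start, z_end, num_frames=60):
    """Generate an animation by interpolating between two latent vectors
    and decoding each intermediate point with the generator."""
    frames = []
    for t in np.linspace(0.0, 1.0, num_frames):
        z = (1.0 - t) * z_start + t * z_end   # a straight-line walk
        frames.append(generator(z))
    return frames

# Stand-in 'generator': a fixed random projection of a 512-dimensional latent
# vector to a tiny 8x8 grayscale image, just so the sketch is runnable.
rng = np.random.default_rng(2)
W = rng.normal(0.0, 0.1, (512, 64))
toy_generator = lambda z: (z @ W).reshape(8, 8)

frames = latent_walk(toy_generator, rng.normal(size=512), rng.normal(size=512))
print(len(frames), frames[0].shape)   # 60 frames of 8x8 'images'

In the real system, the generator is a huge convolutional network, and the new equivariant filters are what keep the fine details moving together as we take this walk, instead of sticking in place.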
These learning algorithms are getting better and better with every published paper. Now, good news. It is only marginally more expensive to train and run than StyleGAN2, and the less good news is that training these huge neural networks still requires a great deal of computation. The silver lining is that once it has been trained, it can be run inexpensively for as long as we wish. So, images of virtual humans might soon become a thing of the past, because from now on, we can generate photorealistic videos of them. Absolutely amazing! What a time to be alive! This video has been supported by Weights & Biases. Check out their recent offering, Fully Connected, a place where they bring machine learning practitioners together to share and discuss their ideas, learn from industry leaders, and even collaborate on projects together. You see, I get messages from you fellow scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is! Fully Connected is a great way to learn about the fundamentals, how to reproduce experiments, get your papers accepted to a conference, and more. Make sure to visit them through wandb.me slash papers or just click the link in the video description. Our thanks to Weights & Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time!
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. This image is the result of a light simulation program created by research scientists. It looks absolutely beautiful, but the light simulation algorithm is only part of the recipe here. To create something like this, we also need a good artist who can produce high quality geometry, lighting, and of course, good, lifelike material models. For instance, without the materials part, we would see something like this. Not very exciting, right? Previously, we introduced a technique that learns our preferences and helps fill these scenes with materials. This work can also generate variants of the same materials as well. In a later technique, we could even take a sample image, completely destroy it in Photoshop, and our neural networks would find a photorealistic material that matches these crazy ideas. Links to both of these works are available in the video description. And to improve these digital materials, this new paper introduces something that the authors call a multi-resolution neural material representation. What is that? Well, it is something that is able to put amazingly complex material models in our light transport programs, and not only that, but... Oh my! Look at that! We can even zoom in so far that we see the snagged threads. That is the magic of the multi-resolution part of the technique. The neural part means that the technique looks at lots of measured material reflectance data. This is what describes a real-world material, and compresses this description down into a representation that is manageable. Okay, why? Well, look, here is a reference material. You see, these are absolutely beautiful, no doubt, but are often prohibitively expensive to store directly. This new method introduces these neural materials to approximate the real-world materials, but in a way that is super cheap to compute and store. So, our first question is, how do these neural materials compare to these real reference materials? What do you think? How much worse quality do we have to expect to be able to use these in our rendering systems? Well, you tell me, because you are already looking at the new technique right now. I quickly switched from the reference to the result with the new method already. How cool is that? Look, this was the expensive reference material, and this is the fast neural material counterpart for it. So, how hard is this to pull off? Well, let's look at some results side by side. Here is the reference, and here are two techniques from one and two years ago that try to approximate it. And you see that if we zoom in real close, these fine details are gone. Do we have to live with that? Or, maybe, can the new method do better? Hold on to your papers, and let's see. Wow! While it is not 100% perfect, there is absolutely no contest compared to the previous methods. It outperforms them handily in every single case of these complex materials I came across. And when I say complex materials, I really mean it. Look at how beautifully it captures not only the texture of this piece of embroidery, but when we move the light source around, oh wow! Look at the area here, around the vertical black stripe, and how its specular reflections change with the lighting. And note that none of these are real images, all of them come from a computer program. This is truly something else, loving it. So, if it really works so well, where is the catch? Does it only work on cloth-like materials? No, no, not in the slightest.
It also works really well on rocks, insulation foam, even turtle shells, and a variety of other materials. The paper contains a ton more examples than we can showcase here, so make sure to have a look in the video description. I guess this means that it requires a huge and expensive neural network to pull off, right? Well, let's have a look. Whoa! Now that's something. It does not require a deep and heavy-duty neural network, just four layers are enough. And this, by today's standards, is a lightweight network that can take these expensive reference materials and compress them down in a matter of milliseconds. And they almost look the same. Materials in our computer simulations straight from reality, yes please. So, from now on, we will get cheaper and better material models for animation movies, computer games, and visualization applications. Sign me up right now. What a time to be alive! This episode has been supported by Weights & Biases. In this post, they show you how to use their tool to analyze chest x-rays and more. If you work with learning algorithms on a regular basis, make sure to check out Weights & Biases. Their system is designed to help you organize your experiments, and it is so good it could shave off weeks or even months of work from your projects, and is completely free for all individuals, academics, and open source projects. This really is as good as it gets, and it is hardly a surprise that they are now used by over 200 companies and research institutions. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support, and for helping us make better videos for you. Thanks for watching, and for your generous support, and I'll see you next time!
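To give a rough feel for what a small, four-layer neural material could look like, here is a hedged Python sketch. The input layout (texture coordinates plus light and view directions), the layer widths, and the untrained random weights are all illustrative assumptions; the paper's actual architecture and training procedure are more involved.

```python
# Sketch of the core idea: a small MLP that maps a material query
# (surface position + light/view directions) to an RGB reflectance value.
# Layer sizes and inputs are illustrative, not the paper's exact design.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

class TinyNeuralMaterial:
    """Four fully connected layers, as a stand-in for a 'neural material'."""
    def __init__(self, in_dim=8, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        dims = [in_dim, hidden, hidden, hidden, 3]   # four weight layers -> RGB
        self.weights = [rng.standard_normal((a, b)) * 0.1
                        for a, b in zip(dims[:-1], dims[1:])]
        self.biases = [np.zeros(b) for b in dims[1:]]

    def forward(self, x):
        for i, (w, b) in enumerate(zip(self.weights, self.biases)):
            x = x @ w + b
            if i < len(self.weights) - 1:            # hidden layers use ReLU
                x = relu(x)
        return 1.0 / (1.0 + np.exp(-x))              # squash output to [0, 1] RGB

# Query: (u, v) texture coords + light direction + view direction (assumed layout).
query = np.array([0.25, 0.75, 0.0, 0.0, 1.0, 0.3, 0.3, 0.9])
material = TinyNeuralMaterial()
print("Predicted RGB reflectance:", material.forward(query))
```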
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. I am stunned by this new graphics paper that promises to simulate three-way coupling and enables beautiful surface tension simulations like this and this and more. Yes, none of this is real footage. These are all simulated on a computer. I have seen quite a few simulations and I am still baffled by this. How is this even possible? Also, three-way coupling, eh? That is quite peculiar, to the point that the term doesn't even sound real. Let's find out why together. So what does that mean exactly? Well, first let's have a look at one-way coupling. As the box moves here, it has an effect on the smoke plume around it. This example also showcases one-way coupling, where the falling plate stirs up the smoke around it. And now on to two-way coupling. In this case, similarly to previous ones, the boxes are allowed to move the smoke, but the added two-way coupling part means that now the smoke is also allowed to blow away the boxes. What's more, the vortices here on the right were even able to suspend the red box in the air for a few seconds, an excellent demonstration of a beautiful phenomenon. So coupling means interaction between different kinds of objects, and two-way coupling seems like the real deal. Here it is also required to compute how this fiery smoke trail propels the rocket upward. But wait, we just mentioned that the new method performs three-way coupling. Two-way was solid-fluid interactions and it seemed absolutely amazing, so what is the third element then? And why do we even need that? Well, depending on what object is in contact with the liquid, gravity, buoyancy and surface tension forces need additional considerations. To be able to do this, now look carefully. Yes, there is the third element: it simulates this thin liquid membrane too, which is in interaction with the solid and the fluid at the same time. And with that, please meet three-way coupling. So what can it do? It can simulate this paperclip floating on water. That is quite remarkable because the density of the paperclip is 8 times as much as the water itself, and yet it still sits on top of the water. But how is that possible? Especially given that gravity wants to constantly pull down a solid object. Well, it has two formidable opponents, two forces that try to counteract it: one is buoyancy, which is an upward force, and two, the capillary force, which is a consequence of the formation of a thin membrane. If these two friends are as strong as gravity, the object will float. But this balance is very delicate, for instance, in the case of milk and cherries, this happens. And during that time, the simulator creates a beautiful, bent liquid surface that is truly a sight to behold. Once again, all of this footage is simulated on a computer. The fact that this new work can simulate these three physical systems is a true miracle. Absolutely incredible. Now, if you have been holding on to your papers so far, squeeze that paper because we will now do my favorite thing in any simulation paper, and that is when we let reality be our judge and compare the simulated results to real footage. This is a photograph. And now comes the simulation. Whoa! I have to say, if no one told me which is which, I might not be able to tell. And I am so delighted by this fact that I had to ask the authors to double-check whether this really is a simulation and whether they managed to reproduce the illumination of these scenes perfectly. Yes, they did. Fantastic attention to detail. Very impressive.
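As a rough sanity check on the floating paperclip, here is a back-of-the-envelope Python sketch comparing its weight against an upper bound on the surface-tension force along the contact line. The wire dimensions and the decision to ignore buoyancy are simplifying assumptions, not numbers from the paper.

```python
# Back-of-the-envelope check of why a steel paperclip can rest on water:
# compare its weight against a rough upper bound on the surface-tension
# (capillary) force along the wetted contact line. All numbers are
# approximate assumptions for a typical paperclip.
import math

g = 9.81                      # m/s^2
rho_steel = 8000.0            # kg/m^3, roughly 8x the density of water
sigma_water = 0.072           # N/m, surface tension of water at room temperature

wire_diameter = 1.0e-3        # m (about 1 mm of steel wire)
wire_length = 0.10            # m (total straightened length, ~10 cm)

volume = math.pi * (wire_diameter / 2) ** 2 * wire_length
weight = rho_steel * volume * g

# The contact line runs along both sides of the wire lying on the surface.
contact_line = 2.0 * wire_length
capillary_force_max = sigma_water * contact_line   # upper bound: fully vertical pull

print(f"weight          ~ {weight * 1e3:.2f} mN")
print(f"capillary (max) ~ {capillary_force_max * 1e3:.2f} mN")
print("floats" if capillary_force_max > weight else "sinks")
```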
So how long do we have to wait for all this? For a two-dimensional scene, it pretty much runs interactively. That is great news. And we are firmly in the seconds-per-frame region for the 3D scenes, but look, the boat and leaf scene runs in less than two seconds per time step. That is absolutely amazing. Not real time, because one frame contains several time steps, but why would it be real time? This is the kind of paper that makes something previously impossible, possible, and it even does that swiftly. I would wager we are just one or at most two more papers away from getting this in real time. This is unbelievable progress in just one paper. And all handcrafted, no learning algorithms anywhere to be seen. Huge congratulations to the authors. What a time to be alive. PerceptiLabs is a visual API for TensorFlow, carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers to easily install the free local version of their system today. Our thanks to PerceptiLabs for their support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Look at this video of a moving escalator. Nothing too crazy going on, only the escalator is moving. And I am wondering, would it be possible to not record a video for this, just an image, and have one of these amazing learning-based algorithms animate it? Well, that is easier said than done. Look, this is what was possible with the research work from two years ago, but the results are, well, what you see here. So how about a method from one year ago? This is the result, a great deal of improvement, but the water is not animated in this region and is generally all over the place, and we still have a lot of artifacts around the fence. And now hold on to your papers and let's see this new method, and… Whoa! Look at that! What an improvement! Apparently, we can now give this one a still image, and for the things that should move, it makes them move. It is still not perfect by any means, but this is so much progress in just two years. And there's more, and get this: for the things that shouldn't move, it even imagines how they should move. It works on this building really well, but it also imagines how my tie would move around, or my beard, which is not mine by the way, but was made by a different AI, or the windows. Thank you very much to the authors of the paper for generating these results only for us. And this can lead to really cool artistic effects, for instance, this moving brick wall, or animating the stairs here, loving it. So how does this work exactly? Does it know what regions to animate? No, it doesn't, and it shouldn't. We can specify that ourselves by using a brush to highlight the region that we wish to see animated, and we also have a little more artistic control over the results by prescribing a direction in which things should go. And it appears to work on a really wide variety of images, which is only one of its most appealing features. Here are some of my favorite results. I particularly love the one with the upper rotation here. Very impressive. Now, let's compare it to the previous method from just one year ago, and let's see what the numbers say. Well, they say that the previous method performs better on fluid elements than the new one. My experience is that it indeed works better on specialized cases, like this fire texture, but on many water images, they perform roughly equivalently. Both are doing quite well. So is the new one really better? Well, here comes the interesting part. When presented with a diverse set of images, look, there is no contest here. The previous one creates no results, incorrect results, or if it does something, the new technique almost always comes out way better. Not only that, but let's see what the execution time looks like for the new method. How much do we have to wait for these results? The one from last year took 20 seconds per image and required a big honking graphics card, while the new one only needs your smartphone and runs in… what? Just one second. Loving it. So what images would you try with this one? Let me know in the comments below. Well, in fact, you don't need to just think about what you would try, because you can try this yourself. The link is available in the video description. Make sure to let me know in the comments below if you had some success with it. And here comes the even more interesting part. The previous method was using a learning-based algorithm, while this one is a bona fide, almost completely handcrafted technique.
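To illustrate the brush-plus-direction interface described above, here is a deliberately simplified Python sketch that just shifts the masked pixels a little further each frame. The real technique warps the image far more carefully; the mask region and direction below are made-up values.

```python
# Sketch of the user-facing idea: a painted mask selects which pixels move,
# and a single direction vector tells them which way to drift each frame.
# This simple per-frame pixel shift is an illustration, not the paper's method.
import numpy as np

def animate(image, mask, direction, num_frames=10):
    """Shift masked pixels by `direction` a bit more every frame."""
    h, w = image.shape[:2]
    dy, dx = direction
    frames = []
    for f in range(1, num_frames + 1):
        frame = image.copy()
        ys, xs = np.nonzero(mask)
        new_ys = np.clip(ys + f * dy, 0, h - 1)
        new_xs = np.clip(xs + f * dx, 0, w - 1)
        frame[new_ys, new_xs] = image[ys, xs]    # copy selected pixels to shifted spots
        frames.append(frame)
    return frames

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 10:50] = True                        # "brush stroke" over the water region
frames = animate(img, mask, direction=(0, 1))    # drift one pixel to the right per frame
print(f"{len(frames)} frames generated")
```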
This is partly because training neural networks requires a great deal of training data, and there are very few, if any, training examples for moving buildings and these other surreal phenomena. Ingenious. Huge congratulations to the authors for pulling this off. Now, of course, not even this technique is perfect; there are still cases where it does not create appealing results. However, since it only takes a second to compute, we can easily retry with a different pixel mask or direction and see if it does better. And just imagine what we will be able to do two more papers down the line. What a time to be alive. This video has been supported by Weights & Biases. They have an amazing podcast by the name Gradient Descent, where they interview machine learning experts who discuss how they use learning-based algorithms to solve real-world problems. They have discussed biology, teaching robots, machine learning in outer space, and a whole lot more. Perfect for a fellow scholar with an open mind. Make sure to visit them through wandb.me slash gd or just click the link in the video description. Our thanks to Weights & Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Have you ever got stuck in a video game? Or found a glitch that would prevent you from finishing it? As many of you know, most well-known computer games undergo a ton of playtesting, an important step that is supposed to unveil these issues. So, how is it possible that all these bugs and glitches still make it to the final product? Why did the creators not find these issues? Well, you see, playtesting is often done by humans. That sounds like a good thing, and it often is, but here comes the problem. Whenever we change something in the game, our changes may also have unintended consequences somewhere else, away from where we applied them. New oversights may appear elsewhere, for instance, moving a platform may make the level more playable; however, look, this might also happen. The player may now be able to enter a part of the level that shouldn't be accessible, or be more likely to encounter a collision bug and get stuck. Unfortunately, all this means that it's not enough to just test what we have changed, but we have to retest the whole level, or maybe the whole game itself. For every single change, no matter how small. That not only takes a ton of time and effort, but is often flat out impractical. So, what is the solution? Apparently, a proper solution would require asking tons of curious humans to test the game. But wait a second, we already have curious learning-based algorithms. Can we use them for playtesting? That sounds amazing. Well, yes, until we try it. You see, here is an automated agent, but a naive one, trying to explore the level. Unfortunately, it seems to have missed half the map. Well, that's not the rigorous testing we are looking for, is it? Let's see what this new AI offers. Can it do any better? Oh, my, now we're talking. The new technique was able to explore not just 50%, but a whopping 95% of the map. Excellent. But we are experienced fellow scholars over here. So of course, we have some questions. Apparently, this one has great coverage, so it can cruise around, great. But our question is, can these AI agents really find game-breaking issues? Well, look. We just found a bug where it could climb to the top of the platform without having to use the elevator. It can also build a graph that describes which parts of the level are accessible and through what path. Look, this visualization tells us about the earlier issue where one could climb the wall through an unintentional issue, and after the level designer supposedly fixed it by adjusting the steepness of the wall, let's see the new path. Yes. Now, it could only get up there by using the elevator. That is the intended way to traverse the level. Excellent. And it gets better. It can even tell us the trajectories that enabled it to leave the map, so we know exactly what issues we need to fix without having to look through hours and hours of video footage. But whenever we apply the fixes, we can easily unleash another bunch of these AIs to search every nook and cranny and try these crazy strategies, even ones that don't make any sense but appear to work well. So, how long does this take? Well, the new method can explore half the map in approximately an hour or two, can explore 90% of the map in about 28 hours, and if we give it a couple more days, it goes up to about 95%. That is quite a bit, so we don't get immediate feedback as soon as we change something, since this method is geared towards curiosity and not efficiency.
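To make the coverage numbers a bit more tangible, here is a small Python sketch of how "percentage of the map explored" can be measured: discretize the level into grid cells and count the ones an agent has visited. The random-walk agent and the 40x30 grid are placeholders, not the paper's curiosity-driven agent.

```python
# Sketch of how "percentage of the map explored" can be measured: discretize
# the level into a grid of cells and record every cell the agent visits.
# The random-walk agent below is a placeholder for the paper's curious agent.
import random

GRID_W, GRID_H = 40, 30            # level discretized into 40x30 cells (assumption)

def run_playtest(steps=20000, seed=1):
    random.seed(seed)
    x, y = GRID_W // 2, GRID_H // 2
    visited = {(x, y)}
    for _ in range(steps):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x = min(max(x + dx, 0), GRID_W - 1)
        y = min(max(y + dy, 0), GRID_H - 1)
        visited.add((x, y))
    return len(visited) / (GRID_W * GRID_H)

coverage = run_playtest()
print(f"map coverage: {coverage:.0%}")
```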
Note that this is just the first crack at the problem, and I would not be surprised if just one more paper down the line, this would take about an hour, and two more papers down the line, it might even be done in a matter of minutes. What a time to be alive! This episode has been supported by Weights & Biases. In this post, they show you how to use sweeps to automate hyper-parameter optimization and explore the space of possible models and find the best one. During my PhD studies, I trained a ton of neural networks which were used in our experiments. However, over time, there was just too much data in our repositories, and what I am looking for is not data, but insight. And that's exactly how Weights & Biases helps you, by organizing your experiments. It is used by more than 200 companies and research institutions, including OpenAI, Toyota Research, GitHub and more. And get this, Weights & Biases is free for all individuals, academics and open source projects. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. We just hit a million subscribers. I can hardly believe that so many of you fellow scholars are enjoying the papers. Thank you so much for all the love. In the previous episode, we explored an absolutely insane idea. The idea was to unleash a learning algorithm on a dataset that contains images and videos of cities. Then take a piece of video footage from a game and translate it into a real movie. It is an absolute miracle that it works, and it not only works, but it works reliably and interactively. It also works much better than its predecessors. Now we discussed that the input video game footage is pretty detailed. And I was wondering, what if we don't create the entire game in such detail? What about creating just the bare minimum, a draft of the game, if you will, and let the algorithm do the heavy lifting? Let's call this World to World Translation. So is World to World Translation possible or is this science fiction? Fortunately, scientists at Nvidia and Cornell University thought of that problem and came up with a remarkable solution. But the first question is, what form should this draft take? And they say it should be a Minecraft world or in other words, a landscape assembled from little blocks. Yes, that is simple enough indeed. So this goes in, and now let's see what comes out. Oh my, it created water, it understands the concept of an island, and it created a beautiful landscape also with vegetation. Insanity. It even seems to have some concept of reflections, although they will need some extra work to get perfectly right. But what about artistic control? Do we get this one solution or can we give more detailed instructions to the technique? Yes, we can. Look at that. Since the training data contains desert and snowy landscapes too, it also supports them as outputs. Whoa, this is getting wild. I like it. And it even supports interpolation, which means that we can create one landscape and ask the AI to create a blend between different styles. We just look at the output animations and pick the one that we like best. Absolutely amazing. What I also really liked is that it also supports rendering fog. But this is not some trivial fog technique. No, no, look at how beautifully it occludes the trees. If we look under the hood. Oh my, I am a light transport researcher by trade, and boy, am I happy to see the authors having done their homework. Look, we are not playing games here. The technique contains bona fide volumetric light transmission calculations. Now, this is not the first technique to perform this kind of world-to-world translation. What about the competition? As you see, there are many prior techniques here, but there is one key issue that almost all of them share. So, what is that? Oh yes, much like with the other video game papers, the issue is the lack of temporal coherence, which means that the previous techniques don't remember what they did a few images earlier and may create a drastically different series of images. And the result is this kind of flickering that is often a deal breaker, regardless of how good the technique is otherwise. Look, the new method does this significantly better. This could help level generation for computer games, creating all kinds of simulations, and if it improves some more, these could maybe even become backdrops to be used in animated movies. Now, of course, this is still not perfect, some of the outputs are still blocky.
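Going back to the fog for a moment, here is a tiny Python sketch of the volumetric transmittance idea behind it: along a ray through a homogeneous medium, the surviving fraction of light follows the Beer-Lambert law. The extinction coefficient and distances below are arbitrary illustrative values, not parameters from the paper.

```python
# Sketch of volumetric transmittance: along a ray of length d through a
# medium with extinction coefficient sigma, the fraction of light that
# survives is T = exp(-sigma * d) (Beer-Lambert law).
import math

def transmittance(sigma: float, distance: float) -> float:
    """Fraction of light surviving a straight path through homogeneous fog."""
    return math.exp(-sigma * distance)

sigma_fog = 0.05                       # extinction per meter (assumption)
for d in (5.0, 20.0, 60.0):
    t = transmittance(sigma_fog, d)
    print(f"tree at {d:>4.0f} m: {t:.0%} of its light reaches the camera")
```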
But, with this method, creating virtual worlds has never been easier. I cannot believe that we can have a learning-based algorithm where the input is one draft world and it transforms it to a much more detailed and beautiful one. Yes, it has its limitations, but just imagine what we will be able to do two more papers down the line. Especially given that the quality of the results can be algorithmically measured, which is a godsend for comparing this to future methods. And for now, huge congratulations to Nvidia and Cornell University for this amazing paper. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. Through the power of computer graphics research, today we can write wondrous programs which can simulate all kinds of interactions in virtual worlds. These works are some of my favorites, and looking at the results, one would think that these algorithms are so advanced that there's hardly anything new left to invent in this area. But as amazing as these techniques are, they don't come without limitations. Alright, well, what do those limitations look like? Let's have a look at this example. The water is coming out of the nozzle, and it behaves unnaturally. But that's only the smaller problem, there is an even bigger one. What is that problem? Let's slow this down, and look carefully. Oh, where did the water go? Yes, this is the classical numerical dissipation problem in fluid simulations, where due to an averaging step, particles disappear into thin air. And now, hold on to your papers and let's see if this new method can properly address this problem. And, oh yes, fantastic. So it dissipates less. Great, what else does this do? Let's have a look through this experiment, where a wheel gets submerged into sand, that's good, but the simulation is a little mellow. You see, the sand particles are flowing down like a fluid, and the wheel does not really roll up the particles in the air. And the new one. So apparently, it not only helps with numerical dissipation, but also with particle separation too. More value. I like it. If this technique can really solve these two phenomena, we don't even need sandy tires and water sprinklers to make it shine. There are so many scenarios where it performs better than previous techniques. For instance, when simulating this frictionless elastic plate with previous methods, some of the particles get glued to it. And did you catch the other issue? Yes, the rest of the particles also refuse to slide off of each other. And now, let's see the new method. Oh my, it can simulate this phenomenon correctly too. And it does not stop there, it also simulates strand-strand interactions better than previous methods. In these cases, sometimes the collision of short strands with boundaries was also simulated incorrectly. Look at how all this geometry intersected through the brush. And the new method? Yes, of course, it addresses these issues too. So if it can simulate the movement and intersection of short strands better, does that mean that it can also perform higher quality hair simulations? Oh yes, yes it does. Excellent. So as you see, the pace of progress in computer graphics research is absolutely stunning. Things that were previously impossible can become possible in a matter of just one paper. What a time to be alive. PerceptiLabs is a visual API for TensorFlow, carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers to easily install the free local version of their system today. Our thanks to PerceptiLabs for their support and for helping us make better videos for you.
Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, it is possible to teach virtual characters to perform highly dynamic motions like a cartwheel or backflips. And not only that, but we can teach an AI to perform this differently from other characters, to do it with style, if you will. But today we are not looking to be stylish, today we are looking to be efficient. In this paper, researchers placed an AI in a physics simulation and asked it to control a virtual character and gave it one task: to jump as high as it can. And when I heard this idea, I was elated and immediately wondered, did it come up with popular techniques that exist in the real world? Well, let's see... Yes! Oh, that is indeed a Fosbury flop. This allows the athlete to jump backwards over the bar, thus lowering their center of gravity. Even today, this is the prevalent technique in high-jump competitions. With this technique, the take-off takes place relatively late. The only problem is that the AI didn't clear the bar so far, so can it? Well, this is a learning-based algorithm, so with a little more practice it should improve... Yes! Great work! If we lower the bar just a tiny bit for this virtual athlete, we can also observe it performing the Western roll. With this technique, we take off a little earlier and we don't jump backward, but sideways. If it had nothing else, this would already be a great paper, but we are not nearly done yet. The best is yet to come. This is a simulation, a virtual world, if you will, and here we make all the rules. So, the limit is only our imagination. The authors know that very well, and you will see that they indeed have a very vivid imagination. For instance, we can also simulate a jump with a weak take-off leg and see that with this condition, the little AI can only clear a bar that is approximately one foot lower than its previous record. What about another virtual athlete with an inflexible spine? It can jump approximately two feet lower. Here's the difference compared to the original. I am enjoying this a great deal, and it's only getting better. Next, what happens if we are injured and have a cast on the take-off knee? What results can we expect? Something like this. We can jump a little more than two feet lower. What about organizing the Olympics on Mars? What would that look like? What would the world record be with the weaker gravity there? Well, hold on to your papers and look. Yes, we could jump three feet higher than on Earth, and then... ouch. Well, missed the foam mat, but otherwise very impressive. And if we are already there, why limit the simulation to high jumps? Why not try something else? Again, in a simulation, we can do anything. Previously, the task was to jump over the bar, but we can also recreate the simulation to include, instead, jumping through obstacles. To get all of these magical results, the authors propose a step they call Bayesian Diversity Search. This helps systematically create a rich selection of novel strategies, and it does this efficiently. The authors also went the extra mile and included a comparison to motion capture footage performed by a real athlete. But note that the AI's version uses a similar technique and is able to clear a significantly higher bar without ever seeing a high jump move. The method was trained on motion capture footage to get used to human-like movements like walking, running, and kicks, but it has never seen any high jump techniques before. Wow!
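As a very loose illustration of the diversity-search idea, here is a Python sketch that repeatedly proposes candidate strategies and keeps whichever one is farthest from everything found so far. The real Bayesian Diversity Search uses a surrogate model to do this far more efficiently, and the two-dimensional "angle and speed" features below are purely made up for this example.

```python
# Highly simplified stand-in for the diversity-search idea: propose candidate
# strategies (here, 2D feature vectors such as take-off angle and speed) and
# keep the one farthest from everything discovered so far. The paper's method
# additionally uses a Bayesian surrogate model, which this sketch omits.
import random
import math

def diversity_search(rounds=10, candidates_per_round=50, seed=0):
    random.seed(seed)
    archive = [(0.5, 0.5)]                      # start with one known strategy
    for _ in range(rounds):
        proposals = [(random.random(), random.random())
                     for _ in range(candidates_per_round)]
        # Novelty = distance to the nearest strategy already in the archive.
        best = max(proposals,
                   key=lambda p: min(math.dist(p, a) for a in archive))
        archive.append(best)
    return archive

strategies = diversity_search()
for i, s in enumerate(strategies):
    print(f"strategy {i}: angle={s[0]:.2f}, speed={s[1]:.2f}")
```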
So, if this can invent high jumping techniques that took decades for humans to invent, I wonder what else it could invent. What do you think? Let me know in the comments below. What a time to be alive! This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. In a previous episode, not so long ago, we burned down a virtual tree. This was possible through an amazing simulation paper from four years ago, where each leaf has its own individual mass and area. They burn individually, transfer heat to their surroundings, and finally, branches bend and, look, can eventually even break in this process. How quickly did this run? Of course, in real time. Well, that is quite a paper, so if this was so good, how does anyone improve that? Burn down another virtual tree? No, no, that would be too easy. You know what? Instead, let's set on fire an entire virtual forest. Oh yeah! Here you see a simulation of a devastating fire from a lightning strike in Yosemite National Park. The simulations this time around are typically sped up a great deal to be able to give us a better view of how it spreads, so if you see some flickering, that is the reason for that. But wait, is that really that much harder? Why not just put a bunch of trees next to each other and start the simulation? Would that work? The answer is a resounding no. Let's have a look why, and with that, hold onto your papers because here comes the best part. It also simulates not only the fire, but cloud dynamics as well. Here you see how the wildfire creates lots of hot and dark smoke closer to the ground, and wait for it. Yes, there we go. Higher up, the condensation of water creates this lighter, cloudy region. Yes, this is key to the simulation, not because of the aesthetic effects, but this wildfire can indeed create a cloud type that goes by the name flammagenitus. So, is that good or bad news? Well, both. Let's start with the good news, it often produces rainfall which helps put out the fire. Well, that is wonderful news, so then what is so bad about it? Well, flammagenitus clouds may also trigger a thunderstorm and thus create another huge fire. That's bad news number one. And bad news number two, it also occludes the fire, thereby making it harder to locate and extinguish it. So, got it, add cloud dynamics to the tree fire simulator and we are done. Right? No, not even close. In a forest fire simulation, not just clouds, everything matters. For instance, we first need to take into consideration the wind intensity and direction. This can mean the difference between a manageable or a devastating forest fire. Second, it takes into consideration the density and moisture content of different tree types. For instance, you see that the darker trees here are burning down really slowly. Why is that? That is because these trees are denser, birches and oak trees. Third, the distribution of the trees also matters. Of course, the more area is covered by trees, the more degrees of freedom there are for the fire to spread. And fourth, fire can not only spread horizontally from tree to tree, but vertically too. Look, when a small tree catches fire, this can happen. So, as we established from the previous paper, one tree catching fire can be simulated in real time. So, what about an entire forest? Let's take the simulation with the most number of trees. My goodness, they simulated 120k trees here. And the computation time for one simulation step was 95. So, 95 what? 95 milliseconds. Wow! So, this thing runs interactively, which means that all of these phenomena can be simulated in close to real time.
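To give a flavor of how factors like wind and moisture enter such a simulation, here is a toy cellular-automaton sketch in Python. It is nowhere near the paper's physically based model; the grid size, ignition probabilities, and wind handling are all invented for illustration.

```python
# Toy cellular-automaton illustration of the spreading factors mentioned above:
# each cell is a tree that ignites with a probability depending on burning
# neighbors, wind direction, and moisture. This only illustrates how those
# factors can enter a simulation; it is not the paper's physical model.
import random

W, H, STEPS = 30, 30, 25
WIND = (1, 0)              # wind blowing in +x: fire spreads east more easily
MOISTURE = 0.3             # 0 = bone dry, 1 = soaked

random.seed(3)
grid = [["tree"] * W for _ in range(H)]
grid[H // 2][W // 2] = "fire"          # lightning strike in the middle

def step(grid):
    new = [row[:] for row in grid]
    for y in range(H):
        for x in range(W):
            if grid[y][x] != "tree":
                continue
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W and grid[ny][nx] == "fire":
                        p = 0.45 * (1.0 - MOISTURE)      # base ignition chance
                        if (dx, dy) == (-WIND[0], -WIND[1]):
                            p *= 2.0   # this cell is downwind of the burning neighbor
                        if random.random() < p:
                            new[y][x] = "fire"
    for y in range(H):                  # burning cells turn to ash after one step
        for x in range(W):
            if grid[y][x] == "fire":
                new[y][x] = "burned"
    return new

for _ in range(STEPS):
    grid = step(grid)
burned = sum(row.count("burned") for row in grid)
print(f"burned cells after {STEPS} steps: {burned} / {W * H}")
```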
With that, we can now model how a fire would spread in real forests around the world, test different kinds of fire barriers and their advantages, and we can even simulate how to effectively put out the fire. And don't forget, we went from simulating one burning tree to 120k in just one more paper down the line. What a time to be alive! This video has been supported by Weights & Biases. They have an amazing podcast by the name Gradient Descent, where they interview machine learning experts who discuss how they use learning-based algorithms to solve real-world problems. They've discussed biology, teaching robots, machine learning in outer space, and a whole lot more. Perfect for a fellow scholar with an open mind. Make sure to visit them through wandb.me slash gd or just click the link in the video description. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. I am a light transport researcher by trade, and due to popular request, today I am delighted to show you these beautiful results from a research paper. This is a new light simulation technique that can create an image like this. This looks almost exactly like reality. It can also make an image like this. And have a look at this. This is a virtual object that glitters. Oh my goodness, absolutely beautiful, right? Well, believe it or not, this third result is wrong. Now you see, it is not so bad, but there is a flaw in it somewhere. By the end of this video you will know exactly where and what is wrong. So, light transport, eh? How do these techniques work anyway? We can create such an image by simulating the path of millions and millions of light rays. And initially this image will look noisy, and as we add more and more rays, this image will slowly clean up over time. If we don't have a well optimized program, this can take from hours to days to compute for difficult scenes. For instance, this difficult scene took us several weeks to compute. Okay, so what makes a scene difficult? Typically, caustics and specular light transport. What does that mean? Look, here we have a caustic pattern that takes many many millions if not billions of light rays to compute properly. This can get tricky because these are light paths that we are very unlikely to hit with randomly generated rays. So how do we solve this problem? Well, one way of doing it is not trusting random light rays, but systematically finding these caustic light paths and computing them. This fantastic paper does exactly that. So let's look at one of those classic close-ups that are the hallmark of any modern light transport paper. Let's see. Yes. On this scene you see beautiful caustic patterns under these glossy metallic objects. Let's see what a simple random algorithm can do with this with an allowance of two minutes of rendering time. Well, do you see any caustics here? Do you see any bright points? These are the first signs of the algorithm finding small point samples of the caustic pattern, but that's about it. It would take at the very least several days for this algorithm to compute the entirety of it. This is what the fully rendered reference image looks like. This is the one that takes forever to compute. Quite different, right? Let's allocate two minutes of our time for the new method and see how well it does. Which one will it be closer to? Can it beat the naive algorithm? Now hold onto your papers and let's see together. And on this part, it looks almost exactly the same as the reference. This is insanity. A converged caustic region in two minutes. Whoa! The green close-up is also nearly completely done. Now, not everything is sunshine and rainbows. Look, the blue close-up is still a bit behind, but it still beats the naive algorithm handily. That is quite something. And yes, it can also render these beautiful underwater caustics as well in as little as five minutes. Five minutes. And I would not be surprised if many people would think this is an actual photograph from the real world. Loving it. Now, what about the glittery origami scene from the start of the video? This one. Was that footage really wrong? Yes, it was. Why? Well, look here. These glittery patterns are unstable. The effect is especially pronounced around here.
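Before getting back to the origami scene, here is a tiny Python illustration of why a path-traced image "cleans up" as more rays are added: each pixel is a Monte Carlo estimate whose error shrinks roughly with the square root of the sample count. The toy integrand below stands in for a real scene and is not from the paper.

```python
# Tiny illustration of why a path-traced image "cleans up" as we add rays:
# each pixel is a Monte Carlo estimate whose error shrinks roughly as
# 1/sqrt(N). Here we estimate a single pixel's brightness from a toy
# integrand with a known mean of 0.75.
import random
import math

def incoming_light(u: float) -> float:
    """Toy 'scene': brightness as a function of a 1D random ray parameter."""
    return 0.5 + 0.5 * math.sin(4.0 * math.pi * u) ** 2

TRUE_VALUE = 0.75              # analytic mean of the toy integrand over [0, 1]

random.seed(0)
for n in (16, 256, 4096, 65536):
    estimate = sum(incoming_light(random.random()) for _ in range(n)) / n
    print(f"{n:>6} rays -> estimate {estimate:.4f}  (error {abs(estimate - TRUE_VALUE):.4f})")
```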
This instability arises from the fact that the technique does not take the curvature of this object into consideration correctly when computing the image. Let's look at the corrected version and... Oh my goodness. No unnecessary flickering anywhere to be seen, just the beautiful glitter slowly changing as we rotate the object around. I could stare at this all day. Now, note that these kinds of glints are much more practical than most people would think. For instance, it also has a really pronounced effect when rendering a vinyl record and many other materials as well. So, from now on, we can render photorealistic images of difficult scenes with caustics and glitter not in a matter of days, but in a matter of minutes. What a time to be alive. And when watching all these beautiful results, if you are thinking that this light transport thing is pretty cool and you would like to learn more about it, I held a master-level course on this topic at the Technical University of Vienna. Since I was always teaching it to a handful of motivated students, I thought that the teachings shouldn't only be available for the privileged few who can afford a college education, but the teachings should be available for everyone. Free education for everyone, that's what I want. So, the course is available free of charge for everyone, no strings attached, so make sure to click the link in the video description to get started. We write a full light simulation program from scratch there and learn about physics, the world around us, and more. This episode has been supported by Weights & Biases. In this post, they show you how you can get an email or Slack notification when your model crashes. With this, you can check on your model performance on any device. Heavenly. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that Weights & Biases is free for all individuals, academics, and open source projects. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. With the increased popularity of online meetings, telepresence applications are on the rise, where we can talk to each other from afar. Today, let's see how these powerful, new neural network-based learning methods can be applied to them. It turns out they can help us do everyone's favorite, which is showing up to a meeting and changing our background to pretend we are somewhere else. Now that is a deceptively difficult problem. Here the background has been changed, that is the easier problem, but look, the lighting of the new environment hasn't been applied to the subject. And now hold on to your papers and check this out. This is the result of the new technique after it recreates the image as if she was really there. I particularly like the fact that the results include high quality specular highlights too, or in other words, the environment reflecting off of our skin. However, of course, this is not the first method attempting this. So, let's see how it performs compared to the competition. These techniques are from one and two years ago, and they don't perform so well. Not only did they lose a lot of detail all across the image, but the specular highlights are gone. As a result, the image feels more like a video game character than a real person. Luckily, the authors also have access to the reference information to make our job comparing the results easier. Roughly speaking, the more the outputs look like this, the better. So now hold on to your papers and let's see how the new method performed. Oh yes, now we're talking. Now of course, not even this is perfect. Clearly, the specularity of clothing was determined incorrectly, and the matting around the thinner parts of the hair could be better, which is notoriously difficult to get right. But this is a huge step forward in just one paper. And we are not nearly done. There are two more things that I found to be remarkable about this work. One is that the whole method was trained on still images, yet it still works on video too. And we don't have any apparent temporal coherence issues, or in other words, no flickering arises from the fact that it processes the video not as a video, but as a series of separate images. Very cool. Two, if we are in a meeting with someone and we really like their background, we can simply borrow it. Look, this technique can take their image, get the background out, estimate its lighting, and give the whole package to us too. I think this will be a game changer. People may start to become more selective with these backgrounds, not just because of how the background looks, but because of how it makes them look. Remember, lighting off of a well chosen background makes a great deal of a difference in our appearance in the real world, and now, with this method, in virtual worlds too. And this will likely happen not decades from now, but in the near future. So, this new method is clearly capable of some serious magic. But how? What is going on under the hood to achieve this? This method performs two important steps to accomplish this. Step number one is matting. This means separating the foreground from the background, and then, if done well, we can now easily cut out the background and also have the subject on a separate layer and proceed to step number two, which is relighting. In this step, the goal is to estimate the illumination of the new scene and recolor the subject as if she were really there.
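To summarize the two steps in code form, here is a deliberately crude Python compositing sketch: an alpha matte separates the person from the background, and a simple diffuse-plus-specular term recolors the foreground under a light color guessed from the new backdrop. The lighting model and all values here are oversimplified assumptions, not the paper's actual relighting network.

```python
# Compositing sketch of the two steps just described: (1) a matte separates
# foreground from background, (2) the foreground is recolored with a crude
# diffuse + specular relighting term before being pasted onto the new backdrop.
import numpy as np

H, W = 4, 4
rng = np.random.default_rng(0)
frame = rng.random((H, W, 3))            # original webcam frame
new_background = rng.random((H, W, 3))   # environment we want to pretend we're in

# Step 1: matting. Alpha is 1 where the person is, 0 elsewhere (toy values).
alpha = np.zeros((H, W, 1))
alpha[1:3, 1:3] = 1.0

# Step 2: relighting. We assume fixed diffuse/specular weights and a single
# dominant light color crudely estimated from the new background.
light_color = new_background.mean(axis=(0, 1))
diffuse_weight, specular_weight = 0.8, 0.2
relit_foreground = (frame * light_color * diffuse_weight
                    + light_color * specular_weight)   # flat specular highlight

composite = alpha * relit_foreground + (1.0 - alpha) * new_background
print("composite shape:", composite.shape)
```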
The new technique performs both steps, but most of the contributions lie in the relighting step. To be able to accomplish this, we have to be able to estimate the material properties of the subject. The technique has to know, one, where the diffuse parts are, these are the parts that don't change too much as the lighting changes, and two, where the specular parts are, in other words, shiny regions that reflect back the environment more clearly. Putting it all together, we get really high quality relighting for ourselves, and, given that this was developed by Google, I expect that this will supercharge our meetings quite soon. And just imagine what we will have two more papers down the line. My goodness! What a time to be alive! This video has been supported by Weights & Biases. Check out the recent offering, fully connected, a place where they bring machine learning practitioners together to share and discuss their ideas, learn from industry leaders, and even collaborate on projects together. You see, I get messages from you fellow scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is! Fully connected is a great way to learn about the fundamentals, how to reproduce experiments, get your papers accepted to a conference, and more. Make sure to visit them through wandb.me slash papers, or just click the link in the video description. Our thanks to Weights & Biases for their longstanding support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time!
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to generate robots with grammars. Wait a second, grammars of all things. What do grammars have to do with robots? Do we need to teach them grammar to speak correctly? No, no, of course not. To answer these questions, let's invoke the second law of papers, which says that whatever you are thinking about, there is already a Two Minute Papers episode on that. Even on grammars. Let's see if it applies here. In this earlier work, we talked about generating buildings with grammars. So, how does that work? Grammars are a set of rules that tell us how to build up a structure, such as a sentence, properly from small elements like nouns, adjectives, and so on. My friend Martin Ilčík loves to build buildings from grammars. For instance, a shape grammar for buildings can describe rules like: a wall can contain several windows. Below a window goes a windowsill. One wall may have at most two doors attached, and so on. A later paper also used a similar concept to generate tangle patterns. So, this grammar thing has some power in assembling things after all. So, can we apply this knowledge to build robots? First, the robots in this new paper are built up as a collection of these joint types, links, and wheels, which can come in all kinds of sizes and weights. Now, our question is, how do we assemble them in a way that they can traverse a given terrain effectively? Well, time for some experiments. Look at this robot. It has a lot of character and can deal with this terrain pretty well. Now, look at this poor thing. Someone in the lab at MIT had a super fun day with this one, I am sure. Now, this can sort of do the job, but now let's see the power of grammars and search algorithms in creating more optimized robots for a variety of terrains. First, a flat terrain. Let's see. Yes, now we're talking. This one is traversing at great speed. And this one works too. I like how it was able to find vastly different robot structures that both perform well here. Now, let's look at a little harder level with gapped terrain. Look. Oh, wow. Loving this. The algorithm recognized that a more rigid body is required to efficiently step through the gaps. And now, I wonder what happens if we add some ridges to the levels, so it cannot only step through the gaps, but has to climb. Let's see. And we get those long limbs that can indeed climb through the ridges. Excellent. Now, add the staircase and see who can climb these well. The algorithm says, well, someone with long arms and a somewhat elastic body. Let's challenge the algorithm some more. Let's add, for instance, a frozen lake. Who can traverse a flat surface that is really slippery? Does the algorithm know? Look, it says someone who can utilize a low friction surface by dragging itself through it. Or someone with many legs. Loving this. Now, this is way too much fun. So, let's do two more. What about a walled terrain example? What kind of robot would work there? One with a more elastic body, carefully designed to be able to curve sharply, enabling rapid direction changes. But it cannot be too long or else it would bang its head into the wall. This is indeed a carefully crafted specimen for this particular level. Now, of course, real-world situations often involve multiple kinds of terrains, not just one. And of course, the authors of this paper know that very well and also ask the algorithm to design a specimen that can traverse walled and ridged terrains really well.
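To show what "generating structures from a grammar" can look like in the simplest possible form, here is a Python sketch with a handful of made-up rewrite rules over robot parts. The actual paper uses a much richer graph grammar plus a search over the results; none of the rules below come from it.

```python
# Minimal sketch of generating structures from a grammar: rewrite rules expand
# a start symbol into a string of robot parts (links, joints, wheels). The rule
# set below is invented for illustration only.
import random

RULES = {
    "ROBOT": [["BODY", "LIMB", "LIMB"], ["BODY", "LIMB", "LIMB", "LIMB", "LIMB"]],
    "BODY":  [["link"], ["link", "joint", "link"]],
    "LIMB":  [["joint", "link", "wheel"], ["joint", "link", "joint", "link"]],
}

def expand(symbol, rng):
    """Recursively expand non-terminals until only concrete parts remain."""
    if symbol not in RULES:
        return [symbol]                       # terminal: an actual robot part
    production = rng.choice(RULES[symbol])
    parts = []
    for s in production:
        parts.extend(expand(s, rng))
    return parts

rng = random.Random(7)
for i in range(3):
    print(f"robot {i}:", " - ".join(expand("ROBOT", rng)))
```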
Make sure to have a look at the paper, which even shows graphs for robot archetypes that work on different terrains. It turns out one can even make claims about the optimality, which is a strong statement. I did not expect that at all. So, apparently, grammars are amazing at generating many kinds of complex structures, including robots. And note that this paper also has a follow-up work from the same group, where they took this a step further and made figure skating and break-dancing robots. What a time to be alive! The link is also available in the video description for that one. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|