Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to play with a cellular automaton and fuse it together with a neural network. It will be quite an experience. But of course, first, what are these things anyway? You can imagine a cellular automaton as a small game where we have a bunch of cells and a set of simple rules that describe when a cell should be full and when it should be empty. What you see here is a popular example called the Game of Life, which simulates a tiny world where each cell represents a life form. So, why is this so interesting? Well, this cellular automaton shows us that even a small set of simple rules can give rise to remarkably complex life forms, such as gliders, spaceships, and even John von Neumann's universal constructor, or in other words, self-replicating machines.

Now, it gets more interesting: a later paper fused the cellular automaton with a neural network. It was tasked to grow, and even better, maintain a prescribed shape. Remember these two words: grow and maintain shape. And the question was whether it can recover from undesirable states. Can it perhaps regenerate when damaged? Well, here you will see all kinds of damage, and then this happens. Nice! The best part is that this thing wasn't even trained to be able to perform this kind of regeneration. The objective for training was that it should be able to perform its task of growing and maintaining shape, and it turns out some sort of regeneration is included in that. This sounds very promising, and I wonder if we can apply this concept to something where healing is instrumental. Are there such applications in computer science? If so, what could those be? Oh yes, yes there are. For instance, think about texture synthesis. This is a topic that is subject to a great deal of research in computer graphics, and those folks have this down to a science. So, what are we doing here? Texture synthesis typically means that we need lots of concrete, gravel road, skin, or marble, or unique stripes for zebras, for instance for a computer game or the post-production of a movie, and we really don't want to draw miles and miles of these textures by hand. Instead, we give a small sample to an algorithm, where the output should be a bigger version of this pattern with the same characteristics. So, how do we know if we have a good technique at hand? Well, first, it must not be repetitive. Checkmark. And it has to be free of seams. This part means that we should not be able to see any lines or artifacts that would quickly give the trick away.

Now, get this: this new paper attempts to do the same with a neural cellular automaton. What an insane idea! We like those around here, so let's give it a try. How? Well, first by trying to expand this simple checkerboard pattern. The algorithm starts out from random noise and as it evolves, well, this is a disaster. We are looking for squares, but we have quadrilaterals. They are also misaligned, and they are also inconsistent. But luckily, we are not done yet, and now hold on to your papers and observe how the grid cells communicate with each other to improve the result. First, the misalignment is taken care of, then the quadrilaterals become squares, and then the consistency of displacement is improved. And the end result is, look. In this other example, we can not only see these beautiful bubbles grow out of nowhere, but the density of the bubbles also remains roughly the same throughout the process.
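To make the "small set of simple rules" idea from the Game of Life concrete, here is a minimal sketch of one update step in Python with NumPy. This is the textbook automaton mentioned above, not the neural cellular automaton from the paper.

```python
import numpy as np

def game_of_life_step(grid: np.ndarray) -> np.ndarray:
    """One update of Conway's Game of Life on a 2D array of 0s and 1s (toroidal wrap)."""
    # Count the live neighbors of every cell by summing the 8 shifted copies of the grid.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A live cell survives with 2 or 3 neighbors; a dead cell becomes alive with exactly 3.
    survive = (grid == 1) & ((neighbors == 2) | (neighbors == 3))
    born = (grid == 0) & (neighbors == 3)
    return (survive | born).astype(grid.dtype)

# Example: a glider on a 16x16 board, stepped a few times.
board = np.zeros((16, 16), dtype=np.uint8)
board[1, 2] = board[2, 3] = board[3, 1] = board[3, 2] = board[3, 3] = 1
for _ in range(4):
    board = game_of_life_step(board)
```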
Look, as two bubbles get too close to each other, they coalesce or pop. Damaged bubbles can also regrow. Very cool. Okay, it can do proper texture synthesis, but so can a ton of other handcrafted computer graphics algorithms. So, why is this interesting? Why bother with this? Well, first, you may think that the result of this technique is the same as that of other techniques, but it isn't. The output is not necessarily an image, but can be an animation too. Excellent. Here, it was also able to animate the checkerboard pattern, and even better, it can not only reproduce the weave pattern, but the animation part extends to this too. And now comes the even more interesting part. Let's ask: why does this output an animation and not an image? The answer lies within these weaving patterns. We just need to carefully observe them. Let's see. Yes, again, we start out from the noise, where some woven patterns emerge, and then it almost looks like a person started weaving them until the result resembles the initial sample. Yes, that is the key. The neural network learned to create not an image, not an animation, but no less than a computer program to accomplish this kind of texture synthesis. How cool is that?

So, armed with all that knowledge, do you remember the regenerating iguana project? Let's try to destroy these textures too, and see if it can use these computer programs to recover and get us a seamless texture. First, we delete parts of the texture, then it fills in the gap with noise, and now let's run that program. Wow! Resilient, self-healing texture synthesis. How cool is that? And in every case, it starts out from a solution that is completely wrong, improves it to be just kind of wrong, and after further improvement, there you go. Fantastic! What a time to be alive! And note that this is a paper in the wonderful Distill journal, which not only means that it is excellent, but also interactive. So, you can run many of these experiments yourself right in your web browser. The link is available in the video description.

This episode has been supported by Weights & Biases. In this post, they show you how to debug and compare models by tracking predictions, hyperparameters, GPU usage, and more. If you work with learning algorithms on a regular basis, make sure to check out Weights & Biases. Their system is designed to help you organize your experiments, and it is so good it could shave off weeks or even months of work from your projects, and it is completely free for all individuals, academics, and open source projects. This really is as good as it gets, and it is hardly a surprise that they are now used by over 200 companies and research institutions. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their longstanding support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
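For the curious, here is a rough sketch of what such a learned per-cell "program" can look like: a neural cellular automaton update step in PyTorch with fixed Sobel perception filters and a tiny shared network, iterated over a grid. The channel count and layer sizes are placeholders of my own, and the real texture model adds several ingredients (stochastic updates, specialized training losses) that are omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

CHANNELS = 12  # per-cell state size; an assumption, not the paper's exact number

class TextureCA(nn.Module):
    def __init__(self, channels: int = CHANNELS):
        super().__init__()
        # Fixed perception filters: identity plus Sobel x/y, applied to every channel.
        ident = torch.tensor([[0., 0., 0.], [0., 1., 0.], [0., 0., 0.]])
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]) / 8.0
        kernels = torch.stack([ident, sobel_x, sobel_x.t()])        # (3, 3, 3)
        kernels = kernels.repeat(channels, 1, 1).unsqueeze(1)        # (3*C, 1, 3, 3)
        self.register_buffer("kernels", kernels)
        # Tiny per-cell "rule" network, shared by all cells (1x1 convolutions).
        self.rule = nn.Sequential(
            nn.Conv2d(3 * channels, 96, kernel_size=1), nn.ReLU(),
            nn.Conv2d(96, channels, kernel_size=1),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # state: (batch, channels, height, width); each cell only sees its 3x3 neighborhood.
        perception = F.conv2d(state, self.kernels, padding=1, groups=state.shape[1])
        return state + self.rule(perception)  # residual update

# Running the learned rule: start from noise and iterate; damage recovery is just more iterations.
ca = TextureCA()
state = torch.rand(1, CHANNELS, 64, 64)
for _ in range(50):
    state = ca(state)
```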
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. This paper is called Enhancing Photorealism Enhancement. Hmm, let's try to unpack what that exactly means. This means that we take video footage from a game, for instance GTA V, which is an action game where the city we can play in was modeled after real places in California. Now, as we are living in the advent of neural network-based learning algorithms, we have a ton of training data at our disposal on the internet. For instance, the Cityscapes dataset contains images and videos taken in 50 real cities, and it also contains annotations that describe which object is which. And the authors of this paper looked at this and had an absolutely insane idea. And the idea is: let's learn on the Cityscapes dataset what cars, cities, and architecture look like, then take a piece of video footage from the game and translate it into a real movie. So, basically, something that is impossible. That is an insane idea, and when I read this paper, I thought that cannot possibly work in any case, but especially not given that the game takes place in California, and the Cityscapes dataset contains mostly footage of German cities. How would a learning algorithm pull that off? There is no way this will work.

Now, there are previous techniques that attempted this. Here you see a few of them. And, well, the realism is just not there, and there was an even bigger issue. And that is the lack of temporal coherence. This is the flickering that you see where the AI processes these images independently, and does not do that consistently. This quickly breaks the immersion and is typically a deal breaker. And now, hold on to your papers and let's have a look together at the new technique. Whoa! This is nothing like the previous ones. It renders the exact same place, the exact same cars, and the badges are still correct, and still refer to real-world brands. And that's not even the best part. Look! The car paint materials are significantly more realistic, something that is really difficult to capture in a real-time rendering engine. Lots of realistic-looking specular highlights off of something that feels like the real geometry of the car. Wow!

Now, as you see, most of the generated photorealistic images are dimmer and less saturated than the video game graphics. Why is that? This is because computer game engines often create a more stylized world, where the saturation, highlights, and bloom effects are often more pronounced. Let's try to fight this bias, where many people consider the more saturated images to be better, and focus our attention on the realism in these image pairs. While we are there, for reference, we can have a look at what the output would be if we didn't do any of the photorealistic magic, but instead just tried to breathe more life into the video game footage by transferring the color schemes from the real-world videos in the training set. So, only color transfer. Let's see, yes, that helps, until we compare the results with the photorealistic images synthesized by this new AI. Look! The trees don't look nearly as realistic as with the new method, and after we see the real roads, it's hard to settle for the synthetic ones from the game. However, no one said that Cityscapes is the only dataset we can use for this method. In fact, if we still find ourselves yearning for that saturated look, we can try to plug in a more stylized dataset and get this.
This is fantastic, because these images don't have many of the limitations of computer graphics rendering systems. Why is that? Because look at the grass here. In the game, it looks like a 2D texture to save resources and be able to render an image quicker. However, the new system can put more real-looking grass in there, which is a fully 3D object where every single blade of grass is considered. The most mind-blowing thing here is that this AI finally has enough generalization capability to learn about cities in Germany and still be able to make convincing photorealistic images for California. The algorithm never saw California, and yet it can recreate it from video game footage better than I ever imagined would be possible. That is mind-blowing. Unreal. And if you have been holding onto your paper so far, now squeeze that paper, because here we have one of those rare cases where we squeeze our papers not for a feature, but for a limitation of sorts. You see, there are limits to this technique too. For instance, since the AI was trained on the beautiful lush hills of Germany and Austria, it hasn't really seen the dry hills of LA. So, what does it do with them? Look, it redrew the hills the only way it saw hills exist, which is with trees. Now, we can think of this as a limitation, but also as an opportunity. Just imagine the amazing artistic effects we could achieve by playing this trick to our advantage. Also, we don't need to create an 80% photorealistic game like this one and push it up to 100% with the AI. We could draw not 80% but the bare minimum, maybe only 20% of the video game, a coarse draft, if you will, and let the AI do the heavy lifting. Imagine how much modeling time we could save for artists as well. I love this. What a time to be alive!

Now, all of this only makes sense for real-world use if it can run quickly. So, can it? How long do we have to wait to get such a photorealistic video? Do we have to wait for minutes to hours? No, the whole thing runs interactively, which means that it is already usable; we can plug this into the game as a post-processing step. And remember the first law of papers, which says that two more papers down the line and it will be even better. What improvements do you expect to happen soon? And what would you use this for? Let me know in the comments below.

PerceptiLabs is a visual API for TensorFlow carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers to easily install the free local version of their system today. Our thanks to PerceptiLabs for their support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
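As a very rough sketch of what "plugging it into the game as a post-processing step" could look like, here is a hypothetical per-frame loop in PyTorch. `load_enhancer`, `capture_game_frame`, and `display` are placeholders standing in for a trained image-to-image enhancement network and the game's frame hooks; they are not a real API from the paper.

```python
import torch

def load_enhancer(checkpoint_path: str) -> torch.nn.Module:
    # Placeholder: load a trained, scripted image-to-image enhancement network.
    model = torch.jit.load(checkpoint_path)
    model.eval()
    return model

@torch.no_grad()
def enhance_frame(model: torch.nn.Module, frame: torch.Tensor) -> torch.Tensor:
    """frame: (3, H, W) tensor in [0, 1]; returns the photorealism-enhanced frame."""
    x = frame.unsqueeze(0)          # add a batch dimension
    y = model(x).clamp(0.0, 1.0)    # run the enhancement network
    return y.squeeze(0)

# Hypothetical game loop: enhance every rendered frame before display.
# enhancer = load_enhancer("enhancer_scripted.pt")
# while game_is_running():
#     frame = capture_game_frame()           # (3, H, W) tensor from the rendering engine
#     display(enhance_frame(enhancer, frame))
```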
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. If you have been watching this series for a while, you know very well that I love learning algorithms and fluid simulations. But do you know what I like even better? Learning algorithms applied to fluid simulations, so I couldn't be happier with today's paper. We can create wondrous fluid simulations like the ones you see here by studying the laws of fluid motion from physics and writing a computer program that contains these laws. However, I just mentioned learning algorithms. How do these even come into the picture? If we can write a program that simulates the laws, why would we need learning-based algorithms? This doesn't seem to make any sense. You see, in this task, neural networks are applied to solve something that we already know how to solve. However, if we use a neural network to perform this task, we have to train it, which is a long and arduous process. I hope to have convinced you that this is a bad, bad idea. Why would anyone bother to do that? Does this make any sense? Well, it does make a lot of sense. And the reason for that is that this training step only has to be done once, and afterwards, querying the neural network, that is, predicting what happens next in the simulation, runs almost immediately. This takes way less time than calculating all the forces and pressures in the simulation, while retaining high-quality results.

This earlier work from last year absolutely nailed this problem. Look, this is a scene with the boxes it has been trained on. And now, let's ask it to try to simulate the evolution of significantly different shapes. Wow! It not only does well with these previously unseen shapes, but it also handles their interactions really well. But there was more. We could also train it on a tiny domain with only a few particles, and then it was able to learn general concepts that we can reuse to simulate a much bigger domain, and also with more particles. Fantastic! This was a simple general model that truly is a force to be reckoned with. Now, this was a great leap in neural network-based physics simulations, but of course, not everything was perfect there. For instance, over longer timeframes, solids became incorrectly deformed. And now, a newer iteration of a similar system just came out from DeepMind's research lab that promises to extend these neural networks to an incredible set of use cases. Aerodynamics, structural mechanics, cloth simulations, and more. I am very excited to see how far they have come since.

So, let's see how well it does, first with rollouts, then with generalization experiments. Here is the first rollout experiment. So what does that mean, and what are we seeing here? On the left, you see a verified handcrafted algorithm performing the simulation, which we will accept as the ground truth, and on the right, the AI is trying to continue the initial simulation. But there is one problem. And that problem is that the AI was only trained on short simulations with 400 timesteps, that's only a few seconds. And unfortunately, this test will be 100 times longer. So, it only learned on short simulations; can it manage to run a longer one and remain stable? Well, that will be tough, but so far so good. Still running. My goodness, this is really something. Still running, and it's very close to the ground truth. Okay, that is fantastic, but that was just a piece of cloth. What about interactions with other objects?
Well, let's see. I'll stop the process here and there so we can inspect the differences. Again, flying colors, loving it. And apparently, the same can be said for aerodynamic simulations, structural mechanics, and incompressible fluid dynamics. Now, there is one more important lesson here: to be able to solve such a wide variety of simulation problems, we normally need a bunch of different handcrafted algorithms that took many, many years to develop. But this one neural network can learn and perform them all, and it can do it 10 to 100 times quicker. And now comes the second half, the generalization experiments. This means a simulation scenario with shapes that the algorithm has never seen before. And let's see if it obtained general knowledge of the underlying laws of physics to be able to pull this off. Oh my, look at that. Even the tiny piece that is hanging off of the flag is simulated nearly perfectly. In this one, they gave it different wind speeds and directions that it hadn't seen before, and not only that, but we are varying these parameters in time, and it doesn't even break a sweat. And hold onto your papers, because here comes my favorite. It can even learn on a small-scale simulation with a simple rectangular flag. And now we throw at it a much more detailed cylindrical flag with tassels. Surely this will be way beyond what any learning algorithm can do today. And okay, come on. I am truly out of words. Look. So now this is official. We can ask an AI to perform something that we already know how to do, and it will not only be able to reproduce similar simulations, but we can even ask for things that are quite unreasonably outside of what it had seen, and it handles all of these with flying colors. And it does this much better than previous techniques were able to. And it can learn from multiple different algorithms at the same time. Wow! What a time to be alive!

This video has been supported by Weights & Biases. Check out their recent offering, Fully Connected, a place where they bring machine learning practitioners together to share and discuss their ideas, learn from industry leaders, and even collaborate on projects together. You see, I get messages from you fellow scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is. Fully Connected is a great way to learn about the fundamentals, how to reproduce experiments, get your papers accepted to a conference, and more. Make sure to visit them through wandb.me slash papers, or just click the link in the video description. Our thanks to Weights & Biases for their longstanding support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
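To make the "train once, then just predict the next state" idea from this episode concrete, here is a minimal autoregressive rollout loop in Python; `learned_simulator` is a stand-in for any trained one-step predictor (such as a graph network), not DeepMind's actual model or API, and the toy predictor below exists only to make the sketch runnable.

```python
import numpy as np

def rollout(learned_simulator, initial_state: np.ndarray, num_steps: int) -> list:
    """Feed the model's own predictions back in to simulate far beyond the training horizon."""
    trajectory = [initial_state]
    state = initial_state
    for _ in range(num_steps):
        state = learned_simulator(state)   # one cheap forward pass instead of a full solver step
        trajectory.append(state)
    return trajectory

# Toy stand-in for a trained one-step predictor: damped motion of a few particles under gravity.
def toy_predictor(state: np.ndarray) -> np.ndarray:
    positions, velocities = state[:, :3], state[:, 3:]
    velocities = 0.99 * velocities + np.array([0.0, -9.81, 0.0]) * 1e-3
    return np.hstack([positions + velocities * 1e-3, velocities])

# Trained on 400-step clips, rolled out for 100x that length.
states = rollout(toy_predictor, np.random.rand(20, 6), num_steps=40_000)
```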
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Through the power of computer graphics research works, today it is possible to simulate honey coiling, water flow with debris, or to even get a neural network to look at these simulations and learn how to continue them. Now, if we look under the hood, we see that not all, but many of these simulations contain particles. And our task is to simulate the pressure, velocity, and other physical quantities for these particles, and create a surface where we can watch the evolution of their movement. Once again, the simulations are typically based on particles. But not this new technique. Look, it takes a coarse simulation. Well, this one is not too exciting. So why are we looking at this? Well, look. Whoa! The new method can add these crisp, high-frequency details to it. And the result is an absolutely beautiful simulation. And it does not use millions and millions of particles to get this done. In fact, it does not use any particles at all. Instead, it uses wave curves. These are curve-shaped wave packets that can enrich a coarse simulation and improve it a great deal to create a really detailed, crisp output. And it gets even better, because these wave curves can be applied as a simple post-processing step. What this means is that the workflow that you saw here really works like that. When we have a coarse simulation that is already done and we are not happy with it, with many other existing methods, it is time to simulate the whole thing again from scratch, but not here. With this one, we can just add all this detail to an already existing simulation. Wow! I love it! Note that the surface of the fluid is made opaque so that we can get a better view of the waves. Of course, the final simulations that we get for production use are transparent, like the one you see here.

Now, another interesting detail is that the execution time is linear with respect to the number of curve points. So, what does that mean? Well, let's have a look together. In the first scenario, we get a low-quality underlying simulation and we add a hundred thousand wave curves. This takes approximately ten seconds and looks like this. This already greatly enhanced the quality of the results, but we can decide to add more. So, first case: a hundred thousand wave curves in ten-ish seconds. Now comes the linear part. If we decide that we are yearning for a little more, we can run two hundred thousand wave curves, and the execution time will be twenty-ish seconds. It looks like this. Better, we are getting there. And for four hundred thousand wave curves, forty seconds, and for eight hundred thousand curves, yes, you guessed it right, eighty seconds. Double the number of curves, double the execution time. And this is what the linear scaling part means. Now, of course, not even this technique is perfect; the post-processing nature of the method means that it can enrich the underlying simulation a great deal, but it cannot add changes that are too intrusive to it. It can only add small waves relative to the size of the fluid domain. But even with these, the value proposition of this paper is just out of this world. So, from now on, if we have a relatively poor-quality fluid simulation that we abandoned years ago, we don't need to despair; what we need is to harness the power of wave curves.

This episode has been supported by Weights & Biases. In this post, they show you how to monitor and optimize your GPU consumption during model training in real time with one line of code.
During my PhD studies, I trained a ton of neural networks which were used in our experiments. However, over time, there was just too much data in our repositories, and what I am looking for is not data, but insight. And that's exactly how Weights & Biases helps you, by organizing your experiments. It is used by more than 200 companies and research institutions, including OpenAI, Toyota Research, GitHub, and more. And get this, Weights & Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com slash papers or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
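As a toy illustration of the wave-curve idea from this episode, here is a one-dimensional analogue in NumPy: we take an existing coarse "surface" signal and enrich it, purely as a post-process, with small windowed wave packets whose cost grows linearly with their count. This only sketches the general flavor, not the paper's actual wave-curve formulation.

```python
import numpy as np

def enrich_with_wave_packets(coarse: np.ndarray, num_packets: int, seed=None) -> np.ndarray:
    """Add small, localized wave packets on top of an existing coarse signal (post-processing only)."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, coarse.size)
    detail = np.zeros_like(coarse)
    for _ in range(num_packets):                   # cost grows linearly with the packet count
        center = rng.uniform(0.0, 1.0)             # where the packet lives
        width = rng.uniform(0.01, 0.05)            # how localized it is
        freq = rng.uniform(40.0, 120.0)            # high-frequency ripple
        amp = 0.02 * np.ptp(coarse)                # small relative to the coarse signal
        envelope = np.exp(-0.5 * ((x - center) / width) ** 2)
        detail += amp * envelope * np.sin(2.0 * np.pi * freq * (x - center))
    return coarse + detail

coarse_surface = np.sin(np.linspace(0.0, 2.0 * np.pi, 2048))   # stand-in for a coarse simulation
detailed_surface = enrich_with_wave_packets(coarse_surface, num_packets=200, seed=0)
```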
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to look at a paper from three years ago, and not any kind of paper, but my kind of paper, which is in the intersection of machine learning, computer graphics, and physics simulations. This work zooms in on reproducing reference motions, but with a twist, and adds lots of additional features. So, what does all this mean? You see, we are given this virtual character, a reference motion that we wish to teach it, and here, additionally, we are given a task that needs to be done. So, when the reference motion is specified, we place our AI into a physics simulation, where it tries to reproduce these motions. That is a good thing, because if it were to try to learn to run by itself, it would look something like this. And if we ask it to mimic the reference motion, oh yes, much better. Now that we have built up confidence in this technique, let's think bigger and perform a backflip. Uh-oh, well, that didn't quite work. Why is that? We just established that we can give it a reference motion, and it can learn it by itself. Well, this chap failed to learn a backflip because it explored many motions during training, most of which resulted in failure. So, it didn't find a good solution and settled for a mediocre solution instead. A proposed technique by the name of reference state initialization, RSI in short, remedies this issue by letting the agent explore better during the training phase. Got it, so we add this RSI, and now all is well, right? Let's see. Ouch. Not so much. It appears to fall on the ground and tries to continue the motion from there. A plus for effort, little AI, but unfortunately, that's not what we are looking for. So, what is the issue here? The issue is that the agent has hit the ground, and after that, it still tries to score some additional points by continuing to mimic the reference motion. Again, A plus for effort, but this should not give the agent additional scores. The remedy for this is called early termination. Let's try it. Now, we add early termination and RSI together, and let's see if this will do the trick. And yes, finally, with these two additions, it can now perform that sweet, sweet backflip, rolls, and much, much more with flying colors.

So now, the agent has the basics down and can even perform explosive, dynamic motions as well. So, it is time. Now hold onto your papers, as now comes the coolest part: we can perform different kinds of retargeting as well. What is that? Well, one kind is retargeting the environment. This means that we can teach the AI a landing motion in an idealized case, and then ask it to perform the same, but now off of a tall ledge. Or we can teach it to run, and then drop it into a computer game level and see if it performs well there. And it really does. Amazing. This part is very important, because in any reasonable industry use, these characters have to perform in a variety of environments that are different from the training environment. The second kind is retargeting not the environment, but the body type. We can have different types of characters learn the same motions. This is pretty nice for the Atlas robot, which has a drastically different weight distribution, and you can also see that the technique is robust against perturbations. Yes, this means one of the favorite pastimes of a computer graphics researcher, which is throwing boxes at virtual characters and seeing how well they can take it.
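Here is a minimal sketch of how the two ideas above, reference state initialization and early termination, could slot into an imitation-learning episode. The environment, policy, and reference-motion interfaces are hypothetical placeholders, and the reward is heavily simplified compared to the actual method.

```python
import random

def pose_distance(pose_a, pose_b):
    # Placeholder distance between two poses, e.g. summed squared joint differences.
    return sum((a - b) ** 2 for a, b in zip(pose_a, pose_b))

def train_episode(env, policy, reference_motion, fall_height=0.3, max_steps=600):
    """One imitation-learning episode with RSI and early termination (hypothetical interfaces)."""
    # Reference state initialization: start from a random phase of the reference motion,
    # so hard-to-reach parts (mid-backflip, mid-roll) are also explored during training.
    phase = random.random()
    state = env.reset(pose=reference_motion.pose_at(phase))

    total_reward, transitions = 0.0, []
    for _ in range(max_steps):
        action = policy.act(state)
        next_state = env.step(action)
        phase = (phase + 1.0 / max_steps) % 1.0

        # Simplified imitation reward: match the reference pose at the current phase.
        reward = -pose_distance(next_state.pose, reference_motion.pose_at(phase))

        # Early termination: once the character has fallen, stop the episode so it
        # cannot keep collecting reward while lying on the ground.
        done = next_state.pelvis_height < fall_height
        transitions.append((state, action, reward, next_state, done))
        total_reward += reward
        state = next_state
        if done:
            break
    return transitions, total_reward
```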
Might as well make use of the fact that in a simulated world, we make up all the rules. This one is doing really well. Oh. Note that the Atlas robot is indeed different from the previous model, and these motions can be retargeted to it; however, this is also a humanoid. Can we ask for non-humanoids as well, perhaps? Oh, yes. This technique supports retargeting to T-rexes, dragons, lions, you name it. It can even get used to the gravity of different virtual planets that we dream up. Bravo. So the value proposition of this paper is completely out of this world. Reference state initialization, early termination, retargeting to different body types and environments, oh my. To have digital applications like computer games use this would already be amazing, and just imagine what we could do if we could deploy these to real-world robots. And don't forget, these research works just keep on improving every year. The first law of papers says that research is a process. Do not look at where we are, look at where we will be two more papers down the line. Now, fortunately, we can do that right now. Why is that? It is because this paper is from 2018, which means that follow-up papers already exist. What's more, we even discussed one that teaches these agents to not only reproduce these reference motions, but to do so with style. Style there meant that the agent is allowed to make creative deviations from the reference motion, thus developing its own way of doing it. An amazing improvement. And I wonder what researchers will come up with in the near future. If you have some ideas, let me know in the comments below. What a time to be alive!

This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. This incredible paper promises procedural and physically-based rendering of glittering surfaces. Whoa! Okay, I am sold, provided that it does as well as advertised. We shall see about that. Oh goodness, the results look lovely. And now, while we marvel at these, let's discuss what these terms really mean. One, physically-based means that it is built on a foundation that is based in physics. That is not surprising, as the name says. The surprise usually comes when people use the term without careful consideration. You see, light transport researchers take this term very seriously: if you claim that your model is physically-based, you had better bring your A-game. We will inspect that. Two, the procedural part means that we can algorithmically generate many of these material models ourselves. For instance, this earlier paper was able to procedurally generate the geometry of climbing plants and simulate their growth. Or procedural generation can come to the rescue when we are creating a virtual environment and we need hundreds of different flowers, thousands of blades of grass, and more. This is an actual example of procedural geometry that was created with the wonderful Terragen program. Three, rendering means a computer program that we run on our machine at home that creates a beautiful series of images like these ones. In the case of photorealistic rendering, we typically need to wait from minutes to hours for every single image.

So, how fast is this technique? Well, hold on to your papers, because we don't need to wait from minutes to hours for every image. Instead, we only need two and a half milliseconds per frame. Absolute witchcraft. The fact that we can do this in real time blows my mind, and yes, this means that we can test the procedural part in an interactive demo too. Here we can play with a set of parameters. Look, the roughness of the surface is not set in stone, we can specify it and watch the material change in real time. We can also change the density of microfacets. These are tiny imperfections in the surface that make these materials really come alive. And if we make the microfacet density much larger, the material becomes more diffuse. And if we make them really small, oh, loving this. So, as much as I love this, I would also like to know how accurate it is. For reference, here is a result from a previous technique that is really accurate. However, this is one of the methods that takes from minutes to hours for just one image. And here is the other end of the spectrum. This is a different technique that is lower in quality. However, in return, it can produce these in real time. So, which one is the new one closer to? The accurate, slow one, or the less accurate, quick one? What? Its quality is as good as the accurate one, and it also runs in real time. The best of both worlds. Wow! Now, of course, not even this technique is perfect. The fact that this particular example worked so well is great, but it doesn't always come out so well. And I know, I know, you're asking, can we try this new method? And the answer is a resounding yes, you can try it right now in multiple places. There is a web demo. And it was even implemented in Shadertoy. So, now we know what it means to render procedural and physically-based glittering surfaces in real time. Absolutely incredible! What a time to be alive!
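To get a feel for what a discrete microfacet glint model computes, here is a heavily simplified toy in NumPy: for one pixel footprint we give each facet a random normal and count how many reflect the light toward the camera, with the roughness and facet-density knobs from the demo above. This is only an illustration of the idea, nothing like the paper's actual real-time method.

```python
import numpy as np

def toy_glint_intensity(half_vector, roughness, facet_density, footprint_size=1.0, seed=0):
    """Count the sparkling microfacets inside one pixel footprint (toy discrete glint model)."""
    rng = np.random.default_rng(seed)                 # stands in for a spatial hash of the surface
    num_facets = max(1, int(facet_density * footprint_size ** 2))

    # Each facet gets a random normal around (0, 0, 1); roughness widens the distribution.
    offsets = rng.normal(scale=roughness, size=(num_facets, 2))
    normals = np.column_stack([offsets, np.ones(num_facets)])
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)

    # A facet sparkles if its normal is closely aligned with the half vector.
    h = np.asarray(half_vector, dtype=float)
    h /= np.linalg.norm(h)
    sparkling = np.count_nonzero(normals @ h > 0.999)

    # Normalize so that adding more facets makes the look smoother, not brighter.
    return sparkling / num_facets

# Rough, dense surface -> looks more diffuse; smooth, sparse surface -> strong individual glints.
print(toy_glint_intensity((0.05, 0.0, 1.0), roughness=0.3, facet_density=1e5))
print(toy_glint_intensity((0.05, 0.0, 1.0), roughness=0.05, facet_density=200))
```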
And if you enjoyed this episode, we may have two more incredible glint rendering papers coming up in the near future. This... and this. Let me know in the comments below if you would like to see them. Write something like, yes please. This video has been supported by Weights & Biases. They have an amazing podcast by the name of Gradient Dissent, where they interview machine learning experts who discuss how they use learning-based algorithms to solve real-world problems. They've discussed biology, teaching robots, machine learning in outer space, and a whole lot more. Perfect for a fellow scholar with an open mind. Make sure to visit them through wandb.me slash gd or just click the link in the video description. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to generate human faces, and even better, we will keep them intact. You will see what that means in a moment. This new neural network-based technique can dream up completely new images and more. However, this is not the first technique to do this, but this one does it better. Let's look at three amazing features that it offers and then discuss how and why it is better than its predecessors. Hold on to your papers for the first example, which is my favorite: image toonification. Would you like to see what the AI thinks you would look like if you were a Disney character? Well, here you go. And these are not some rudimentary, first-paper-in-the-works kind of results. These are proper toonifications. You could ask for money for some of these, and they are done completely automatically by a learning algorithm. At the end of the video, you will also witness as I myself get toonified. And what is even cooler is that we can not only produce these still images, but even compute intermediate images between two input photos and get meaningful results. I'll stop the process here and there to show you how good these are. I am blown away. Two, it can also perform the usual suspects. For instance, it can make us older or younger, or put a smile on our face too. However, three, it works not only on human faces, but on cars, animals, and buildings too.

So, the results are all great, but how does all this wizardry happen? Well, we take an image and embed it into a latent space, and in this space we can easily apply modifications. Okay, but what is this latent space thing? A latent space is a made-up place where we are trying to organize data in a way that similar things are close to each other. What you see here is a 2D latent space for generating different fonts. It is hard to explain why these fonts are similar, but most of us would agree that they indeed share some common properties. The cool thing here is that we can explore this latent space with our cursor and generate all kinds of new fonts. You can try this work in your browser. The link is available in the video description. And, luckily, we can build a latent space not only for fonts, but for nearly anything. I am a light transport researcher by trade, so in this earlier paper we were interested in generating hundreds of variants of a material model to populate this scene. In this latent space, we can concoct all of these really cool digital material models. A link to this work is also available in the video description.

Now, for the face generator algorithms, this embedding step is typically imperfect, which means that we might lose some information during the process. In the better cases, things may look a little different, but that is not even the worst-case scenario. I'll show you that in a moment. For the milder case, here is an earlier example from a paper by the name of StyleFlow, where the authors embedded me into a latent space, and it indeed came out a little different. But not so bad. A later work, StyleCLIP, was able to make me look like Obi-Wan Kenobi, which is excellent. However, the embedding step was more imperfect. The bearded image was embedded like this. You are probably saying that this looks different, but even this is not so bad. If you want to see a much worse example, look at this. My goodness, now this is quite different. Now that we saw what it can do, it is time to ask the big question: how much better is it than previous works?
Do we have an A/B test for that? And the answer is yes, of course. Let's embed this gentleman and see how he comes out on the other end. Well, without the improvements of this paper, once again, quite different. The beard is nearly gone, and when we toonify the image, let's see, yep, that beard is gone for good. So, can this paper get that beard back? Let's see. Oh yes, if we refine the embedding with this new method, we get that glorious beard back. That is one heck of a toonified image. Congratulations, loving it. And now it's my turn. One of the results was amazing. I really like this one. And how about this? Well, not bad. And I wonder if it can deal with sunglasses. Well, kind of, but not in the way you might think. What do you think? Let me know in the comments below. Note that you can only see these results here on Two Minute Papers, and a big thank you to the authors for taking time off their busy day and doing these experiments for us. And here are a few more tests. Let's see how it fares with these. The inputs are a diverse set of images from different classes, and the initial embeddings are, well, a bit of a disappointment. But that is kind of the point, because this new technique does not stop there and iteratively improves them. Yes, getting better. And by the end, my goodness, very close to the input. Don't forget that the goal here is not to implement a copying machine. The key difference is that we can't do too much with the input image. But after the embedding step, we can do all this toonification and other kinds of magic with it, and the results are only relevant as long as the two images are close. And they are really close. Bravo. So good. So I hope that now you agree that the pace of progress in machine learning and synthetic image generation is absolutely incredible. What a time to be alive!

This episode has been supported by Weights & Biases. In this post, they show you how to use their reports to explain how your model works, show plots of how model versions improved, discuss bugs, and demonstrate progress towards milestones. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs, such as OpenAI, Toyota Research, GitHub, and more. And the best part is that Weights & Biases is free for all individuals, academics, and open source projects. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
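The "refine the embedding" step mentioned above is commonly done by optimizing a latent code until the generator reproduces the photo. Here is a minimal sketch of that idea in PyTorch; `generator` is a placeholder for a pretrained StyleGAN-like network, and the plain L2 loss stands in for the more elaborate losses that real inversion methods use.

```python
import torch

def invert_image(generator, target, latent_dim=512, steps=500, lr=0.01):
    """Find a latent code whose generated image matches the target photo (toy GAN inversion).

    target: (1, 3, H, W) tensor in the generator's output range.
    """
    latent = torch.randn(1, latent_dim, requires_grad=True)   # start from a random embedding
    optimizer = torch.optim.Adam([latent], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        reconstruction = generator(latent)                    # decode the current embedding
        loss = torch.mean((reconstruction - target) ** 2)     # how far are we from the photo?
        loss.backward()
        optimizer.step()                                      # iteratively refine the embedding

    return latent.detach()

# Once inverted, edits (toonification, aging, and so on) are applied to this latent code,
# and the edited code is decoded back into an image with the same generator.
```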
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we will see how this AI technique can help our virtual characters not only learn new movements, but even perform them with style. Now, here you see a piece of reference motion. This is what we would like our virtual character to learn. The task is to then enter a physics simulation, where we try to find the correct joint angles and movements to perform that. Of course, this is already a challenge, because even a small difference in joint positions can make a great deal of difference in the output. Then, the second, more difficult task is to do this with style. No two people perform a cartwheel exactly the same way, so would it be possible to have our virtual characters imbued with style, so that they, much like people, would have their own kind of movement? Is that possible somehow? Well, let's have a look at the simulated characters. Nice, so this chap surely learned to at the very least reproduce the reference motion, but let's stop the footage here and there and look for differences. Oh yes, this is indeed different. This virtual character indeed has its own style, but at the same time, it is still faithful to the original reference motion. This is a magnificent solution to a very difficult task, and the authors made it look deceptively easy, but you will see in a moment that this is really challenging.

So, how does all this magic happen? How do we imbue these virtual characters with style? Well, let's define style as creative deviation from the reference motion, so it can be different, but not too different, or else this happens. So, what are we seeing here? Here, with green, you see the algorithm's estimation of the center of mass for this character, and our goal would be to reproduce that as faithfully as possible. That would be the copying machine solution. Here comes the key for style, and that key is using space-time bounds. This means that the center of mass of the character can deviate from the original, but only as long as it remains strictly within these boundaries, and that is where the style emerges. If we wish to add a little style to the equation, we can set relatively loose space-time bounds around it, leaving room for the AI to explore. If we wish to strictly reproduce the reference motion, we can set the bounds to be really tight instead. This is a great technique to learn running, jumping, and rolling behaviors, and it can even perform a stylish cartwheel, and backflips. Oh yeah, loving it! These space-time bounds also help us retarget the motion to different virtual body types. They also help us salvage really bad-quality reference motions and make something useful out of them.

So, are we done here? Is that all? No, not in the slightest. Now, hold on to your papers, because here comes the best part. With these novel space-time bounds, we can specify additional stylistic choices for the character's moves. For instance, we can encourage the character to use more energy for a more intense dancing sequence, or we can make it sleepier by asking it to decrease its energy use. And I wonder, if we can put bounds on the energy use, can we do more? For instance, do the same with body volume use. Oh yeah, this really opens up new kinds of motions that I haven't seen virtual characters perform yet. For instance, this chap was encouraged to use its entire body volume for a walk, and thus looks like someone who is clearly looking for trouble.
And this poor thing just finished their paper for a conference deadline and is barely alive. We can even mix multiple motions together. For instance, what could be a combination of a regular running sequence and a band walk? Well, this. And if we have a standard running sequence and a happy walk, we can fuse them into a happy running sequence. How cool is that? So, with this technique, we can finally not only teach virtual characters to perform nearly any kind of reference motion, but we can even ask them to do this with style. What an incredible idea, loving it. Now, before we go, I would like to show you a short message that we got that melted my heart. This one I got from Nathan, who has been inspired by these incredible works, and he decided to turn his life around and go back to study more. I love my job, and reading messages like this is one of the absolute best parts of it. Congratulations, Nathan. Thank you so much and good luck. If you feel that you have a similar story with this video series, make sure to let us know in the comments.

This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
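As a tiny sketch of the space-time bounds idea from this episode, here is how a bound check on the character's center of mass might look in NumPy. The bound width acts as the style knob: tight bounds force faithful reproduction, loose bounds leave room for stylistic deviation. The trajectories here are toy data and the check is illustrative, not the paper's actual formulation.

```python
import numpy as np

def within_spacetime_bounds(com_simulated, com_reference, bound_width):
    """True while the simulated center of mass stays inside a box around the reference trajectory.

    com_simulated, com_reference: (T, 3) arrays of center-of-mass positions over time.
    bound_width: per-axis half-width of the allowed deviation; small = faithful, large = stylized.
    """
    deviation = np.abs(com_simulated - com_reference)
    return bool(np.all(deviation <= bound_width))

# Toy reference and simulated trajectories.
com_ref = np.cumsum(np.full((100, 3), 0.01), axis=0)
com_sim = com_ref + 0.05 * np.random.randn(100, 3)

# Tight bounds: reproduce the reference motion almost exactly.
faithful = within_spacetime_bounds(com_sim, com_ref, np.array([0.02, 0.02, 0.02]))
# Loose bounds: the character may drift and develop its own style, but not wander off entirely.
stylized = within_spacetime_bounds(com_sim, com_ref, np.array([0.25, 0.15, 0.25]))
```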
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today's paper is about creating synthetic human faces, and not only that, but it can also make me look like Obi-Wan Kenobi. You will see the rest of this footage in a few minutes. Now, of course, this is not the first paper to generate artificial human faces. For instance, in December 2019, a technique by the name of StyleGAN2 was published. This is a neural network-based learning algorithm that is capable of synthesizing these eye-poppingly detailed images of human beings that don't even exist. This work answered some questions and, as any good paper, raised many more good ones. For instance, generating images of virtual humans is fine, but what if the results are not exactly what we are looking for? Can we have some artistic control over the outputs? How do we even tell the AI what we are looking for? Well, we are in luck, because StyleGAN2 offers somewhat rudimentary control over the outputs, where we can give it input images of two people and fuse them together. Now, that is absolutely amazing, but I wonder if we can ask for a little more. Can we get even more granular control over these images? What if we could just type in what we are looking for, and somehow the AI would understand and execute our wishes? Is that possible, or is that science fiction? Well, hold on to your papers and let's see.

This new technique works as follows. We type what aspect of the input image we wish to change and what the change should be. Wow, really cool. And we can even play with these sliders to adjust the magnitude of the changes as well. This means that we can give someone a new hairstyle, add or remove makeup, or give them some wrinkles for good measure. Now, the original StyleGAN2 method worked not only on humans, but on a multitude of different classes too. And the new technique also inherits this property. Look, we can even design new car shapes, make them a little sportier, or make our adorable cat even cuter. For some definition of cuter, of course. We can even make their hair longer, or change their colors, and the results are of super high quality. Absolutely stunning. While we are enjoying some more results here, make sure to have a look at the paper in the video description. And if you do, you will find that we really just scratched the surface here. For instance, it can even add clouds to the background of an image, or redesign the architecture of buildings, and much, much more. There are also comparisons against previous methods in there, showcasing the improvements of the new method.

And now, let's experiment a little on me. Look, this is me here, after I got locked up for dropping my papers. And I spent so long in there that I grew a beard. Or, I mean, a previous learning-based AI called StyleFlow gave me one. And since dropping your papers is a serious crime, the sentence is long. Quite long. Ouch, I hereby promise to never drop my papers ever again. So, now let's try to move forward with this image and give it to this new algorithm for some additional work. This is the original. And by original, I mean the image with the added algorithmic beard from a previous AI. And this is the embedded version of the image. This image looks a little different. Why is that? It is because StyleGAN2 runs an embedding operation on the photo before starting its work. This is its own internal understanding of my image, if you will.
This is great information and is something that we can only experience if we have hands-on experience with the algorithm. And now let's use this new technique to apply some more magic to this image. This is where the goodness happens. And, oh my, it does not disappoint. You see, it can gradually transform me into Obi-Wan Kenobi, an elegant algorithm for a more civilized age. But that's not all. It can also create a ginger Károly, hippie Károly, a Károly who found a shiny new paper on fluid simulations, and a Károly who read said paper outside for quite a while and was perhaps disappointed. And now hold on to your papers, and please welcome Dr. Karolina Zsolnai-Fehér. And one more, I apologize in advance, rockstar Károly with a mohawk. How cool is that? I would like to send a huge thank you to the authors for taking time out of their workday to create these images only for us. You can really only see this here on Two Minute Papers. And as you see, the pace of progress in machine learning research is absolutely stunning. And with this, the limit of our artistic workflow is not going to be our mechanical skills, but only our imagination. What a time to be alive!

This video has been supported by Weights & Biases. Check out their recent offering, Fully Connected, a place where they bring machine learning practitioners together to share and discuss their ideas, learn from industry leaders, and even collaborate on projects together. You see, I get messages from you fellow scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is. Fully Connected is a great way to learn about the fundamentals, how to reproduce experiments, get your papers accepted to a conference, and more. Make sure to visit them through wandb.me slash papers or just click the link in the video description. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
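A common way to implement the "type a change, then adjust a slider" workflow described above is to move the embedded latent code along an edit direction, with the slider controlling how far to move. Here is a minimal sketch; `generator` and the edit direction are placeholders (the direction could, for instance, come from text guidance), not the paper's actual API.

```python
import torch

@torch.no_grad()
def apply_edit(generator, latent, edit_direction, strength):
    """Decode latent + strength * direction; `strength` plays the role of the slider."""
    direction = edit_direction / edit_direction.norm()   # keep the step size interpretable
    edited_latent = latent + strength * direction
    return generator(edited_latent)

# Sweep the slider to get a gradual transformation (for instance, plain Karoly -> bearded Jedi).
# images = [apply_edit(generator, latent, beard_direction, s) for s in torch.linspace(0.0, 3.0, 8)]
```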
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Imagine that you are a film critic and you are recording a video review of a movie, but unfortunately, you are not the best kind of movie critic, and you record it before watching the movie. But here is the problem: you don't really know if it's going to be any good. So you record this. "I'm gonna give Hereditary a B minus." So far so good. Nothing too crazy going on here. However, you go in, watch the movie, and it turns out to be amazing. So what do we do if we don't have time to re-record the video? Well, we grab this AI, type in the new text, and it will give us this. "I'm gonna give Hereditary an A plus." Whoa! What just happened? What kind of black magic is this? Well, let's look behind the person. On the blackboard you see some delicious partial derivatives. And I am starting to think that this person is not a movie critic. And of course he isn't, because this is Yoshua Bengio, a legendary machine learning researcher. And this was an introduction video where he says this. And what happened is that it has been repurposed by this new deepfake generator AI, where we can type in anything we wish, and out comes a near-perfect result. It synthesizes both the video and audio content for us. But we are not quite done yet. Something is missing. If the movie gets an A plus, the gestures of the subject also have to reflect that this is a favorable review. So what do we do? Maybe add a smile there. Is that possible? "I am going to give Hereditary an A plus." Oh yes, there we go. Amazing. Let's have a closer look at one more example, where we can see how easily we can drop in new text with this editor. "Why you don't worry about city items? Marvel movies are not cinema."

Now, this is not the first method performing this task. Previous techniques typically required hours and hours of video of a target subject. So how much training data does this require to perform all this? Well, let's have a look together. Look, this is not the same footage copy-pasted three times. This is the synthesized video output if we have 10 minutes of video data from the test subject. This one looks nearly as good, with fewer sharp details, but in return, it requires only two and a half minutes. And here comes the best part. If you look here, you may be able to see the difference. And if you have been holding onto your paper so far, now squeeze that paper, because synthesizing this only required 30 seconds of video footage of the target subject. My goodness, but we are not nearly done yet. It can do more. For instance, it can tone up or down the intensity of gestures to match the tone of what is being said. Look. So how does this wizardry happen? Well, this new technique improves two things really well. One is that it can search for phonemes and other units better. Here is an example: we crossed out the word spider and we wish to use the word fox instead, and it tries to assemble this word from previous occurrences of individual sounds. For instance, the "ox" part is available when the test subject utters the word box. And two, it can stitch them together better than previous methods. And surely, this means that since it needs less data, the synthesis must take a great deal longer. Right? No, not at all. The synthesis part only takes 40 seconds. And even if it couldn't do this so quickly, the performance control aspect, where we can tone the gestures up or down or add a smile, would still be an amazing selling point in and of itself.
But no, it does all of these things quickly and with high quality at the same time. Wow. I now invite you to look at the results carefully and give them a hard time. Did you find anything out of the ordinary? Did you find this believable? Let me know in the comments below. The authors of the paper also conducted a user study with 110 participants, who were asked to look at 25 videos and say which ones they felt were real. The results showed that the new technique outperforms previous techniques even if they have access to 12 times more training data. Which is absolutely amazing, but what is even better, the longer the video clips were, the better this method fared. What a time to be alive! Now, of course, beyond the many amazing use cases of deepfakes in reviving deceased actors, creating beautiful visual art, redubbing movies, and more, we have to be vigilant about the fact that they can also be used for nefarious purposes. The goal of this video is to let you and the public know that these deepfakes can now be created quickly and inexpensively, and they don't require a trained scientist anymore. If this can be done, it is of utmost importance that we all know about it. And beyond that, whenever they invite me, I inform key political and military decision makers about the existence and details of these techniques to make sure that they also know about these, and using that knowledge, they can make better decisions for us. You can see me doing that here. Note that these talks and consultations all happen free of charge, and if they keep inviting me, I'll keep showing up to help with this in the future as a service to the public.

PerceptiLabs is a visual API for TensorFlow carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers to easily install the free local version of their system today. Thanks for watching and for your generous support, and I'll see you next time.
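As a toy illustration of the "assemble new words from previously recorded sounds" idea described above, here is a Python sketch that stitches a word together from phoneme snippets harvested from earlier recordings. The inventory and the simple crossfade are placeholders, nowhere near the careful search and blending the actual system performs.

```python
import numpy as np

def stitch_word(phonemes, inventory, sample_rate=16_000, crossfade_ms=10):
    """Concatenate recorded phoneme snippets with a short linear crossfade between them.

    phonemes:  list of phoneme labels for the new word, e.g. ["F", "AA", "K", "S"] for "fox".
    inventory: dict mapping phoneme label -> 1D waveform snippet taken from earlier recordings
               (for instance, the "AA K S" part harvested from an utterance of "box").
    """
    fade = int(sample_rate * crossfade_ms / 1000)
    out = inventory[phonemes[0]].astype(float)
    for label in phonemes[1:]:
        nxt = inventory[label].astype(float)
        ramp = np.linspace(0.0, 1.0, fade)
        out[-fade:] = out[-fade:] * (1.0 - ramp) + nxt[:fade] * ramp   # blend the seam
        out = np.concatenate([out, nxt[fade:]])
    return out

# Toy inventory built from random noise, just to show the shapes involved.
inventory = {p: np.random.randn(3200) for p in ["F", "AA", "K", "S"]}
word = stitch_word(["F", "AA", "K", "S"], inventory)
```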
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Carlos John A. Feher. Today we are going to burn some virtual trees. This is a fantastic computer graphics paper from four years ago. I ask you to hold onto your papers immediately and do not get surprised if it spontaneously lights on fire. Yes, this work is about simulating wood combustion and it is one of my favorite kinds of papers that takes an extremely narrow task and absolutely nails it. Everything we can possibly ask for from such assimilation is there. Each leaf has its own individual mass and area, they burn individually, transfer heat to their surroundings, and finally, branches bend and look can eventually even break in this process. If we look under the hood, we see that these trees are defined as a system of connected particles embedded within a physics simulator. These particles have their own properties, for instance, you see the temperature changes here at different regions of the tree as the fire gradually consumes it. Now, if you have been holding onto your papers, squeeze that paper and look. What do you think is this fire movement pre-programmed? It doesn't seem like it. This seems more like some real-time mouse movement, which is great news indeed. And yes, that means that this simulation and all the interactions we can do with it runs in real time. Here is a list of the many quantities it can simulate. Oh my goodness, there's so much yummy physics here. I don't even know where to start. Let's pick the water content here and see how changing it would look. This is a tree with a lower water content. It catches fire rather easily. And now, let's pour some rain on it. Then afterwards, look, it becomes much more difficult to light on fire and emits huge plumes of dense, dense smoke. And we can even play with these parameters in real time. We can also have a ton of fun by choosing non-physical parameters for the breaking coefficient, which of course can lead to the tree suddenly falling apart in a non-physical way. The cool thing here is that we can either set these parameters to physically plausible values and get a really realistic simulation or we can choose to bend reality in directions that are in line with our artistic vision. How cool is that? I could play with this all day. So, as an experienced scholar, you ask, OK, this looks great, but how good are these simulations really? Are they just good enough to fool the entry in die or are they indeed close to reality? I hope you know what's coming because what is coming is my favorite part in all simulation research and that is when we let reality be our judge and compare the simulation to that. This is a piece of real footage of a piece of burning wood and this is the simulation. Well, we see that the resolution of the fire simulation was a little limited. It was four years ago after all, however, it runs very similarly to the real life footage. Bravo! And all this was done in 2017. What a time to be alive! But we are not even close to be done yet. This paper teaches us one more important lesson. After publishing such an incredible work, it was accepted to the Cigraf Asia 2017 conference. That is, one of the most prestigious conferences in this research field. Getting a paper accepted here is equivalent to winning the Olympic gold medal of computer graphics research. So, with that, we would expect that the authors now revel in eternal glory. Right? Well, let's see. What? Is this serious? The original video was seen by less than a thousand people online. 
How can that be? And the paper was referred to only ten times by other works in these four years. Now, you see, that it is not so bad in computer graphics at all. It is an order, maybe even orders of magnitude smaller field than machine learning. But I think this is an excellent demonstration of why I started this series. And it is because I get so excited by these incredible human achievements and I feel that they deserve a little more love than they are given. And of course, these are so amazing. Everybody has to know about them. Happy to have you fellow scholars watching this and celebrating these papers with me for more than 500 episodes now. Thank you so much. It is a true honor to have such an amazing and receptive audience. This episode has been supported by weights and biases. In this post, they show you how to use their tool to fool a neural network to look at a pig and be really sure that it is an airliner. If you work with learning algorithms on a regular basis, make sure to check out weights and biases. Their system is designed to help you organize your experiments and it is so good it could shave off weeks or even months of work from your projects and is completely free for all individuals, academics and open source projects. This really is as good as it gets. And it is hardly a surprise that they are now used by over 200 companies and research institutions. Make sure to visit them through wnb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to weights and biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
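Here is a tiny sketch of the particle view described in the transcript above: each tree particle carries its own temperature and water content, burning particles heat their neighbors, and moisture has to boil off before ignition. All constants and the update rule are illustrative guesses, not the model from the paper.

```python
# Toy wood combustion on a chain of connected particles.

class Particle:
    def __init__(self, water=0.3):
        self.temp = 20.0          # degrees Celsius
        self.water = water        # stored moisture
        self.burning = False
        self.neighbors = []

IGNITION_TEMP = 300.0

def step(particles, dt=0.1):
    for p in particles:
        if p.burning:
            p.temp += 400.0 * dt                      # combustion releases heat
        for q in p.neighbors:                         # conduction to neighbors
            flow = 0.5 * (p.temp - q.temp) * dt
            p.temp -= flow
            q.temp += flow
    for p in particles:
        if p.temp > 100.0 and p.water > 0.0:
            p.water = max(0.0, p.water - 0.05 * dt)   # moisture evaporates first...
            p.temp -= 50.0 * dt                       # ...and soaks up heat doing so
        elif p.temp > IGNITION_TEMP:
            p.burning = True

# A small, fairly dry branch with one end set on fire.
branch = [Particle(water=0.1) for _ in range(5)]
for a, b in zip(branch, branch[1:]):
    a.neighbors.append(b)
branch[0].burning = True
for _ in range(2000):
    step(branch)
print([round(p.temp) for p in branch], [p.burning for p in branch])
```

Raising the `water` parameter delays ignition exactly in the spirit of the rain experiment mentioned above.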
|
Dear Fellow Scholars, this is two-minute papers with Dr. Kato Zsolnai-Fehir. I would like to show you some results from the area of 3D printing, a topic which is, I think, a little overlooked and show you that works in this field are improving at an incredible pace. Now, a common theme among research papers in this area is that they typically allow us to design objects and materials by thinking about how they should look. Let's see if this is really true by applying the second law of papers, which says whatever you're thinking about, there is already a two-minute papers episode on that. Let's see if it applies here. For instance, just prescribing a shape for 3D printing is old old news. Here is a previous technique that is able to print exotic materials. These are materials that we can start stretching and if we do, instead of thinning, they get fatter. We can also 3D print filigree patterns with ease. These are detailed thin patterns typically found in jewelry, fabrics and ornaments, and as you may imagine, crafting such motives on objects would be incredibly laborious to do by hand. We can also prescribe an image and 3D print an object that will cast a caustic pattern that shows exactly that image. Beautiful! And printing textured 3D objects in a number of different ways is also possible. This is called hydrographic printing and is one of the most flamboyant ways of doing that. So, what happens here? Well, we place a film in water, use a chemical activator spray on it, and shove the object in the water, and oh yes, there we go. Note that these were all showcased in previous episodes of this series. So, in 3D printing, we typically design things by how they should look. Of course, how else would we be designing? Well, the authors of this crazy paper don't care about locks at all. Well, what else would they care about if not the locks? Get this, they care about how these objects deform. Yes, with this work, we can design deformations, and the algorithm will find out what the orientation of the fibers should be to create a prescribed effect. Okay, but what does this really mean? This means that we can now 3D print really cool, fiber-like microstructures that deform well from one direction. In other words, they can be smashed easily and flattened a great deal during that process. I bet there was a ton of fun to be had at the lab on this day. However, research is not only fun and joy, look, if we turn this object around... Ouch! This side is very rigid and resists deformations well, so there was probably a lot of injuries in the lab that day too. So, clearly, this is really cool. But of course, our question is, what is all this good for? Is this really just an interesting experiment, or is this thing really useful? Well, let's see what this paper has to offer in 5 amazing experiments. Experiment number one, pliers. The jaws and the hand grips are supposed to be very rigid, checkmark. However, there needs to be a joint between them to allow us to operate it. This joint needs to be deformable and not any kind of deformable, but exactly the right kind of deformable to make sure it opens and closes properly. Lovely. Loving this one. 3D printing pliers from fiber-like structures. How cool is that? Experiment number two, structured plates. This shows that not all sides have to have the same properties. We can also print a material which has rigid and flexible parts on the same side, a few inches apart, thus introducing interesting directional bending characteristics. 
For instance, this one shows a strong collapsing behavior and can grip our finger at the same time. Experiment number three, bendy plates. We can even design structures where one side absorbs deformations while the other one transfers it forward, bending the whole structure. 4. Seat-like structures. The seat surface is designed to deform a little more to create a comfy sensation, but the rest of the seat has to be rigid to not collapse and last a long time. And finally, example number five, knee-like structures. These freely collapse in this direction to allow movement. However, there resist forces from any other direction. And these are really just some rudimentary examples of what this method can do, but the structures showcased here could be used in soft robotics, soft mechanisms, prosthetics, and even more areas. The main challenge of this work is creating an algorithm that can deal with these breaking patterns, which make for an object that is nearly impossible to manufacture. However, this method can not only eliminate these, but it can design structures that can be manufactured on low-end 3D printers, and it also uses inexpensive materials to accomplish that. And hold on to your papers because this work showcases a handcrafted technique to perform all this. Not a learning algorithm insight, and there are two more things that I really liked in this paper. One is that these proposed structures collapse way better than this previous method, and not only the source code of this project is available, but it is available for you to try on one of the best websites on the entirety of the internet, shader toy. So good! So, I hope you now agree that the field of 3D printing research is improving at an incredible pace, and I hope that you also had some fun learning about it. What a time to be alive! This episode has been supported by weights and biases. In this post, they show you how to use their tool to extract text with all kinds of sizes, shapes, and orientations from your images. During my PhD studies, I trained a ton of neural networks which were used in our experiments. However, over time, there was just too much data in our repositories, and what I am looking for is not data, but insight. And that's exactly how weights and biases helps you by organizing your experiments. It is used by more than 200 companies and research institutions, including OpenAI, Toyota Research, GitHub, and more. And get this, weights and biases is free for all individuals, academics, and open source projects. Make sure to visit them through wnba.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to weights and biases for their long-standing support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
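For a rough feel of why fiber orientation gives direction-dependent stiffness, here is a back-of-the-envelope sketch. It only evaluates the forward intuition with a classic cosine-to-the-fourth orientation weighting, which is an assumption; the paper solves the much harder inverse problem of designing the fiber layout for a prescribed deformation.

```python
# Crude estimate of direction-dependent stiffness for a set of fiber angles.
import math

def effective_stiffness(fiber_angles_deg, load_angle_deg, e_fiber=5.0, e_matrix=0.2):
    """Fibers aligned with the load carry most of it; misaligned ones barely help."""
    total = e_matrix
    for a in fiber_angles_deg:
        c = math.cos(math.radians(a - load_angle_deg))
        total += e_fiber * c ** 4 / max(1, len(fiber_angles_deg))
    return total

layout = [0, 0, 5, -5, 10]                 # fibers mostly along the 0-degree axis
print(effective_stiffness(layout, 0))      # stiff side: resists deformation
print(effective_stiffness(layout, 90))     # soft side: can be flattened easily
```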
|
Dear Fellow Scholars, this is two-minute papers with Dr. Kato Jornai-Fahir. Yes, you see it correctly, this is a paper, own paper. The paper paper, if you will. And today you will witness some amazing works in the domain of computer graphics and physics simulations. There is so much progress in this area. For instance, we can simulate honey coiling, baking and melting, bouncy jelly, and many related phenomena. And none of these techniques use any machine learning. These are all good, old-fashioned handcrafted algorithms. And using these, we can simulate stretching and compression to the point that muscle movement simulations are possible. When attaching muscles to bones, as we move the character, the muscles move and contract accurately. What's more, this work can even perform muscle growth simulations. So, are we done here? Did these ingenious computer graphics researchers max out physics simulation where there is nothing else to do? Oh no, of course not. Look, this footage is from an earlier computer graphics paper that simulates viscosity and melting fluids. And what I would like you to look at here is not what it does, but what it doesn't do. It starts melting these armadillos beautifully. However, there is something that it doesn't do, which is mixing. The material starts separate and remain separate. Can we improve upon that somehow? Well, this no paper promises that and so much more that it truly makes my head spin. For instance, it can simulate hyperelastic, elastoplastic, viscous, fracturing and multi-face coupling behaviors. And most importantly, all of these can be simulated within the same framework. Not one paper for each behavior, one paper that can do all of these. That is absolutely insane. So, what does all that mean? Well, I say let's see them all right now through five super fun experiments. Experiment number one. Wet papers. As you see, this technique handles the ball of water. Okay, we've seen that before and what else? Well, it handles the paper too. Okay, that's getting better, but hold on to your papers and look, it also handles the water's interaction with the paper. Now we're talking. And careful with holding onto that paper, because if you do it correctly, this might happen. As you see, the arguments contained within this paper really hold water. Experiment number two. Fracturing. As you know, most computer graphics papers on physics simulation contain creative simulations to destroying armadillos in the most spectacular fashion. This work is of course no different. Yum. Experiment number three. This solution. Here we take a glass of water, add some starch powder. It starts piling up. And then slowly starts to dissolve. A note that the water itself also becomes stickier during the process. Number four. Dipping. We first take a piece of biscuit and dip it into the water. Note that the coupling works correctly here. In other words, the water now moves, but what is even better is that the biscuit started absorbing some of that water. And now when we repeat a part. Oh yes, excellent. And as a light transport researcher by trade, I love watching the shape of biscuits distorted here due to the refraction of the water. This is a beautiful demonstration of that phenomenon. And number five, the dog. What kind of dog you ask? Well, this virtual dog gets a big splash of water, starts shaking it off and manages to get rid of most of it. But only most of it. And it can do all of these using one algorithm. 
Not one per each of these beautiful phenomena, one technique that can perform all of these. There is absolutely amazing. But it does not stop there. It can also simulate snow and it not only does it well, but it does that swiftly. How swiftly? It simulated this a bit faster than one frame per second. The starch powder experiment was about one minute per frame. And the slowest example was the dog shaking off the bowl of water. The main reason for this is that it required near a quarter million particles of water and for hair. And when the algorithm computes these interactions between them, it can only advance the time in very small increments. It has to do this a hundred thousand times for each second of footage that you see here. Based on how much computation there is to do, that is really, really fast. And don't forget that the first law of paper says that research is a process. Do not look at where we are, look at where we will be, two more papers down the line. And even now, the generality of this system is truly something to behold. Congratulations to the authors on this amazing paper. What a time to be alive! So if you wish to read a beautifully written paper today that does not dissolve in your hands, I highly recommend this one. This episode has been supported by weights and biases. In this post, they show you how to use their tool to perform distributed hyper parameter optimization at scale. Weight and biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that weights and biases is free for all individuals, academics, and open source projects. It really is as good as it gets. Make sure to visit them through wnb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to weights and biases for their long-standing support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
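Here is a minimal sketch of the sub-stepping the narration above refers to: the solver can only advance time in tiny, stability-limited increments, so one second of footage may need on the order of a hundred thousand substeps. The step-size rule below is a generic CFL-style heuristic and the `advance` function is a placeholder, not the paper's actual solver.

```python
# Sub-stepping loop: take as many small, stable steps as needed per frame.

def advance(state, dt):
    # Placeholder for the real update (particle-grid transfer, forces, collisions).
    state["vmax"] *= 0.999

def simulate_one_frame(state, frame_dt=1.0 / 24.0, cell_size=1e-3, cfl=0.5):
    t, substeps = 0.0, 0
    while t < frame_dt:
        # Stability-limited step: faster material motion forces smaller steps.
        dt = min(cfl * cell_size / max(state["vmax"], 1e-6), frame_dt - t)
        advance(state, dt)
        t += dt
        substeps += 1
    return substeps

state = {"vmax": 2.0}
print(simulate_one_frame(state), "substeps for a single frame")
```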
|
Dear Fellow Scholars, this is 2 Minute Papers with Dr. Károly Zsolnai-Fehér. This day is the perfect day to simulate the kinematics of yarn and cloth on our computers. As you just saw, this is not the usual intro that we use in every episode, so what could that be? Well, this is a simulation specifically made for us using a technique from today's paper. And it has a super high stitching density, which makes it all the better by the end of this video you will know exactly what that means and why that matters. But first, for context, I would like to show you what researchers were able to do in 2012 and we will see together how far we have come since. This previous work was about creating these highly detailed cloth geometries for digital characters. Here you see one of its coolest results where it shows how the simulated forces pull the entire piece of garment together. We start out by dreaming up a piece of cloth geometry and this simulator gradually transforms it into a real world version of that by subjecting it to real physical forces. This is a step that we call the yarn level relaxation. So this paper was published in 2012 and now nearly 9 years have passed, so I wonder how far we have come since. Well, we can still simulate knitted and woven materials through similar programs that we call direct yarn level simulations. Here's one. I think we can all agree that these are absolutely beautiful, so was the catch. The catch is that this is not free, there is a price we have to pay for these results. Look, whoa, these really take forever. We are talking several hours or for this one almost even an entire day to simulate just one piece of garment. And it gets worse. Look, this one takes more than two full days to compute. Imagine how long we would have to wait for complex scenes in a feature length movie with several characters. Now of course, this problem is very challenging and to solve it we have to perform a direct yarn level simulation. This means that every single strand of yarn is treated as an elastic rod and we have to compute how they react to external forces, bending deformations and more. That takes a great deal of computation. So, our question today is, can we do this in a more reasonable amount of time? Well, the first law of papers says that research is a process. Do not look at where we are, look at where we will be, two more papers down the line. Let's see if the law holds up here. This new paper promises to retain many important characteristics of the full simulation, but takes much less time to compute. That is amazing. Things stretch, bend and even curl up similarly, but the simulation time is cut down a great deal. How much less? Well, this one is five times faster. That's great. This one 20 times, oh my. But it gets better. Now hold onto your papers and look. This one is almost 60 times faster than the full yarn level simulation. My goodness. However, of course it's not perfect. In return, pulling effects on individual yarns is neglected, so we lose the look of this amazing holy geometry. I'd love to get that back. Now these examples were using materials with relatively small stitching density. The previous full yarn level simulation method scales with the number of yarn segments we add to the garment. So, what does this all mean? This means that higher stitching density gives us more yarn strands and the more yarn strands there are, the longer it takes to simulate them. 
In these cases you can see the knitting patterns, so there aren't that many yarns and even with that it still took multiple days to compute a simulation with one piece of garment. So, I hope you know what's coming. It can also simulate super high stitching densities efficiently. What does that mean? It means that it can also simulate materials like the satin example here. This is not bad by any means, but similar simulations can be done with much simpler simulators, so our question is why does this matter? Well, let's look at the backside here and marvel at this beautiful scene showcasing the second best curl of the day. Loving it. And now, I hope you're wondering what the best curl of the day is. Here goes, this is the one that was showcased in our intro, which is 2 minute papers curling up. You can only see this footage here. Beautiful. Huge congratulations and a big thank you to Gerox Spel, the first author of the paper, who simulated this 2 minute paper scene only for us and with a super high stitching density. That is quite an honor. Thank you so much. As I promised, we now understand exactly why the stitching density makes it all the better. And if you have been watching this series for a while, I am sure you have seen computer graphics researchers destroy armadillos in the most spectacular manner. Make sure to leave a comment if this is the case. That is super fun and I thought there must be a version of that for cloth simulations. And of course there is. And now please meet the yarn modelo. The naming game is very strong here. And just one more thing, Gerox Spel, the first author of this work was a student of mine in 2014 at the Technical University of Vienna where he finished a practical project in light simulation programs and he did excellent work there. Of course he did. And I will note that I was not part of this project in any way, I am just super happy to see him come so far since then. He is now nearing the completion of his PhD and this simulation paper of his was accepted to the CIGRAF conference. That is as good as it gets. Well done Gerox. This episode has been supported by weights and biases. In this post they show you how to use their tool to build the foundations for your own hedge fund and analyze the stock market using learning based tools. If you work with learning algorithms on a regular basis make sure to check out weights and biases. Their system is designed to help you organize your experiments and it is so good it could shave off weeks or even months of work from your projects and is completely free for all individuals, academics and open source projects. This really is as good as it gets and it is hardly a surprise that they are now used by over 200 companies and research institutions. Make sure to visit them through wnbe.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to weights and biases for their long standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
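To illustrate the "elastic rod" view of a single yarn mentioned above, here is a minimal sketch: a chain of point masses joined by stretch springs and integrated with small explicit steps. Bending, twisting, contact and, crucially, the paper's homogenization into a fast cloth model are all left out, and the constants are made up.

```python
# One yarn strand as a pinned chain of stretch springs under gravity.
import numpy as np

n = 20
x = np.stack([np.linspace(0.0, 1.0, n), np.zeros(n), np.zeros(n)], axis=1)
v = np.zeros_like(x)
rest = 1.0 / (n - 1)                       # rest length of each yarn segment
k_stretch, mass, dt = 500.0, 1e-3, 1e-4
gravity = np.array([0.0, -9.81, 0.0])

for _ in range(20000):                     # two seconds of simulated time
    f = np.tile(mass * gravity, (n, 1))
    seg = x[1:] - x[:-1]
    length = np.linalg.norm(seg, axis=1, keepdims=True)
    u = seg / np.maximum(length, 1e-9)
    spring = k_stretch * (length - rest) * u          # Hooke's law per segment
    f[:-1] += spring
    f[1:] -= spring
    v = (v + dt * f / mass) * 0.999                   # integrate plus mild damping
    x = x + dt * v
    x[0], v[0] = 0.0, 0.0                             # pin one end of the yarn

print("free end of the yarn hangs at", np.round(x[-1], 3))
```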
|
Dear Fellow Scholars, this is two minute papers with Dr. Karo Jolai-Fehir. In June 2020, OpenAI published an incredible AI-based technique by the name ImageGPT. The problem here was simple to understand, but nearly impossible to actually do, so here goes, we give it an incomplete image, and we ask the AI to fill in the missing pixels. That is, of course, an immensely difficult task because these images made the picked any part of the world around us. It would have to know a great deal about our world to be able to continue the images, so how did it do? Let's have a look. This is undoubtedly a cat, but look, see that white part that is just starting? The interesting part has been sneakily cut out of the image. What could that be? A piece of paper, something else? Now, let's leave the dirty work to the machine and ask it to finish it. Oh yeah, that makes sense. Now, even better, let's have a look at this water droplet example too. We humans know that since we see the remnants of ripples over there too, there must be a splash, but the question is, does the AI know that? Oh yes, yes it does. Amazing. And the true image for reference. But wait a second, if image GPT could understand that this is a splash and finish the image like this, then here is an absolutely insane idea if a machine can understand that this is a splash, could it maybe not only finish the photo, but make a video out of it? Yes, that is indeed an absolutely insane idea, we like those around here. So, what do you think? Is this a reasonable question or is this still science fiction? Well, let's have a look at what this new learning base method does when looking at such an image. It will do something very similar to what we would do, look at the image, estimate the direction of the motion, recognize that these ripples should probably travel outwards and based on the fact that we've seen many splashes in our lives, if we had the artistic skill, we could surely fill in something similar. So, can the machine do it too? And now, hold on to your papers because this technique does exactly that. Whoa! Please meet Eulerian motion synthesis and it not only works amazingly well, but look at the output video, it even loops perfectly. Yum yum yum, I love it. And it works mostly on fluid and smoke. I like that. I like that a lot because fluids and smoke have difficult, but predictable motion. That is an excellent combination for us, especially given that you see plenty of those simulations on this channel. So, if you are a long time fellow scholar, you already have a key knife for them. Here are a few example images paired with the synthesized motion fields, these define the trajectory of each pixel or in other words regions that the AI thinks should be animated and how it thinks should be animated. Now, it gets better, I have found three things that I did not expect to work, but was pleasantly surprised that they did. One, reflections, kind of work. Two, fire, kind of works. And now, if you have been holding on to your paper so far, now squeeze that paper because here comes the best one. Three, my beard works too. Yes, you heard it right. Now, first things first, this is not any kind of beard, this is an algorithmic beard that was made by an AI. And now, it is animated as if it were a piece of fluid using a different AI. Of course, this is not supposed to be a correct result, just a happy accident, but in any case, this sounds like something straight out of a science fiction movie. 
I also like how this has a nice Obi-Wan Kenobi quality to it, loving it. Thank you very much to my friend Oliver Wang and the authors for being so kind and generating these results only for us. That is a huge honor. Thank you. This previous work is from 2019 and creates high-quality motion, but has a limited understanding of the scene itself. And of course, let's see how the new method fares in these cases. Oh yeah, this is a huge leap forward. And what I like even better here is that new research techniques often provide different trade-offs than previous methods, but are rarely strictly better than them. In other words, competing techniques usually do some things better and some other things worse than their predecessors, but not this. Look, this is so much better across the board. That is such a rare sight. Amazing. Now, of course, not even this technique is perfect. For example, this part of the image should have been animated, but remains stationary. Also, even though it did well with reflections, reflection is a tougher nut to crack. Finally, thin geometry also still remains a challenge, but this was one paper that made the impossible possible, and just think about what we will be able to do two more papers down the line. My goodness. What a time to be alive. This episode has been supported by weights and biases. In this post, they show you how to use their tool to get a neural network to generate captions for your images using an attention model. During my PhD studies, I trained a ton of neural networks which were used in our experiments. However, over time, there was just too much data in our repositories, and what I am looking for is not data, but insight. And that's exactly how weights and biases helps you, by organizing your experiments. It is used by more than 200 companies and research institutions, including OpenAI, Toyota Research, GitHub, and more. And get this: weights and biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to weights and biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
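Here is a minimal sketch of the Eulerian idea from the episode above: one still image plus a static per-pixel motion field, where every output frame simply pushes pixels a little further along that field. The actual method predicts the field with a neural network and splats learned features symmetrically to obtain seamless loops; this sketch only backward-warps a random stand-in "photo" along a hand-made downward flow.

```python
# Animate a single image by integrating a static motion field.
import numpy as np

def make_frame(image, flow, t):
    """Nearest-neighbor backward warp: sample where each pixel came from t frames ago."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    src_x = np.clip(np.round(xs - t * flow[..., 0]), 0, w - 1).astype(int)
    src_y = np.clip(np.round(ys - t * flow[..., 1]), 0, h - 1).astype(int)
    return image[src_y, src_x]

h, w = 64, 64
image = np.random.rand(h, w, 3)            # stand-in for a waterfall photo
flow = np.zeros((h, w, 2))
flow[..., 1] = 1.0                         # every pixel drifts downward, one pixel per frame

frames = [make_frame(image, flow, t) for t in range(30)]
print(len(frames), frames[0].shape)
```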
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Virtual Reality, or VR in short, is maturing at a rapid pace and its promise is truly incredible. If one day it comes to fruition, doctors could be trained to perform surgery in a virtual environment, we could train pilots with better flight simulators, teach astronauts to deal with zero-gravity simulations, you name it. And as you will see, with today's paper we are able to visit nearly any place from afar. Now, to be able to do anything in a virtual world, we have to put on a head-mounted display that can tell the orientation of our head, and often hands, at all times. So what can we do with this? Oh boy, a great deal. For instance, we can type on a virtual keyboard or implement all kinds of virtual user interfaces that we can interact with. We can also organize imaginary boxes, and of course we can't leave out the Two Minute Papers favorite: going into a physics simulation and playing with it with our own hands. In this previous work, hand-hand interactions did not work too well, which was addressed one more paper down the line, which absolutely nailed the solution. This follow-up work would look at our hands in challenging hand-hand interactions and could deal with deformations, lots of self-contact and self-occlusion. Take a look at this footage. And look, interestingly, they also recorded the real hand model with gloves on. We might think, what a curious design decision. What could that be for? Well, what you see here is not a pair of gloves, what you see here is the reconstruction of the hand by this follow-up paper. This is all great when we play in a computer game, because the entire world around us was previously modeled, so we can look and go anywhere, anytime. But what about operating in the real world? What if we wish to look at a historic landmark from before? Well, in this case, someone needs to capture a 360-degree photo of it, because we can turn our head around and look behind things. And this is what today's paper will be about. This new paper is called OmniPhotos and it helps us produce this kind of 360-degree view synthesis, and when we put on that head-mounted display, we can really get a good feel of a remote place, a group photo or an important family festivity. So clearly the value proposition is excellent, but we have two questions. One, what do we have to do for it? Flailing. Yes, we need to be flailing. You see, we need to attach a consumer 360 camera to a selfie stick and start flailing for about 10 seconds like this. This is a crazy idea because now we created a ton of raw data, roughly what you see here. So this is a deluge of information, and the algorithm needs to crystallize all this mess into a proper 360 photograph. What is even more difficult here is that this flailing will almost never create a perfect circle trajectory, so the algorithm first has to estimate the exact camera positions and view directions. And hold on to your papers, because the entirety of this work is handcrafted, no machine learning inside, and the result is a quite general technique, or in other words, it works on a wide variety of real-world scenes; you see a good selection of those here. Excellent. Our second question is, this is of course not the first method published in this area, so how does it relate to previous techniques? Is it really better? Well, let's see for ourselves.
Previous methods either suffered from not allowing too much motion or the other ones that give us more freedom to move around did it by introducing quite a bit of warping into the outputs. And now let's see if the new method improves upon that. Oh yeah, a great deal. Look, we have the advantages of both methods, we can move around freely and additionally there is much less warping than here. Now of course not even this new technique is perfect. If you look behind the building you see that the warping hasn't been completely eliminated but it is a big step ahead of the previous paper. We will look at some more side by side comparisons, one more bonus question, what about memory consumption? Well, it eats over a gigabyte of memory. That is typically not too much of a problem for desktop computers but we might need a little optimization if we wish to do these computations on a mobile device. And now comes the best part. You can browse these omnifortals online through the link in the video description and even the source code and the Windows-based demo is available that works with and without a VR headset. Try it out and let me know in the comments how it went. So with that we can create these beautiful omnifortals cheaply and efficiently and navigate the real world as if it were a computer game. What a time to be alive. What you see here is a report for a previous paper that we covered in this series which was made by Wades and Biasis. Wades and Biasis provides tools to track your experiments in your deep learning projects. Your system is designed to save you a ton of time and money and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that Wades and Biasis is free for all individuals, academics and open source projects. It really is as good as it gets. Make sure to visit them through wnb.com slash papers or just click the link in the video description and you can get a free demo today. Thanks for watching and for your generous support and I'll see you next time.
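For a very rough intuition about the view-synthesis step discussed above, here is a heavily simplified sketch: once the camera poses along the roughly circular capture path are known, a novel view can be approximated by blending the two captured images whose viewing angles bracket the requested one. The real OmniPhotos pipeline adds flow-based blending and a scene-adaptive proxy geometry; this is only the nearest-view intuition, with made-up data.

```python
# Blend the two captured views that bracket the desired viewing angle.
import numpy as np

def synthesize(view_angle, captured):
    """captured: list of (angle_radians, image). Returns a blended novel view."""
    captured = sorted(captured, key=lambda c: c[0])
    angles = np.array([c[0] for c in captured])
    i = int(np.searchsorted(angles, view_angle) % len(captured))
    a0, img0 = captured[i - 1]
    a1, img1 = captured[i]
    span = (a1 - a0) % (2 * np.pi) or 1e-9
    w = ((view_angle - a0) % (2 * np.pi)) / span
    return (1 - w) * img0 + w * img1

# 36 dummy captures spread around the circle, then a novel in-between view.
captured = [(a, np.full((4, 4, 3), a)) for a in np.linspace(0, 2 * np.pi, 36, endpoint=False)]
novel = synthesize(0.1, captured)
print(novel.shape)
```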
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to cover many important questions in life. For instance, who is this? This is Halle Berry. Now I'll show you this. Who is it? It is, of course, also Halle Berry. And if I show you this piece of text, who does it refer to? Again, Halle Berry. So, why are these questions interesting? Well, an earlier paper found out from brain readings that we indeed have person neurons in our brain. These are neurons specialized to recognize a particular human being. That is quite interesting. And what is even more interesting is not that we have person neurons, but that these neurons are multimodal. What does that mean? This means that we understand the essence of what makes Halle Berry, regardless of whether it is a photo, a drawing, or anything else. I see. Alright. Well then, our first question today is, do neural networks also have multimodal neurons? Not necessarily. The human brain is an inspiration for the artificial neural networks that we can simulate on our computers, but do they work like the brain? Well, if we study their inner workings, our likely answer will be no, not in the slightest. But still, no one can stop us from a little experimentation, so let's try this for a common neural network architecture. This neuron responds to human faces and says that this is indeed a human face. So far so good. Now, if we provide it with a drawing of a human face, it won't recognize it to be a face. Well, so much for this multimodal idea, this one is surely not a brain in a jar. But wait, we don't give up so easily around here. This is not the only neural network architecture that exists. Let's grab a different one. This one is called OpenAI's CLIP, and it is remarkably good at generalizing concepts. Let's see how it can deal with the same problem. Yes, this neuron responds to spiders and Spider-Man. That's the easy part. Now please hold on to your papers because now comes the hard part. Drawings and comics of spiders and Spider-Man. Yes, it responds to that too. Wonderful. Now comes the final boss, which is spider-related writings. And it responds to that too. Now of course, this doesn't mean that this neural network would be a brain in a jar, but it is a tiny bit closer to our thinking than previous architectures. And now comes the best part. This insight opens up the possibility for three amazing experiments. Experiment number one. Essence. So it appears to understand the essence of a concept or a person. That is absolutely amazing. So I wonder if we can turn this problem around and ask what it thinks about different concepts. It would be equivalent to saying, give me all things spiders and Spider-Man. Let's do that with Lady Gaga. It says this is the essence of Lady Gaga. We get the smug smile. Very good. And it says that the essence of Jesus Christ is this, and it also includes the crown of thorns. So far, flying colors. Now we will see some images of feelings, and some of you might find some of them disturbing. I think the vast majority of you humans will be just fine looking at them, but I wanted to let you know just in case. So what is the essence of someone being shocked? Well, this. I can attest that this is basically me when reading this paper. My eyes were popping out just like this. Sleepiness. Yes, that is a coffee person before coffee, all right. The happiness, crying and seriousness neurons also embody these feelings really well. Experiment number two. Adversarial attacks.
We know that open AI's clip responds to photos and drawings of the same thing. So let's try some nasty attacks involving combining the two. When we give it these images, it can classify them with ease. This is an apple. This is a laptop, a mug and so on. Nothing too crazy going on here. However, now let's prepare a nasty adversarial attack. Previously sophisticated techniques were developed to fool a neural network by adding some nearly imperceptible noise to an image. Let's have a look at how this works. First, we present the previous neural network with an image of a bus and it will successfully tell us that yes, this is indeed a bus. Of course it does. Now we show it not an image of a bus, but a bus plus some carefully crafted noise that is barely perceptible that forces the neural network to misclassify it as an ostrich. I will stress that this is not any kind of noise, but the kind of noise that exploits biases in the neural network, which is by no means trivial to craft. So now I hope you are expecting a sophisticated adversarial attack against the wonderful clip neural network. Yes, that will do. Or will it? Let's see together. Yes, indeed I don't know if you knew, but this is not an apple, this is a pizza. And so is this one. The neural network fell for these ones, but it was able to resist this sophisticated attack in the case of the coffee mug and the phone. Perhaps the pizza labels had too small a footprint in an image, so let's try an even more sophisticated version of this attack. Now you may think that this is a Chihuahua, but that is completely wrong because this is a pizza indeed. Not a Chihuahua inside anywhere in this image. No, no. So what did we learn here? Well, interestingly, this clip neural network is more general than previous techniques, however, its superpowers come at a price. And that price says that it can be exploited easily with simple systematic attacks. That is a great lesson indeed. Experiment number three, understanding feelings. Now this will be one heck of an experiment. We will try to answer the age old question, which is how would you describe feelings to a machine? Well, it's hard to explain such a concept, but all humans understand what being bored means. However, luckily, these neural networks have neurons and they can use those to explain to us what they think about different concepts. An interesting idea here is that feelings could sort of emerge as a combination of other more elementary neurons that are already understood. If this sounds a little nebulous, let's go with that example. What does the machine think it means that someone is bored? Well, it says that bored is relaxed plus grumpy. This isn't quite the way I think about it, but not bad at all little machine. I like the way you think. Let's try one more. How do we explain to a machine what a surprise means? Well it says surprise is celebration plus shock. Nice. What about madness? Let's see, evil plus serious plus a hint of mental illness. And when talking about combinations, there are two more examples that I really liked. If we are looking for text that embodies evil, we get something like this. And now give me an evil building. Oh yes, I think that works really well, but there are internet forums where we have black belt experts specializing in this very topic. So if you're one of them, please let me know in the comments what you think. The paper also contains a ton more information. For instance, there is an experiment with the strupe effect. 
This explores whether the neural network reacts to the meaning of the text or the color of the text. I will only tease this because I would really like you to read the paper, which is available below in the video description. So there we go, neural networks are by no means brains in a jar. They are very much computer programs. However, CLIP has some similarities, and we also found that there is a cost to that. What a time to be alive. What you see here is a report on this exact paper we have talked about, which was made by weights and biases. I put a link to it in the description. Make sure to have a look. I think it helps you understand this paper better. If you work with learning algorithms on a regular basis, make sure to check out weights and biases. Their system is designed to help you organize your experiments and it is so good it could shave off weeks or even months of work from your projects and is completely free for all individuals, academics and open source projects. This really is as good as it gets and it is hardly a surprise that they are now used by over 200 companies and research institutions. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to weights and biases for their long standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
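Here is a small zero-shot test in the spirit of the typographic attack described above, using OpenAI's open-source CLIP package (installable via pip from github.com/openai/CLIP). The image file names are placeholders: "apple.jpg" would be a plain apple, and "apple_with_pizza_label.jpg" the same apple with a handwritten "pizza" note stuck on it, which is often enough to flip the prediction.

```python
# Zero-shot classification with CLIP, with and without a typographic label.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
labels = ["a photo of an apple", "a photo of a pizza", "a photo of a laptop"]
text = clip.tokenize(labels).to(device)

for path in ["apple.jpg", "apple_with_pizza_label.jpg"]:   # placeholder file names
    image = preprocess(Image.open(path)).unsqueeze(0).to(device)
    with torch.no_grad():
        logits_per_image, _ = model(image, text)
        probs = logits_per_image.softmax(dim=-1).squeeze(0)
    print(path, {label: round(float(p), 3) for label, p in zip(labels, probs)})
```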
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today, we are going to talk about video stabilization. A typical application of this is when we record family memories and other cool events and sometimes the footage gets so shaky that we barely know what is going on. In these cases, video stabilization techniques can come to the rescue, which means that in goes a shaky video and out comes a smooth video. Well, that is easier said than done. Despite many years of progress, there is a great selection of previous methods that can do that, however, they suffer from one of two issues. Issue number one is cropping. This means that we get usable results, but we have to pay a great price for it, which is cropping away a great deal of the video content. Issue number two is when we get the entirety of the video. No cropping, however, the price to be paid for this is that we get lots of issues that we call visual artifacts. Unfortunately, today, when we stabilize, we have to choose our poison. It's either cropping or artifacts. Which one would you choose? That is difficult to decide, of course, because none of these two trade-offs are great. So, our question today is, can we do better? Well, the law of papers says that, of course, just one or two more papers down the line and this will be way better. So, let's see. This is what this new method is capable of. Hold on to your papers and notice that this will indeed be a full-size video so we already know that probably there will be artifacts. But, wait a second. No artifacts. Whoa! How can that be? What does this new method do that previous techniques didn't? These magical results are a combination of several things. One, the new method can estimate the motion of these objects better. Two, it removes blurred images from the videos and three, collects data from neighboring video frames more effectively. This leads to a greater understanding of the video it is looking at. Now, of course, not even this technique is perfect. Rapid camera motion may lead to warping and if you look carefully, you may find some artifacts, usually around the sides of the screen. So far, we have looked at previous methods and the new method. It seems better. That's great. But, how do we measure which one is better? Do we just look? An even harder question would be if the new method is indeed better, okay, but by how much better is it? Let's try to answer all of these questions. We can evaluate these techniques against each other in three different ways. One, we can look at the footage ourselves. We have already done that and we had to tightly hold on to our papers. It has done quite well in this test. Test number two is a quantitative test. In other words, we can mathematically define how much distortion there is in an output video, how smooth it is and more, and compare the output videos based on these metrics. In many cases, these previous techniques are quite close to each other. And now, let's unveil the new method. Whoa! It's called best or second best on six out of eight tests. This is truly remarkable, especially given that some of these competitors are from less than a year ago. That is nimble progress in machine learning research. Loving it. And the third way to test which technique is better and by how much is by conducting a user study. The authors have done that too. In this, 46 humans were called in, were shown the shaky input video, the result of a previous method and the new method and were asked three questions. 
Which video preserves the most content, which has fewer imperfections and which is more stable? And the results were stunning. Despite looking at many different competing techniques, the participants found the new method to be better at the very least 60% of the time on all three questions. In some cases, even 90% of the time, or higher. Praise the papers. Now, there is only one question left. If it is so much better than previous techniques, how much longer does it take to run? With one exception, these previous methods take from half a second to about 7.5 seconds per frame and this new one asks for 9.5 seconds per frame. And in return, it creates these absolutely amazing results. So, from this glorious day on, fewer or maybe no important memories will be lost due to camera shaking. What a time to be alive! Perceptilebs is a visual API for TensorFlow, carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables and gives you recommendations both during modeling and training and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilebs.com slash papers to easily install the free local version of their system today. Our thanks to perceptilebs for their support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
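For context on the smoothness metric mentioned above, here is the classical baseline idea that learning-based stabilizers like this one compete with: estimate how the camera moved each frame, low-pass filter that trajectory, and warp each frame by the difference. This is explicitly not the paper's learned, full-frame method; it is the traditional crop-prone pipeline, sketched with a synthetic shaky trajectory standing in for real motion estimates.

```python
# Classical stabilization baseline: smooth the accumulated camera trajectory.
import numpy as np

def smooth(trajectory, radius=15):
    """Moving-average filter over per-frame (dx, dy, dangle) camera motion."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(trajectory, ((radius, radius), (0, 0)), mode="edge")
    return np.stack([np.convolve(padded[:, i], kernel, mode="valid")
                     for i in range(trajectory.shape[1])], axis=1)

rng = np.random.default_rng(0)
per_frame_motion = rng.normal(0, 2.0, size=(300, 3))   # shaky dx, dy, dangle per frame
trajectory = np.cumsum(per_frame_motion, axis=0)       # where the camera drifted over time
correction = smooth(trajectory) - trajectory           # warp to apply to each frame
print(correction.shape, "per-frame corrections; warping by these is what forces cropping")
```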
|
Dear Fellow Scholars, this is 2 Minute Papers with Dr. Karo Zonai-Fehir. Today, I will try to show you the incredible progress in computer graphics research through the lens of bubbles in computer simulations. Yes, bubbles indeed. Approximately a year ago, we covered a technique which could be used to add bubbles to an already existing fluid simulation. This paper appeared in 2012 and described a super simple method that helped us compute where bubbles appear and disappear over time. The best part of this was that this could be added after the simulation has been finalized, which is an insane value proposition. If we find ourselves yearning for some bubbles, we just add them afterwards, and if we don't like the results, we can take them out with one click. Now, simulations are not only about sites, what about sounds? In 2016, this paper did something that previously seemed impossible. It took this kind of simulation data and made sure that now we can not only add bubbles to a plain water simulation, but also simulate how they would sound. On the geometry side, a follow-up paper appeared just a year later that could simulate a handful of bubbles colliding and sticking together. Then, three years later, in 2020, Christopher Betty's group also proposed a method that was capable of simulating merging and coalescing behavior on larger-scale simulations. So, what about today's paper? Are we going even larger with hundreds of thousands or maybe even millions of bubbles? No, we are going to take just one bubble or at most a handful and have a real close look at a method that is capable of simulating these beautiful, evolving rainbow patterns. The key to this work is that it is modeling how the thickness of the surfaces changes over time. That makes all the difference. Let's look under the hood and observe how much of an effect the evolving layer thickness has on the outputs. The red color coding represents thinner and the blue shows us the thicker regions. This shows us that some regions in these bubbles are more than twice as thick as others. And there are also more extreme cases, there is a six-time difference between this and this part. You can see how the difference in thickness leads to waves of light interfering with the bubble and creating these beautiful rainbow patterns. You can't get this without a proper simulator like this one. Loving it. This variation in thicknesses is responsible for a selection of premium quality effects in a simulation, beyond surface vertices, interference patterns can also be simulated, deformation-dependent rupturing of soap films. This incredible technique can simulate all of these phenomena. And now our big question is, okay, it simulates all of these, but how well does it do that? It is good enough to fool the human eye, but how does it compare to the strictest adversary of all? Reality. I hope you know what's coming. Oh yeah, hold on to your papers because now we will let reality be our judge and compare the simulated results to that. That is one of the biggest challenges in any kind of simulation research, so let's see. This is a piece of real footage of a curved soap film surface where these rainbow patterns get convicted by an external force field. Beautiful. And now let's see the simulation. Wow, this has to be really close. Let's see them side by side and decide together. Whoa. The match in the Swirly region here is just exceptional. 
Now, note that even if the algorithm is 100% correct, this experiment cannot be a perfect match because not only the physics of the soap film has to be simulated correctly, but the forces that move the rainbow patterns as well. We don't have this information from the real-world footage, so the authors had to try to reproduce these forces, which is not part of the algorithm, but a property of the environment. So I would say that this footage is as close as one can possibly get. My goodness, well done. So how much do we have to pay for this in terms of computation time? If you ask me, I would pay at the very least double for this. And now comes the best part. If you have been holding onto your paper so far, now squeeze that paper because in the cheaper cases, only 4% to 7% extra computation, which is outrageous. There is this more complex case with the large deforming sphere. In this case, the new technique indeed makes a huge difference. So how much extra computation do we have to pay for this? Only 31%. 31% extra computation for this. That is a fantastic deal. You can sign me up right away. As you see, the pace of progress in computer graphics research is absolutely incredible and these simulations are just getting better and better by the day. Imagine what we will be able to do just two more papers down the line. What a time to be alive. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
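To see why film thickness drives those rainbow patterns, here is a small physics sketch: two-beam thin-film interference for a soap film in air, where the reflected intensity per wavelength goes as sin²(2πnd·cosθ/λ). Converting the spectrum to RGB with three representative wavelengths is a crude stand-in for proper spectral rendering, and none of this is the paper's solver, which also evolves the thickness field over time.

```python
# Thin-film interference: reflected RGB as a function of film thickness.
import math

def film_rgb(thickness_nm, n=1.33, cos_theta=1.0):
    rgb = []
    for wavelength_nm in (610.0, 550.0, 465.0):      # rough R, G, B wavelengths
        phase = 2.0 * math.pi * n * thickness_nm * cos_theta / wavelength_nm
        rgb.append(math.sin(phase) ** 2)             # pi phase flip at the top surface
    return rgb

for d in (100, 200, 300, 400, 600):                  # film thickness in nanometres
    print(d, "nm ->", [round(c, 2) for c in film_rgb(d)])
```

Even a twofold thickness difference, like the one color-coded in the episode, lands these three channels on very different parts of their interference curves, which is where the color variation comes from.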
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we get to be paper historians and witness the amazing progress in machine learning research together, and learn what is new in the world of NeRFs. But first, what is a NeRF? In March of 2020, a paper appeared describing an incredible technique by the name Neural Radiance Fields, or NeRF in short. This work enables us to take a bunch of input photos and their locations, learn from them, and synthesize new, previously unseen views of not just the materials in the scene, but the entire scene itself. And here we are talking not only digital environments, but also real scenes as well. Now, that's quite a value proposition, especially given that it also supported refractive and reflective surfaces. These are both quite a challenge. However, of course, NeRF had its limitations. For instance, in many cases it had trouble with scenes with variable lighting conditions and lots of occluders. And to my delight, only five months later, in August of 2020, a follow-up paper appeared by the name NeRF in the Wild, or NeRF-W in short. Its speciality was tourist attractions that a lot of people take photos of, where we then have a collection of photos taken at different times of the day and, of course, with a lot of people around. And lots of people, of course, means lots of occluders. NeRF-W improved the original algorithm to excel more in cases like this. A few months later, on November 25th, 2020, another follow-up paper appeared by the name Deformable Neural Radiance Fields. The goal here was to take a selfie video and turn it into a portrait that we can rotate around freely. This is something that the authors call a Nerfie. If we take the original NeRF technique to perform this, we see that it does not do well at all with moving things, and that's where the new deformable variant really shines. And today's paper not only has some nice video results embedded in the front page, but it offers a new take on this problem and promises, quote, space-time view synthesis of dynamic scenes. Whoa! That is amazing. But what does that mean? What does this paper really do? The space-time view synthesis means that we can record a video of someone doing something. Since we are recording, there is movement in time, and there is also movement in space, or in other words, the camera is moving. Both time and space are changing. And what this can do is, one, freeze one of those variables, or in other words, pretend as if the camera didn't move. Or, two, pretend as if time didn't move. Or, three, generate new views of the scene while movement takes place. My favorite is that we can pretend to zoom in, and even better, zoom out, even if the recorded video looked like this. So how does this compare to previous methods? There are plenty of NeRF variants around. Is this really any good? Let's find out together. This is the original NeRF. We already know about this, and we are not surprised in the slightest that it's not so great on dynamic scenes with a lot of movement. However, what I am surprised by is that all of these previous techniques are from 2020 and all of them struggle with these cases. These comparisons are not against some ancient technology from 1985. No, no. All of them are from the same year. For instance, this previous work is called Consistent Video Depth Estimation and it is from August 2020. We showcased it in this series and marveled at all of these amazing augmented reality applications that it offered.
The snowing example here was one of my favorites. And today's paper appeared just three months later in November 2020. And the authors still took the time and effort to compare against this work from just three months ago. That is fantastic. As you see, this previous method kind of works on this dog, but the lack of information in some regions is quite apparent. This is still maybe usable, but as soon as we transition into a more dynamic example, what do we get? Well, pandemonium. This is true for all previous methods. I cannot imagine that the new method from just a few months later could deal with this difficult case, and look at that. So much better. It is still not perfect. You see that we have lost some detail, but witnessing this kind of progress in just a few months is truly a sight to behold. It really consistently outperforms all of these techniques from the same year. What a time to be alive. If you, like me, find yourself yearning for more quantitative comparisons, the numbers also show that the two variants of the new proposed technique indeed outpace the competition. And it can even do one more thing. Previous video stabilization techniques were good at taking a shaky input video and creating a smoother output. However, these results often came at the cost of a great deal of cropping. Not this new work. Look at how good it is at stabilization, and it does not have to crop all this data. Praise the papers. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000 and V100 instances and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
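For the technically curious, here is a minimal sketch of the core recipe that all of these NeRF variants share: a small neural network maps a 3D position and viewing direction to a density and a color, and an image is formed by volume rendering along each camera ray. This toy uses untrained random weights, no positional encoding and no training loop, so it only illustrates the idea, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny random MLP standing in for the trained radiance field:
# input  = 3D position + 3D view direction (6 numbers)
# output = density sigma (1 number) + color rgb (3 numbers)
W1, b1 = rng.normal(size=(6, 64)), np.zeros(64)
W2, b2 = rng.normal(size=(64, 4)), np.zeros(4)

def radiance_field(x, d):
    h = np.maximum(np.concatenate([x, d]) @ W1 + b1, 0.0)   # ReLU layer
    out = h @ W2 + b2
    sigma = np.log1p(np.exp(out[0]))                         # softplus density
    rgb = 1.0 / (1.0 + np.exp(-out[1:]))                     # sigmoid color
    return sigma, rgb

def render_ray(origin, direction, t_near=0.0, t_far=4.0, n_samples=64):
    """Classic volume rendering: alpha-composite samples along the ray."""
    ts = np.linspace(t_near, t_far, n_samples)
    delta = ts[1] - ts[0]
    color, transmittance = np.zeros(3), 1.0
    for t in ts:
        sigma, rgb = radiance_field(origin + t * direction, direction)
        alpha = 1.0 - np.exp(-sigma * delta)     # how much this sample absorbs
        color += transmittance * alpha * rgb     # add its contribution
        transmittance *= 1.0 - alpha             # light left for later samples
    return color

print(render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0])))
```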
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. This paper will not have the visual fireworks that you see in many of our videos. Oftentimes you get ice cream for the eyes, but today you'll get ice cream for the mind. And when I read this paper, I almost fell off the chair, and I think this work teaches us important lessons and I hope you will appreciate them too. So with that, let's talk about AIs dealing with text. This research field is improving at an incredible pace. For instance, four years ago, in 2017, scientists at OpenAI embarked on an AI project where they wanted to show a neural network a bunch of Amazon product reviews and wanted to teach it to be able to generate new ones, or continue a review when given one. Upon closer inspection, they noticed that the neural network had built up a knowledge of not only language, but also learned that it needs to create a state-of-the-art sentiment detector as well. This means that the AI recognized that in order to be able to continue a review, it needs to be able to understand English and efficiently detect whether the review seems positive or negative. This new work is about text summarization, and it really is something else. If you read Reddit, the popular online discussion website, and encounter a longer post, you may find a short summary, a TL;DR of the same post written by a fellow human. This is good for not only the other readers who are in a hurry, but it is less obvious that it is also good for something else. And now, hold on to your papers, because these summaries also provide fertile grounds for a learning algorithm to read a piece of long text and its short summary and learn how the two relate to each other. This means that it can be used as training data and can be fed to a learning algorithm. Yum! And the point is that if we give enough of these pairs to these learning algorithms, they will learn to summarize other Reddit posts. So, let's see how well it performs. First, this method learned on about 100,000 well-curated Reddit posts and was also tested on other posts that it hadn't seen before. It was asked to summarize this post from the relationship advice subreddit, and let's see how well it did. If you feel like reading the text, you can pause the video here, or if you feel like embracing the TL;DR spirit, just carry on and look at these two summarizations. One of these is written by a human and the other one by this new summarization technique. Do you know which is which? Please stop the video and let me know in the comments below. Thank you. So this was written by a human and this by the new AI. And while, of course, this is subjective, I would say that the AI-written one feels at the very least as good as the human summary, and I can't wait to have a look at the more principled evaluation in the paper. Let's see. The higher we go here, the higher the probability of a human favoring the AI-written summary to a human-written one. And we have smaller AI models on the left and bigger ones to the right. This is the 50% reference line. Below it, people tend to favor the human's version, and if it can get above the 50% line, the AI does a better job than the human-written TL;DRs in the dataset. Here are two proposed models. This one significantly underperforms, and this other one is a better match. However, whoa, look at that.
The authors also propose a human feedback model that, even for the smallest model size, handily outperforms the human-written TL;DRs, and as we grow the AI model, it gets even better than that. Now that's incredible, and this is when I almost fell off the chair when reading this paper. But we are not done yet, not even close. Don't forget, this AI was trained on Reddit and was also tested on Reddit. So our next question is, of course, can it do anything else? How general is the knowledge that it gained? What if we give it a full news article from somewhere else, outside of Reddit? Let's see how it performs. Hmm, of course this is also subjective, but I would say both are quite good. The human-written summary provides a little more information, while the AI-written one captures the essence of the article and does it very concisely. Great job. So let's see the same graph for summarizing these articles outside of Reddit. I don't expect the AI to perform as well as with the Reddit posts, as it is outside its comfort zone, but my goodness, this still performs nearly as well as humans. That means that it indeed derived general knowledge from a really narrow training set, which is absolutely amazing. Now, ironically, you see this lead-3 technique dominating both the humans and the AI. What could that be? Some unpublished superintelligent technique? Well, I will have to disappoint. This is not a super sophisticated technique but a dead simple one. So simple that it just takes the first three sentences of the article, which humans seem to prefer a great deal. But note that this simple lead-3 technique only works for a narrow domain, while the AI has learned the English language, probably knows about sentiment, and a lot of other things that can be used elsewhere. And now, the two most impressive things from the paper, in my opinion. One, this is not a plain supervised neural network, but a system trained with a reinforcement learning algorithm that learns from human feedback. A similar technique has been used by DeepMind and other research labs to play video games or control drones, and it is really cool to see them excel in text summarization too. Two, it learned from humans, but derived so much knowledge from these scores that over time it outperformed its own teacher. And the teacher here is not human beings in general, but people who write TL;DRs alongside their posts on Reddit. That truly feels like something straight out of a science fiction movie. What a time to be alive. Now, of course, not even this technique is perfect. This human-versus-AI preference thing is just one way of measuring the quality of the summary. There are more sophisticated methods that involve coverage, coherence, accuracy and more. In some of these measurements the AI does not perform as well. But just imagine what this will be able to do two more papers down the line. This episode has been supported by weights and biases. In this post they show you how OpenAI's prestigious robotics team uses their tool to teach a robot hand to dexterously manipulate a Rubik's cube. During my PhD studies I trained a ton of neural networks which were used in our experiments. However, over time there was just too much data in our repositories, and what I am looking for is not data but insight. And that's exactly how weights and biases helps you by organizing your experiments. It is used by more than 200 companies and research institutions including OpenAI, Toyota Research, GitHub and more. And get this, weights and biases is free for all individuals, academics and open source projects.
Make sure to visit them through wnb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to weights and biases for their long standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
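To make the learning-from-human-feedback idea above a bit more concrete, here is a tiny sketch of its key ingredient: a reward model trained on pairs of summaries where a human picked the one they preferred. Everything below is a stand-in: the summaries are random feature vectors, the "human" is simulated, and the reward model is linear instead of a large language model.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 8
true_w = rng.normal(size=dim)   # hidden "human taste", used only to simulate preferences
w = np.zeros(dim)               # the reward model's parameters, learned from comparisons

def reward(features, params):
    return features @ params

lr = 0.1
for step in range(2000):
    a, b = rng.normal(size=dim), rng.normal(size=dim)   # two candidate summaries
    preferred, other = (a, b) if reward(a, true_w) > reward(b, true_w) else (b, a)
    # Bradley-Terry style loss: -log sigmoid(r(preferred) - r(other))
    margin = reward(preferred, w) - reward(other, w)
    p = 1.0 / (1.0 + np.exp(-margin))
    grad = -(1.0 - p) * (preferred - other)             # gradient of the loss w.r.t. w
    w -= lr * grad

# The learned reward should roughly agree with fresh simulated preferences.
agree = np.mean([reward(a, w) > reward(b, w)
                 if reward(a, true_w) > reward(b, true_w)
                 else reward(b, w) > reward(a, w)
                 for a, b in (rng.normal(size=(2, dim)) for _ in range(500))])
print(f"agreement with held-out preferences: {agree:.2f}")
```

Once such a reward model exists, a reinforcement learning algorithm can be trained against it, which is how the system can eventually outdo its own teachers.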
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Between 2013 and 2015, DeepMind worked on an incredible learning algorithm by the name deep reinforcement learning. This technique looked at the pixels of the game, was given a controller, and played much like a human would, with the exception that it learned to play some Atari games on a superhuman level. I tried to train it a few years ago and would like to invite you for a marvelous journey to see what happened. When it starts learning to play an old game, Atari Breakout, at first the algorithm loses all of its lives without any signs of intelligent action. If we wait a bit, it becomes better at playing the game, roughly matching the skill level of an adept player. But here's the catch: if we wait for longer, we get something absolutely spectacular. Over time, it learns to play like a pro and finds out that the best way to win the game is digging a tunnel through the bricks and hitting them from behind. This technique is a combination of a neural network that processes the visual data that we see on the screen and a reinforcement learner that comes up with the gameplay-related decisions. This is an amazing algorithm, a true breakthrough in AI research. A key point in this work was that the problem formulation here enabled us to measure our progress easily. We hit one brick, we get some points, so do a lot of that. Lose a few lives, the game ends, so don't do that. Easy enough. But there are other exploration-based games like Montezuma's Revenge or Pitfall that it was not good at. And man, these games are a nightmare for any AI because there is no score, or at the very least, it's hard to define how well we are doing. Because there are no scores, it is hard to motivate an AI to do anything at all other than just wander around aimlessly. If no one tells us whether we are doing well or not, which way do we go? Explore this space, or go to the next one? How do we solve all this? And with that, let's discuss the state of play in AIs playing difficult exploration-based computer games. And I think you will love to see how far we have come since. First, there is a previous line of work that infused these agents with a very human-like property: curiosity. That agent was able to do much, much better at these games and then got addicted to the TV. But that's a different story. Note that the TV problem has been remedied since. And this new method attempts to solve hard exploration games by watching YouTube videos of humans playing the game and learning from that. As you see, it just rips through these levels in Montezuma's Revenge and other games too. So I wonder, how does all this magic happen? How did this agent learn to explore? Well, it has three things going for it that really make this work. One, the skeptical scholar would say that all this does is just copy-paste what it saw from the human player. Also, imitation learning is not new, which is a point that we will address in a moment. So, why bother with this? Now, hold on to your papers and observe as it seems noticeably less efficient than the human teacher was. Until we realize that this is not the human player and this is not the AI, but the other way around. Look, it was so observant and took away so much from the human demonstrations that in the end, it became even more efficient than its human teacher. Whoa! Absolutely amazing. And while we are here, I would like to dissect this copy-paste argument.
You see, it has an understanding of the game and does not just copy the human demonstrator. But even if it just copied what it saw, it would not be so easy, because the AI only sees images, and it has to translate how the images change in response to us pressing buttons on the controller. We might also encounter the same level, but at a different time, and we have to understand how to vanquish an opponent and how to perform that. Two, nobody hooked the agent into the game's internal information, which is huge. This means that it doesn't know what buttons are pressed on the controller, no internal numbers or game states are given to it, and most importantly, it is also not given the score of the game. We discussed how difficult this makes everything. Unfortunately, this means that there is no easy way out. It really has to understand what it sees and mine out the relevant information from each of these videos. And as you see, it does that with flying colors. Loving it. And three, it can handle the domain gap. Previous imitation learning methods did not deal with that too well. So, what does that mean? Let's look at this latent space together and find out. This is what a latent space looks like if we just embed the pixels that we see in the videos. Don't worry, I'll tell you in a moment what that is. Here, the clusters are nicely separated from each other, so that's probably good, right? Well, in this problem, not so much. The latent space means a place where similar kinds of data are meant to be close to each other. These are snippets of the demonstration videos that the clusters relate to. Let's test that together. Do you think these images are similar? Yes, most of us humans would say that these are quite similar. In fact, they are nearly the same. So, is this a good latent space embedding? No, not in the slightest. This data is similar. Therefore, these should be close to each other, but this previous technique did not recognize that, because these images have slightly different colors and aspect ratios. This one has a text overlay, but we all understand that despite all that, we are looking at the same game through different windows. So, does the new technique recognize that? Oh, yes, beautiful. Praise the papers. Similar game states are now close to each other. We can align them properly and therefore we can learn more easily from them. This is one of the reasons why it can play so well. So, there you go. These new AI agents can look at how we perform complex exploration games and learn so well from us that in the end they do even better than we do. And now, all that is left is to get them to write some amazing papers for us, or, you know, Two Minute Papers episodes. What a time to be alive. This episode has been supported by weights and biases. In this post, they show you how to use their tool to check and visualize what your neural network is learning and, even more importantly, a case study on how to find bugs in your system and fix them. Weights and biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that weights and biases is free for all individuals, academics and open source projects. It really is as good as it gets. Make sure to visit them through wnb.com slash papers or just click the link in the video description and you can get a free demo today.
Our thanks to weights and biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
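If you would like to see the core trick of this episode in code, here is a minimal sketch of one way to give an agent a learning signal in a game with no score, in the spirit of the demonstration-following idea above: reward it for reaching checkpoints that a demonstrator visited. The corridor world, the tabular Q-learning, and the hand-made checkpoints are all stand-ins for the real system's deep networks and learned video embeddings.

```python
import numpy as np

rng = np.random.default_rng(2)
n_states, actions = 20, (-1, +1)              # a toy corridor; move left or right
demo_checkpoints = {5, 10, 15, 19}            # states the human demonstrator visited
Q = np.zeros((n_states, len(actions)))
alpha, gamma, eps = 0.2, 0.95, 0.1

for episode in range(500):
    s, visited = 0, set()
    for step in range(100):
        # Epsilon-greedy action selection.
        a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next = int(np.clip(s + actions[a], 0, n_states - 1))
        # No game score at all -- only an imitation bonus for newly reached checkpoints.
        r = 1.0 if (s_next in demo_checkpoints and s_next not in visited) else 0.0
        visited.add(s_next)
        # Standard Q-learning update.
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

print("greedy policy (1 = walk right):", np.argmax(Q, axis=1))
```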
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to explore a paper that improves on the incredible StyleGAN2. What is that? StyleGAN2 is a neural network-based learning algorithm that is not only capable of creating these eye-poppingly detailed images of human beings that don't even exist, but it also improved on its previous version in a number of different ways. For instance, with the original StyleGAN method, we could exert some artistic control over these images. However, look, you see how this part of the teeth and eyes are pinned to a particular location and the algorithm just refuses to let it go, sometimes to the detriment of its surroundings. The improved StyleGAN2 method addressed this problem; you can see the results here. Teeth and eyes are now allowed to float around freely, and perhaps this is the only place on the internet where we can say that and be happy about it. It could also morph two images together, and it could do it not only for human faces, but for cars, buildings, horses and more. And get this, this paper was published in December 2019, and since then, it has been used in a number of absolutely incredible applications and follow-up works. Let's look at three of them. One, the first question I usually hear when I talk about an amazing paper like this is, okay, great, but when do I get to use this? And the answer is, right now, because it is implemented in Photoshop in a feature that is called Neural Filters. Two, artistic control over these images has improved so much that now we can pin down a few intuitive parameters and change them with minimal changes to other parts of the image. For instance, it could grow Elon Musk a majestic beard, and Elon Musk was not the only person who got an algorithmic beard. I hope you know what's coming. Yes, I got one too. Let me know in the comments whose beard you liked better. Three, a nice follow-up paper that could take a photo of Abraham Lincoln and other historic figures and restore their images as if we were time travelers and took these photos with a more modern camera. The best part here was that it leveraged the superb morphing capabilities of StyleGAN2 and took an image of their sibling, a person who has somewhat similar proportions to the target subject, and morphed them into a modern image of this historic figure. This was brilliant, because restoring images is hard, but with StyleGAN2, morphing is now easy, so the authors decided to trade a difficult problem for an easier one. And the results speak for themselves. Of course, we cannot know for sure if this is what these historic figures really looked like, but for now it makes one heck of a thought experiment. And now let's marvel at these beautiful results with the new method that goes by the name StyleGAN2-ADA. While we look through these results, all of which were generated with the new method, here are three things that it does better. One, it often works just as well as StyleGAN2, but requires 10 times fewer images for training. This means that now it can create these beautiful images, and this can be done by training a set of neural networks from less than 10,000 images at a time. Whoa! That is not much at all. Two, it creates better quality results. The baseline here is the original StyleGAN2; the numbers are subject to minimization and are a measure of the quality of these images. As you see from the bolded numbers, the new method not only beats the baseline method substantially, but it does it across the board.
That is a rare sight indeed. And three, we can train this method faster, it generates these images faster, and in the meantime it also consumes less memory, which is usually in short supply on our graphics cards. Now, we noted that the new version of the method is called StyleGAN2-ADA. What is ADA? This part means adaptive discriminator augmentation. What does that mean exactly? This means that the new method endeavors to squeeze as much information out of these training datasets as it can. Data augmentation is not new, it has been done for many years now, and essentially this means that we rotate, colorize, or even corrupt these images during the training process. The key here is that with this, we are artificially increasing the number of training samples the neural network sees. The difference here is that they used a greater set of augmentations, and the adaptive part means that these augmentations are tailored more to the dataset at hand. And now comes the best part, hold on to your papers and let's look at the timeline here. StyleGAN2 appeared in December 2019, and StyleGAN2-ADA, this method, came out just half a year later. Such immense progress in just six months of time. The pace of progress in machine learning research is absolutely stunning these days. Imagine what we will be able to do with these techniques just a couple more years down the line. What a time to be alive. But this paper also teaches a very important lesson that I would like to show you. Have a look at this table that shows the energy expenditure for this project for transparency, but it also tells us the number of experiments that were required to finish such an amazing paper. And that is more than 3,300 experiments, 255 of which were wasted due to technical problems. In the foreword of my PhD thesis, I wrote the following quote: research is the study of failure. More precisely, research is the study of obtaining new knowledge through failure. A bad researcher fails 100% of the time, while a good one only fails 99% of the time. Hence, what you see written here, and in most papers, is only 1% of the work that has been done. I would like to thank Felícia, my wife, for providing motivation, shielding me from distractions, and bringing sunshine to my life to endure through many of these failures. This paper is a great testament to show how difficult the life of a researcher is. How many people give up their dreams after being rejected once, or maybe two times, 10 times? Most people give up after 10 tries, and just imagine having a thousand failed experiments and still not even being close to publishing a paper yet. And with a little more effort, this amazing work came out of it. Failing doesn't mean losing, not in the slightest. Huge congratulations to the authors for their endurance and for this amazing work, and I think this also shows that these weights and biases experiment-tracking tools are invaluable, because it is next to impossible to remember what went wrong with each of them, and what should be fixed. What you see here is a report on this exact paper we have talked about, which was made by weights and biases. I put a link to it in the description. Make sure to have a look, I think it helps you understand this paper better. If you work with learning algorithms on a regular basis, make sure to check out weights and biases.
Their system is designed to help you organize your experiments, and it is so good it could shave off weeks or even months of work from your projects, and is completely free for all individuals, academics, and open source projects. This really is as good as it gets, and it is hardly a surprise that they are now used by over 200 companies and research institutions. Make sure to visit them through wnb.com, slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to weights and biases for their longstanding support, and for helping us make better videos for you. Thanks for watching, and for your generous support, and I'll see you next time.
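To make the "adaptive" part of adaptive discriminator augmentation a bit more concrete, here is a minimal sketch of the controller idea described above: a feedback loop that raises or lowers the augmentation probability depending on how strongly the discriminator appears to be overfitting. The overfitting signal is simulated here; in the real method it is estimated from the discriminator's outputs on real images during training.

```python
import random

p = 0.0                 # probability of augmenting each training image
target = 0.6            # desired value of the overfitting heuristic r_t
step_size = 0.01        # how much p may change per adjustment

def measure_overfitting(p):
    # Stand-in signal: more augmentation -> less overfitting (plus noise).
    return max(0.0, 0.9 - 0.5 * p + random.uniform(-0.05, 0.05))

for it in range(2000):
    r_t = measure_overfitting(p)
    # If the discriminator looks too confident on real data, augment more;
    # otherwise, ease off so augmentations don't leak into generated images.
    p += step_size if r_t > target else -step_size
    p = min(max(p, 0.0), 1.0)       # keep p a valid probability

print(f"augmentation probability settled around p = {p:.2f}")
```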
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. This paper is really something else. Scientists at Nvidia came up with an absolutely insane idea for video conferencing. Their idea is not to do what everyone else is doing, which is transmitting our video to the person on the other end. No, no, of course not. That would be too easy. What they do in this work is take only the first image from this video and they throw away the entire video afterwards. But before that, the method stores a tiny bit of information from it, which is how our head is moving over time and how our expressions change. That is an absolutely outrageous idea, and of course we like those around here. So, does this work? Well, let's have a look. This is the input video. Note that this is not transmitted, only the first image and some additional information, and the rest of this video is discarded. And hold on to your papers, because this is the output of the algorithm compared to the input video. No, this is not some kind of misunderstanding. Nobody has copy-pasted the results there. This is a near-perfect reconstruction of the input, except that the amount of information we need to transmit through the network is significantly less than with previous compression techniques. How much less? Well, you know what's coming, so let's try it out. Here is the output of the new technique, and here is the comparison against H.264, a powerful and commonly used video compression standard. Well, to our disappointment, the two seem close. The new technique appears better, especially around the glasses, but the rest is similar. And if you have been holding on to your paper so far, now squeeze that paper, because this is not a reasonable comparison, and that is because the previous method was allowed to transmit 6 to 12 times more information. Look, as we further decrease the data allowance of the previous method, it can still transmit more than twice as much information, and at this point there is no contest. This bitrate would be unusable for any kind of video conferencing, while the new method uses less than half as much information and still transmits a sharp and perfectly fine video. Overall, the authors report that their new method is 10 times more efficient. That is unreal. This is an excellent video reconstruction technique, that much is clear. And if it only did that, it would be a great paper. But this is not a great paper. This is an absolutely amazing paper, so it does even more. Much, much more. For instance, it can also rotate our head and make a frontal video, fix potential framing issues by translating our head, and transfer all of our gestures to a new model. And it is also evaluated well, so all of these features are tested in isolation. So, look at these two previous methods trying to frontalize the input video. One would think that it's not even possible to perform properly, given how much these techniques are struggling with the task, until we look at the new method. My goodness. There is some jumpiness in the neck movement in the output video here, and some warping issues here, but otherwise, very impressive results. Now, if you have been holding onto your paper so far, squeeze that paper, because these previous methods are not some ancient papers that were published a long time ago. Not at all. Both of them were published within the same year as the new paper. How amazing is that? Wow.
I really liked this page from the paper, which showcases both the images and the mathematical measurements against previous methods side by side. There are many ways to measure how close two videos are to each other. The up and down arrows tell us whether the given quality metric is subject to minimization or maximization. For instance, pixel-wise errors are typically minimized, so lesser is better, but we are to maximize the peak signal to noise ratio. And the cool thing is that none of this matters too much as soon as we insert the new technique, which really outpaces all of these. And we are still not done yet. So, we said that the technique takes the first image, reads the evolution of expressions and the head pose from the input video, and then it discards the entirety of the video, save for the first image. The cool thing about this was that we could pretend to rotate the head pose information, and the result is that the head appears rotated in the output image. That was great. But what if we take the source image from someone and take this data, the driving key point sequence from someone else? Well, what we get is motion transfer. Look, we only need one image of the target person, and we can transfer all of our gestures to them in a way that is significantly better than most previous methods. Now, of course, not even this technique is perfect. It still struggles a great deal in the presence of occluder objects, but still, just the fact that this is possible feels like something straight out of a science fiction movie. What you see here is a report of this exact paper we have talked about, which was made by weights and biases. I put a link to it in the description, make sure to have a look, I think it helps you understand this paper better. During my PhD studies, I trained a ton of neural networks which were used in our experiments. However, over time, there was just too much data in our repositories, and what I am looking for is not data, but inside. And that's exactly how weights and biases helps you by organizing your experiments. It is used by more than 200 companies and research institutions, including OpenAI, Toyota Research, GitHub, and more. And get this, weight and biases is free for all individuals, academics, and open source projects. Make sure to visit them through wnba.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to weights and biases for their longstanding support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
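Here is a rough back-of-the-envelope sketch of why sending one image plus a stream of keypoints can beat sending compressed video, as described above. All of the numbers below are made-up but plausible placeholders, not figures from the paper.

```python
# Compare a "keyframe + keypoints" stream against a conventional video stream.
frames_per_second = 30
call_length_s = 60

keyframe_bytes = 40_000                 # one compressed reference image, sent once
keypoints = 10                          # assumed number of tracked facial keypoints
bytes_per_keypoint = 4 * (2 + 4)        # 2 coords + a 2x2 Jacobian, 4-byte floats
per_frame_payload = keypoints * bytes_per_keypoint

keypoint_stream = keyframe_bytes + frames_per_second * call_length_s * per_frame_payload
video_stream_h264 = 1_000_000 // 8 * call_length_s   # ~1 Mbit/s, a typical video-call bitrate

print(f"keypoint stream : {keypoint_stream / 1e6:.2f} MB per minute")
print(f"H.264 stream    : {video_stream_h264 / 1e6:.2f} MB per minute")
print(f"ratio           : {video_stream_h264 / keypoint_stream:.1f}x")
```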
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to conquer some absolutely insane fluid and smoke simulations. A common property of these simulation programs is that they subdivide the simulation domain into a grid, and they compute important quantities like velocity and pressure in these grid points. Normally, a regular grid looks something like this, but this crazy new technique throws away the idea of using this as a grid and uses this instead. This is called an adaptive staggered tilted grid, an AST grid in short. So, what does that really mean? The tilted part means that cells can be rotated by 45 degrees like this, and interestingly, they typically appear only where needed, I'll show you in a moment. The adaptive part means that the size of the grid cells is not fixed and can be all over the place. And even better, this concept can be easily generalized to 3D grids as well. Now, when I first read this paper, two things came to my mind: one, that is an insane idea, I kinda like it, and two, it cannot possibly work. It turns out only one of these is true. And I was also wondering, why? Why do all this? And the answer is, because this way we get better fluid and smoke simulations. Oh yeah, let's demonstrate it through 4 beautiful experiments. Experiment number 1, Kármán vortex streets. We noted that the tilted grid points only appear where they are needed, and these are places where there is a great deal of vorticity. Let's test that. This phenomenon showcases repeated vortex patterns, and the algorithm is hard at work here. How do we know? Well, of course, we don't know that yet. So, let's look under the hood together and see what is going on. Oh wow, look at that. The algorithm knows where the vorticity is, and as a result, these tilted cells are flowing through the simulation beautifully. Experiment number 2, smoke plumes and porous nets. This technique refines the grids with these tilted grid cells in areas where there is a great deal of turbulence. And wait a second, what is this? The net is also covered with tilted cells. Why is that? The reason for this is that the tilted cells not only cover turbulent regions, but other regions of interest as well. In this case, it enables us to capture this narrow flow around the obstacle. Without the AST grid, some of these smoke plumes wouldn't make it through the net. Experiment number 3, the boat ride. Note that the surface of the pool is completely covered with the new tilted cells, making sure that the wake of the boat is as detailed as it can possibly be. But, in the meantime, the algorithm is not wasteful. Look, the volume itself is free of them. And now, hold onto your papers for experiment number 4, thin water sheets. You can see the final simulation here, and if we look under the hood, my goodness, just look at how much work this algorithm is doing. And what is even better, it only does so where it is really needed, it doesn't do any extra work in these regions. I am so far very impressed with this technique. We saw that it does a ton of work for us, increases the detail in our simulations, and helps things flow through where they should really flow through. Now, with that said, there is only one question left. What does this cost us? How much more expensive is this AST grid simulation than a regular grid? Plus 100% computation time? Plus 50%? How much is it worth to you? Please stop the video and leave a comment with your guess. I'll wait. Thank you. The answer is none of those.
It costs almost nothing and typically adds an additional 1% of computation time. And in return for that almost nothing, we get all of these beautiful fluid and smoke simulations. What a time to be alive. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
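For a more concrete feel of the refinement rule described above, here is a minimal sketch: estimate the vorticity of a 2D velocity field and mark the cells where it is large as candidates for the extra tilted degrees of freedom. The swirling velocity field and the threshold are made up; the paper's actual criterion and data structures are more involved.

```python
import numpy as np

n = 64
ys, xs = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n), indexing="ij")

# A single vortex centered in the domain: u = -y, v = x, damped away from the center.
damp = np.exp(-4.0 * (xs**2 + ys**2))
u, v = -ys * damp, xs * damp

h = 2.0 / (n - 1)                                   # grid spacing
dv_dx = np.gradient(v, h, axis=1)
du_dy = np.gradient(u, h, axis=0)
vorticity = dv_dx - du_dy                           # 2D curl of the velocity field

# Mark the cells whose vorticity magnitude is within the top half of the range.
tilted = np.abs(vorticity) > 0.5 * np.abs(vorticity).max()
print(f"{tilted.mean() * 100:.1f}% of cells get tilted degrees of freedom")
```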
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Here you see people that don't exist. How can that be? Well, they don't exist because these images were created with a neural network-based learning method by the name StyleGAN2, which can not only create eye-poppingly detailed looking images, but it can also fuse these people together, or generate cars, churches, and of course, cats. An even cooler thing is that many of these techniques allow us to exert artistic control over these images. So, how does that happen? How do we control a neural network? It happens through exploring latent spaces. And what is that? A latent space is a made-up place where we are trying to organize data in a way that similar things are close to each other. What you see here is a 2D latent space for generating different fonts. It is hard to explain why these fonts are similar, but most of us would agree that they indeed share some common properties. The cool thing here is that we can explore this latent space with our cursor and generate all kinds of new fonts. You can try this work in your browser, the link is available in the video description. And luckily, we can build a latent space not only for fonts, but for nearly anything. I am a light transport researcher by trade, so in this earlier paper we were interested in generating hundreds of variants of a material model to populate this scene. In this latent space, we can conjure up all of these really cool digital material models. A link to this work is also available in the video description. So, let's recap. One of the cool things we can do with latent spaces is generate new images that are somewhat similar. But, there is a problem. As we go in nearly any direction, not just one thing, but many things about the image change. For instance, as we explore the space of fonts here, not just the width of the font changes, everything changes. Or, if we explore materials here, not just the shininess or the colors of the material change, everything changes. This is great to explore if we can do it in real time. If I change this parameter, not just the car shape changes, the foreground changes, the background changes, again, everything changes. So, these are nice and intuitive controls, but not interpretable controls. Can we get that somehow? The answer is yes, not everything must change. This previous technique is based on StyleGAN2 and is called StyleFlow, and it can take an input photo of a test subject and edit a number of meaningful parameters. Age, expression, lighting, pose, you name it. For instance, it could also grow Elon Musk a majestic beard. And that's not all, because Elon Musk is not the only person who got a beard. Look, this is me here, after I got locked up for dropping my papers. And I spent so long in here that I grew a beard. Or, I mean, this neural network gave me one. And since the punishment for dropping your papers is not short, in fact it is quite long, this happened. Ouch! I hereby promise to never drop my papers ever again. You will also have to hold on to yours too, so stay alert. So, apparently interpretable controls already exist. And I wonder how far we can push this concept? Beard or no beard is great, but what about cars? What about paintings? Well, this new technique found a way to navigate these latent spaces and introduces three amazing new examples of interpretable controls that I haven't seen anywhere else yet. One, it can change the car geometry.
We can change the sportiness of the car and even ask the design to be more or less boxy. Note that there is some additional damage here, but we can counteract that by changing the foreground to our taste, for instance, add some grass in there. Two, it can repaint paintings. We can change the roughness of the brush strokes, simplify the style, or even rotate the model. This way we can create or adjust without having to even touch a paintbrush. Three, facial expressions. First, when I started reading this paper, I was a little suspicious. I have seen these controls before, so I looked at it like this. But, as I saw how well it did, I went more like this. And this paper can do way more. It can add lipstick, change the shape of the mouth or the eyes, and do all this with very little collateral damage to the remainder of the image. Loving it. It can also find and blur the background, similarly to those amazing portrait mode photos that newer smartphones can do. And of course, it can also do the usual suspects: adjusting the age, hairstyle, or growing a beard. So with that, there we go. Now, with the power of neural network-based learning methods, we can create new car designs, repaint paintings without ever touching a paintbrush, and give someone a shave. It truly feels like we are living in a science fiction world. What a time to be alive! This episode has been supported by weights and biases. In this post, they show you how to use their tool to visualize confusion matrices and find out where your neural network made mistakes and what exactly those mistakes were. Weights and biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that weights and biases is free for all individuals, academics, and open source projects. It really is as good as it gets. Make sure to visit them through wnb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to weights and biases for their long-standing support, and for helping us make better videos for you. Thanks for watching, and for your generous support, and I'll see you next time.
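To make the latent-space editing idea above a bit more concrete, here is a minimal sketch of what an interpretable edit boils down to once a disentangled direction is known: adding a scaled direction vector to the latent code. Both the generator and the "beard direction" below are placeholders, not the real trained model or a real learned direction.

```python
import numpy as np

rng = np.random.default_rng(3)
latent_dim = 512

def generate(w):
    """Placeholder for a trained generator: latent vector -> image."""
    return np.tanh(w[:3])          # pretend the "image" is just 3 numbers

w = rng.normal(size=latent_dim)                 # latent code of some face
beard_direction = rng.normal(size=latent_dim)   # would be learned, not random
beard_direction /= np.linalg.norm(beard_direction)

# An interpretable edit: move only along one attribute direction,
# leaving everything orthogonal to it untouched.
for strength in (0.0, 1.0, 2.0, 3.0):
    edited = w + strength * beard_direction
    print(f"beard strength {strength}: image = {generate(edited)}")
```

The hard part, of course, is finding directions that really change only one attribute; the vector arithmetic afterwards is the easy bit.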
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. If we study the laws of physics and program them into a computer, we can create a beautiful simulation that showcases the process of baking. And if we so desire, when we are done, we can even tear a loaf of bread apart. With this previous method, we can also smash Oreos, candy crabs, pumpkins, and much, much more. This jelly fracture scene is my long-time favorite. And this new work asks a brazen question only a proper computer graphics researcher could ask: can we make an even more extreme simulation? You might not think so, but apparently this paper promises a technique that supports even more extreme compression and deformation, and when they say that, they really mean it. And let's see what this can do through five super fun experiments. Experiment number one, squishing. As you see, this paper aligns well with the favorite pastime of a computer graphics researcher, which is, of course, destroying virtual objects in a spectacular fashion. First, we force these soft elastic virtual objects through a thin obstacle tube. Things get quite squishy here, ouch, and when they come out on the other side, their geometries can also separate properly. And watch how beautifully they regain their original shapes afterwards. Experiment number two, the tendril test. We grab a squishy ball and throw it at the wall, and here comes the cool part. This panel was made of glass, so we also get to view the whole interaction through it, and this way we can see all the squishing happening. Look, the tendrils are super detailed, and every single one remains intact and intersection-free, despite the intense compression. Outstanding. Experiment number three, the twisting test. We take a mat and keep twisting and twisting, and still going. Note that the algorithm has to compute up to half a million contact events every time it advances the time a tiny bit, and still, no self-intersections, no anomalies. This is crazy. Some of our more seasoned fellow scholars will immediately ask, okay, great, but how real is all this? Is this just good enough to fool the untrained eye, or does it really simulate what would happen in reality? Well, hold on to your papers, because here comes my favorite part in these simulation papers, and this is when we let reality be our judge and try to reproduce real-world footage with a simulation. Experiment number four, the high-speed impact test. Here is the real footage of a foam practice ball fired at a plate. And now, at the point of impact, this part of the ball has stopped, but the other side is still flying with a high velocity. So, what will be the result? A ton of compression. So, what does this simulator say about this? My goodness, just look at that. This is really accurate, loving it. This sounds all great, but do we really need this technique? The answer shall be given by experiment number five: ghosts and chains. What could that mean? Here, you see Houdini's Vellum, the industry-standard simulator for cloth, soft bodies, and a number of other kinds of simulations. It is an absolutely amazing tool, but wait a second. Look, artificial ghost forces appear even on a simple test case with 35 chain links. And I wonder if the new method can deal with these 35 chain links? The answer is a resounding yes, no ghost forces. And not only that, but it can deal with even longer chains. Let's try a hundred links. Oh yeah, now we're talking. And now, only one question remains. How long do we have to wait for all this?
All this new technique asks for is a few seconds per frame for the simpler scenes and in the order of minutes per frame for the more crazy tests out there. Praise the papers. That is a fantastic deal. And what is even more fantastic, all this is performed on your processor. So, of course, if someone can implement it in a way that it runs on the graphics card, the next paper down the line will be much, much faster. This episode has been supported by weights and biases. In this post, they show you how to use their tool to interpret the results of your neural networks. For instance, they tell you how to look if your neural network has even looked at the dog in an image before classifying it to be a dog. If you work with learning algorithms on a regular basis, make sure to check out weights and biases. Their system is designed to help you organize your experiments and it is so good it could shave off weeks or even months of work from your projects and is completely free for all individuals, academics and open source projects. This really is as good as it gets and it is hardly a surprise that they are now used by over 200 companies and research institutions. Make sure to visit them through wnb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to weights and biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Have a look at these beautiful ferrofluid simulations from a previous paper. These are fluids that have magnetic properties and thus respond to an external magnetic field, and yes, you are seeing correctly, they are even able to climb things. And the best part is that the simulator was so accurate that we could run it side by side with real-life footage and see that they run very similarly. Excellent! Now, running these simulations took a considerable amount of time. To address this, a follow-up paper appeared that showcased a surface-only formulation. What does that mean? Well, a key observation here was that for a class of ferrofluids, we don't have to compute how the magnetic forces act on the entirety of the 3D fluid domain, we only have to compute them on the surface of the model. So, what does this get us? Well, these amazing ferrofluid labyrinths and all of these ferrofluid simulations, but faster. So remember, the first work did something new, but took a very long time, and the second work improved it to make it faster and more practical. Please remember this for later in this video. And now, let's fast forward to today's paper, and this new work can also simulate ferrofluids, and not only that, but it also supports a broader range of magnetic phenomena, including rigid and deformable magnetic bodies and two-way coupling too. Oh my, that is sensational! But first, what do these terms mean exactly? Let's perform four experiments, and after you watch them, I promise that you will understand all about them. Let's look at the rigid bodies first in experiment number one. Iron box versus magnet. We are starting out slow, and now we are waiting for the attraction to kick in. And there we go. Wonderful. Experiment number two. Deformable magnetic bodies. In other words, magnetic lotus versus a moving magnet. This one is absolutely beautiful, look at how the petals here are modeled as thin, elastic sheets that dance around in the presence of a moving magnet. And if you think this is dancing, stay tuned, there will be an example with even better dance moves in a moment. And experiment number three. Two-way coupling. We noted this coupling thing earlier, so what does that mean? What coupling means is that here, the water can have an effect on the magnet, and the two-way part means that in return, the magnet can also have an effect on the water as well. This is excellent, because we don't have to think about the limitations of the simulation, we can just drop nearly anything into our simulation domain, be it a fluid, solid, magnetic or not, and we can expect that their interactions are going to be modeled properly. Outstanding. And I promised some more dancing, so here goes experiment number four, the dancing ferrofluid. I love how informative this compass is here. It is a simple object that tells us how an external magnetic field evolves over time. I love this elegant solution. Normally, we have to visualize the magnetic induction lines so we can better see why the tentacles of a magnetic octopus move, or why two ferrofluid droplets repel or attract each other. In this case, the authors opted for a much more concise and elegant solution, and I also like that the compass is not just a 2D overlay, but a properly shaded 3D object with specular reflections as well. Excellent attention to detail. This is really my kind of paper.
Now, these simulations were not run on any kind of supercomputer or a network of computers, this runs on the processor of your consumer machine at home. However, simulating even the simpler scenes takes hours. For more complex scenes, even days. And that's not all, the ferrofluid with the Yin-Yang symbol took nearly a month to compute. So is that a problem? No, no, of course not. Not in the slightest, because thanks to this paper, general magnetic simulations that were previously impossible are now possible, and don't forget, research is a process. As you saw in the example at the start of this video with the surface-only ferrofluid formulation, it may become much faster just one more paper down the line. I wanted to show you the first two papers in this video to demonstrate how quickly that can happen. And two more papers down the line, oh my, then the sky is the limit. What a time to be alive. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
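As a small aside for the technically curious, here is a minimal sketch of the physics behind the little compass visualization mentioned above: a magnetic dipole feels a torque that turns it toward the external field. The needle's inertia and damping values are made up, and this toy is of course not coupled to any fluid or elastic body the way the paper's solver is.

```python
import math

theta = 0.0                  # needle angle (radians)
omega = 0.0                  # angular velocity
field_angle = math.radians(120.0)   # direction of the external magnetic field
m_times_B = 1.0              # dipole strength times field strength
inertia, damping, dt = 0.05, 0.3, 0.01

for step in range(2000):
    # Torque from m x B in 2D, plus a damping term so the needle settles.
    torque = m_times_B * math.sin(field_angle - theta) - damping * omega
    omega += dt * torque / inertia       # semi-implicit Euler integration
    theta += dt * omega

print(f"needle settled at {math.degrees(theta):.1f} degrees (field at 120.0)")
```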
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. I am a light transport researcher by trade, and I am very happy today because we have an absolutely amazing light transport paper to enjoy. As many of you know, we write these programs that you can run on your computer to simulate millions and millions of light rays and calculate how they get absorbed or scattered off of our objects in a virtual scene. Initially, we start out with a really noisy image, and as we add more rays, the image gets clearer and clearer over time. We can also simulate sophisticated material models in these programs. A modern way of doing that is through using these material nodes. With these, we can conjure up a ton of different material models and change their physical properties to our liking. As you see, they are very expressive indeed, however, the more nodes we use, the less clear it becomes how they interact with each other. And as you see, every time we change something, we have to wait until a new image is rendered. That is very time consuming, and more importantly, we have to have some material modeling expertise to use this. This concept is very powerful. For instance, I think if you watch the Perceptilabs sponsorship spot at the end of this video, you will be very surprised to see that they also use node groups, but with theirs, you don't build material models, you can build machine learning models. What would also be really cool is if we could just give the machine a photo and it would figure out how to set up these nodes so it looks exactly like the material in the photo. So, is that possible, or is that science fiction? Well, have a look at our paper called Photorealistic Material Editing. With this technique, we can easily create these beautiful material models in a matter of seconds, even if we don't know a thing about light transport simulations. It does something that is similar to what many call differentiable rendering. Here is the workflow: we give it a bunch of images like these, which were created on this particular test scene, and it guesses what parameters to use to get these material models. Now, of course, this doesn't make any sense whatsoever, because we have produced these images ourselves, so we know exactly what parameters to use to produce this. In other words, this thing seems useless, and now comes the magic part, because we don't use these images. No, no, we load them into Photoshop and edit them to our liking and just pretend that these images were created with the light simulation program. This means that we can create a lot of quick and really poorly executed edits. For instance, the stitched specular highlight in the first example isn't very well done, and neither is the background of the gold target image in the middle. However, the key observation is that we have built a mathematical framework which makes this pretending really work. Look, in the next step our method proceeds to find a photorealistic material description that, when rendered, resembles this target image, and works well even in the presence of these poorly executed edits. So these materials are completely made up in Photoshop, and it turns out we can create photorealistic materials through these node graphs that look almost exactly the same, quite remarkable. And the whole process executes in 20 seconds. If you are one of the more curious fellow scholars out there, this paper and its source code are available in the video description.
Now, this differentiable thing has a lot of steam. For instance, there are more works on differentiable rendering. In this other work, we can take a photo of a scene, and the learning-based method turns the knobs until it finds a digital object that matches its geometry and material properties. This was a stunning piece of work from Wenzel Jakob and his group. Of course, who else? They are some of the best in the business. And we don't even need to be in the area of light transport simulations to enjoy the benefits of differentiable formulations, for instance, this is differentiable physics. So, what is that? Imagine that we have this billiard game where we would like to hit the white ball with just the right amount of force and from the right direction, such that the blue ball ends up close to the black spot. Well, this example shows that this is unlikely to happen by chance, and we have to engage in a fair amount of trial and error to make this happen. What this differentiable programming system does for us is that we can specify an end state, which is the blue ball on the black dot, and it is able to compute the required forces and angles to make this happen. Very close. So, after you look here, maybe you can now guess what's next for this differentiable technique. It starts out with a piece of simulated ink with a checkerboard pattern, and it exerts just the appropriate forces so that it forms exactly the Yin-Yang symbol shortly after. And now that we understand what differentiable techniques are capable of, we are ready to proceed to today's paper. This is a proper, fully differentiable material capture technique for real photographs. All this needs is one flash photograph of a real-world material. We have those around us in abundance, and similarly to our previous method, it sets up the material nodes for it. That is a good thing, because I don't know about you, but I do not want to touch this mess at all. Luckily, we don't have to. Look, the left is the target photo, and the right is the initial guess of the algorithm, which is not bad, but also not very close. And now, hold on to your papers and just look at how it proceeds to refine this material until it closely matches the target. And with that, we have a digital representation of these materials. We can now easily build a library of these materials and assign them to objects in our scene. And then, we run the light simulation program, and here we go. Beautiful. At this point, if we feel adventurous, we can adjust small things in the material graphs to create a digital material that is more in line with our artistic vision. That is great, because it is much easier to adjust an already existing material model than to create one from scratch. So, what are the key differences between our work from last year and this new paper? Our work made a rough initial guess and optimized the parameters afterwards. It was also chock-full of neural networks. It also created materials from a sample, but that sample was not a photograph, but a photoshopped image. That is really cool. However, this new method takes an almost arbitrary photo. Many of these we can take ourselves or even get them from the internet. Therefore, this new method is more general. It also supports 131 different material node types, which is insanity. Huge congratulations to the authors. If I were an artist, I would want to work with this right about now. What a time to be alive. So, there you go. This was quite a ride, and I hope you enjoyed it at least half as much as I did.
And if you enjoyed this episode and feel a little stranded at home, thinking that this light transport thing is pretty cool and that you would like to learn more about it: I held a master-level course on this topic at the Technical University of Vienna. Since I was always teaching it to a handful of motivated students, I thought that the teachings shouldn't only be available for the privileged few who can afford a college education; the teachings should be available for everyone. Free education for everyone, that's what I want. So, the course is available free of charge for everyone, no strings attached. Make sure to click the link in the video description to get started. We write a full light simulation program from scratch there and learn about physics, the world around us, and more. PerceptiLabs is a visual API for TensorFlow, carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers to easily install the free local version of their system today. Our thanks to PerceptiLabs for their support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is 2 Minute Papers with Dr. Karo Jornai Fahir, if we wish to create an adorable virtual monster and animate it, we first have to engage in 3D modeling. Then, if we wish to make it move and make this movement believable, we have to specify where the bones and joints are located within the model. This process is called rigging. As you see, it is quite laborious and requires expertise in this domain to pull this off. And now, imagine this process with no 3D modeling and no rigging. This is, of course, impossible. Right? Well, now you know what's coming, so hold onto your papers because here is a newer technique that indeed performs the impossible. All we need to do is grab a pencil and create a rough sketch of our character, then it will take a big breath and inflate it into a 3D model. This process was nearly 7 times faster than the classical workflow, but what matters even more, this new workflow requires zero expertise in 3D modeling and rigging. This means that with this technique, absolutely anybody can become an artist. So, we noted that these models can also be animated. Is that so? Yes, that's right. We can indeed animate these models by using these red control points, and even better, we get to specify where these points go. That's a good thing because we can make sure that the prescribed part can move around opening up the possibility of creating and animating a wide range of characters. And I would say all this can be done in a matter of minutes, but it's even better sometimes even within a minute. Whoa! This new technique does a lot of lagwork that previous methods were not able to pull off so well. For instance, it takes a little information about which part is in front or behind the model. Then, it stitches all of these strokes together and inflates are drawing into a 3D model, and it does this better than previous methods. Look. Well, okay, the new one looks a bit better where the body parts connect here, and that's it. Wait a second. Aha! Somebody didn't do their job correctly. And we went from this work to this in just two years. This progress is absolute insanity. Now, let's have a look at a full workflow from start to end. First, we draw the strokes, note that we can specify that one arm and leg is in front of the body, and the other one is behind, and bam! The 3D model is now done. Wow! That was quick! And now, add the little red control points for animation, and let the fun begin. Mr. your paper has been officially accepted. Move the feet, pin the hands, rock the body. Wait, not only that, but this paper was accepted to the C-Graph Asia conference, which is equivalent to winning an Olympic gold medal in computer graphics research, if you will. So add the little neck movement too. Oh yeah, now we're talking. With this technique, the possibilities really feel endless. We can animate humanoids, monsters, other adorable creatures, or can even make scientific illustrations come to life without any modeling and rigging expertise. Do you remember this earlier video where we could paint on a piece of 3D geometry and transfer its properties onto a 3D model? This method can be combined with that too. Yum! And in case you're wondering how quick this combination is, my goodness, very, very quick. Now, this technique is also not perfect. One of the limitations of this single view drawing workflow is that we only have limited control over the proportions in depth. The drawing occludes regions is also not that easy. 
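To give a rough intuition for the "inflation" step, here is a classic toy approach: take the drawn silhouette as a binary mask and lift each inside pixel by an amount related to its distance from the outline. This is a simplification made up for illustration only; the actual paper builds a proper 3D mesh and handles overlapping parts, which this sketch does not.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# A made-up blobby silhouette standing in for the artist's sketch.
size = 128
y, x = np.mgrid[0:size, 0:size]
mask = ((x - 64) ** 2 / 40 ** 2 + (y - 64) ** 2 / 25 ** 2) < 1.0

# Distance to the outline, then a square root for a rounded, balloon-like profile.
dist = distance_transform_edt(mask)
height = np.sqrt(dist)                # z-coordinate of the inflated surface above each pixel

print(height.max(), height[64, 64])   # the tallest point sits deep inside the silhouette
```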
Now, back to the limitations we just discussed: the authors propose possible solutions to them in the paper, so make sure to have a look in the video description. And it appears to me that with a little polishing, this may be ready to go for artistic projects right now. If you have a closer look, you will also see that this work also cites the flow paper from Mihály Csíkszentmihályi. Extra style points for that. And with that said, when can we use this? And here comes the best part: right now. The authors really put their papers where their mouth is, or in other words, the source code for this project is available. Also, there is an online demo. Woohoo! The link is available in the video description. Make sure to read the instructions before you start. So, there you go: instant 3D models with animation, without requiring 3D modeling and rigging expertise. What do you think? Let me know in the comments below. This episode has been supported by Weights & Biases. In this post, they show you how to use transformers from the Hugging Face library and how to track your model performance. During my PhD studies, I trained a ton of neural networks which were used in our experiments. However, over time, there was just too much data in our repositories, and what I am looking for is not data, but insight. And that's exactly how Weights & Biases helps you, by organizing your experiments. It is used by more than 200 companies and research institutions, including OpenAI, Toyota Research, GitHub, and more. And get this: Weights & Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karajjona Ifehir. Today we are going to travel in time. With the ascendancy of Neural Network Based Learning Techniques, this previous method enables us to take an old, old black and white movie that suffers from a lot of problems like missing data, flickering and more, give it to a Neural Network and have it restored for us. And here you can not only see how much better this restored version is, but it took it one step further. It also performed colorization. Essentially here we could produce six colorized reference images and the Neural Network uses them as our direction and propagates all this information to the remainder of the frames. So this work did restoration and colorization at the same time. This was absolutely amazing and now comes something even better. Today we have a new piece of work that performs not only restoration and colorization, but super resolution as well. What this means is that we can take an antique photo which suffers from a lot of issues. Look, these old films exaggerate wrinkles, a great deal, they even darken the lips and do funny things with red colors. For instance, subsurface scattering is also missing. This is light penetrating our skin and bouncing inside before coming out again. And the lack of this effect is why the skin looks a little plasticky here. Luckily we can simulate all these phenomena on our computers. I am a light transport researcher by trade and this is from our earlier paper with the Activision Blizzard game development company. This is the same phenomenon, a simulation without subsurface scattering and this one is with simulating this effect. Beautiful. You can find a link to this paper in the video description. So with all these problems with the antique photos, our question is what did Lincoln really look like? Well, let's try an earlier framework for restoration, colorization, and super resolution and... Well, unfortunately most of our issues still remain. Lots of exaggerated wrinkles, plasticky look, lots of detail missing. Can we do better? Well, hold on to your papers and observe the output with the new technique. Wow! The restoration indeed took place properly, brought the wrinkles down to a much more realistic level. Skin looks like skin because of subsurface scattering and the super resolution part is responsible for a lot of new detail everywhere, but especially around the lips. Outstanding. It truly feels like this photo has been refotographed with a modern camera and with that please meet time travel, refotography. And the curious thing is that all these sounds flat out impossible. Why is that? Since we don't have old and new image pairs of Lincoln and many other historic figures, the question naturally arises in the mind of the curious fellow-scaler, how do we train a neural network to perform this? And the answer is that we need to use their siblings. Now this doesn't mean that Lincoln had a long-lost sibling that we don't know about, what this means is that as the output image is fed through our neural network, we can generate a photorealistic image of someone and this someone kind of resembles the target subject and has all the details filled in. Then in the next step, we can start morphing the sibling until it starts resembling the test subject. 
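Here is a heavily simplified sketch of that underlying idea of searching a generator's latent space so that its output, after simulated degradation, matches the target photo. The tiny randomly initialized "generator" and the blur-based degradation below are stand-ins I made up; the real work relies on a pretrained StyleGAN2 and far more elaborate losses.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in generator: latent code -> tiny 16x16 grayscale "portrait".
generator = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 16 * 16), nn.Sigmoid())

def degrade(img):
    # Crude stand-in for old-photo degradation: blur by 2x2 average pooling.
    return nn.functional.avg_pool2d(img.view(1, 1, 16, 16), 2)

target_photo = degrade(torch.rand(16 * 16))   # pretend this is the antique photo

z = torch.zeros(32, requires_grad=True)       # latent code of the "sibling"
optimizer = torch.optim.Adam([z], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    loss = ((degrade(generator(z)) - target_photo) ** 2).mean()
    loss.backward()
    optimizer.step()                          # morph the sibling toward the subject

print(loss.item())
```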
With the previously existing StyleGAN2 technique, morphing is now easy to do, but restoration is hard, so essentially with this, we can skip the difficult restoration part and just do the easier morphing instead, trading a difficult problem for an easier one. Absolutely brilliant idea. And if you have been holding onto your papers so far, now squeeze that paper, because it can do even more. Age progression. Look, if we only have a few target photos of Thomas Edison throughout his life, these will be our yardsticks, and the algorithm is able to generate his aging process between these yardstick images. And the best part is that these images have different lighting, pose, and none of this is an issue for the technique. It just doesn't care, and it still works beautifully. Wow. So we saw earlier that there are other methods that attempt to do this too, at least the colorization part. Yes, we have colorization and other techniques in abundance, so how does this compare to those? It appears to outpace all of them really convincingly. The numbers from the user study and the algorithmically generated scores also favored the new technique. This is a huge leap forward. Do you have some other applications in mind for this new technique? Let me know in the comments what you would do with this or how you would like to see it improved. Now, of course, not even this technique is perfect; blurry and noisy regions can still appear here and there. And note that StyleGAN2, the basis for this algorithm, came out just a little more than a year ago. And it is amazing that we are witnessing such incredible progress in so little time. My goodness. And just imagine what the next paper down the line will bring. What a time to be alive. What you see here is a report for a previous paper that we covered in this series, which was made by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that Weights & Biases is free for all individuals, academics and open source projects. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir in 2017 scientists at OpenAI published a paper where virtual humans learned to tackle each other in a sumo competition of sorts and found out how to rock a stable stance to block others from tackling them. This was a super interesting work because it involved self-play or in other words copies of the same AI were playing against each other and the question was how do we pair them with each other to maximize their learning. They found something really remarkable when they asked the algorithm to defeat an older version of itself. If it can reliably pull that off, it will lead to a rapid and predictable learning process. This kind of curriculum-driven learning can supercharge many different kinds of AI's. For instance, this robot from a later paper is essentially blind as it only has proprioceptive sensors which means that the only thing that the robot senses is its own internal state and that's it. No cameras, no depth sensors, no light or nothing. And at first it behaves as we would expect it. Look, when we start out, the agent is very clumsy and can barely walk through a simple terrain. But as time passes, it grows to be a little more confident and with that the terrain also becomes more difficult over time in order to maximize learning. So how potent is this kind of curriculum in teaching the AI? Well, it learned a great deal in the simulation and a scientist deployed it into the real world just look at how well it traversed through this rocky mountain, stream and not even this nightmare-ish snowy descent gave it too much trouble. This new technique proposes a similar curriculum-based approach where we would teach all kinds of virtual life forms to navigate on stepping stones. The examples include a virtual human, a bipedal robot called Cassie and this sphere with toothpick legs too. The authors call it monster, so you know what? Monster it is. The fundamental question here is how do we organize the stepping stones in this virtual environment to deliver the best teaching to this AI? We can freely choose the heights and orientations of the upcoming steps and, of course, it is easier said than done. If the curriculum is too easy, no meaningful learning will take place and if it gets too difficult too quickly, well then in the better case this happens. And in the worst case, whoops! This work proposes an adaptive curriculum that constantly measures how these agents perform and creates challenges that progressively get harder, but in a way that they can be solved by the agents. It can even deal with cases where the AI already knows how to climb up and down and even deal with longer steps. But that does not mean that we are done because if we don't build the Spires right, this happens. But after learning 12 to 24 hours with this adaptive curriculum learning method, they become able to even run, deal with huge step height variations, high-step tilt variations, and let's see if they can pass the hardest exam. Look at this mess, my goodness, lots of variation in every perimeter. And yes, it works. And the key point is that the system is general enough that it can teach different body types to do the same. If there is one thing that you take home from this video, it shouldn't be that it takes from 12 to 24 hours, it should be that the system is general. Normally, if we have a new body type, we need to write a new control algorithm, but in this case, whatever the body type is, we can use the same algorithm to teach it. 
Absolutely amazing. What a time to be alive. However, I know what you're thinking. Why teach them to navigate just stepping stones? This is such a narrow application of locomotion. So why this task? A great question. And the answer is that the generality of this technique, we just talked about also means that the stepping stone navigation truly was just a stepping stone. And here it is. We can deploy these agents to a continuous terrain and expect them to lean on their stepping stone chops to navigate well here too. Another great triumph for curriculum based AI training environments. So what do you think? What would you use this technique for? Let me know in the comments or if you wish to discuss similar topics with other fellow scholars in a warm and welcoming environment, make sure to join our Discord channel. The link is available in the video description. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000 and V100 instances. And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
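As a small addendum to this episode: the adaptive curriculum idea can be summarized in a few lines. Keep an eye on how often the agent succeeds, and ratchet the difficulty up when it copes and down when it keeps failing. The evaluation function below is a made-up stand-in for actually running the physics simulation and the learning update.

```python
import random

def evaluate(skill, difficulty):
    # Stand-in for running the agent on stepping stones of a given difficulty.
    return random.random() < max(0.0, min(1.0, 1.0 - (difficulty - skill)))

skill, difficulty = 0.0, 0.1
for episode in range(2000):
    success = evaluate(skill, difficulty)
    if success:
        skill += 0.001                 # stand-in for the agent actually learning something
    # Adapt the curriculum: harder when the agent copes, easier when it keeps failing.
    difficulty += 0.01 if success else -0.01
    difficulty = max(0.05, difficulty)

print(round(skill, 2), round(difficulty, 2))   # difficulty should track the growing skill
```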
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. I got so excited by the amazing results of this paper, I will try my best to explain why, and by the end of this video, there will be a comparison that blew me away, and I hope you will appreciate it too. With the rise of neural network-based learning algorithms, we are living the advent of image generation techniques. What you see here is a set of breathtaking results created with a technique called StyleGAN2. This can generate images of humans, cars, cats, and more. As you see, the progress in machine learning-based image generation is just stunning. And don't worry for a second about the progress in text processing, because that is also similarly amazing these days. A few months ago, OpenAI published their GPT-3 model, which they unleashed to read the internet, and it learned not just our language, but much, much more. For instance, the internet also contains a lot of computer code, so it learned to generate website layouts from a written description. But that's not all, not even close: to the joy of technical PhD students around the world, it can properly typeset mathematical equations from a plain English description as well. And get this, it can also translate a complex legal text into plain language, or the other way around. And it does many of these things nearly as well as humans. So, what was the key to this work? One of the keys of GPT-3 was that it uses a neural network architecture that is called the transformer network. This really took the world by storm in the last few years, so our first question is, why transformers? One, transformer networks can typically learn on stupendously large datasets, like the whole internet, and extract a lot of information from it. That is a very good thing. And two, transformers are attention-based neural networks, which means that they are good at learning and generating long sequences of data. Okay, but how do we benefit from this? Well, when we ask OpenAI's GPT-3 to continue our sentences, it is able to look back at what we have written previously. And it looks at not just a couple of characters. No, no, it looks at up to several pages of writing backwards to make sure that it continues what we write the best way it can. That sounds amazing. But what is the lesson here? Just use transformers for everything, and off we go? Well, not quite. They are indeed good at a lot of things when it comes to text processing tasks, but they don't excel at generating high-resolution images at all. Can this be improved somehow? Well, this is what this new technique does, and much, much more. So, let's dive in and see what it can do. First, we can give it an incomplete image and ask it to finish it. Not bad, but OpenAI's Image GPT could do that too, so what else can it do? Oh boy, a lot more. And by the way, we will compare the results of this technique against Image GPT at the end of this video. Make sure not to miss that. I almost fell off the chair; you will see in a moment why. Two, it can do one of my favorites, depth-to-image generation. We give it a depth map, which is very easy to produce, and it creates a photorealistic image that corresponds to it, which is very hard. We do the easy part, the AI does the hard part. Great. And with this, we not only get a selection of these images, but since we have their depth maps, we can also rotate them around as if they were 3D objects. Nice. Three, we can also give it a map of labels, which is, again, very easy to do.
We just say, here goes the sea, put some mountains there, and the sky here, and it will create a beautiful landscape image that corresponds to that. I can't wait to see what amazing artists all over the world will be able to get out of these techniques, and these results are already breathtaking, but research is a process, and just imagine how good they will become two more papers down the line. My goodness. Four, it can also perform super resolution. This is the CSI thing where in goes a blurry image and out comes a finer, more detailed version of it. Witchcraft! And finally, five, we can give it a pose, and it generates humans that take these poses. Now, the important thing here is that it can supercharge transformer networks to do these things at the same time, with just one technique. So, how does it compare to OpenAI's image completion technique? Well, remember, that technique was beyond amazing and set a really high bar. So, let's have a look together. They were both given the upper half of this image and had to fill in the lower half. Remember, as we just learned, transformers are not great at high-resolution image synthesis. So, here, for OpenAI's Image GPT, we expect heavily pixelated images, and, oh yes, that's right. So, now, hold on to your papers and let's see how much more detailed the new technique is. Holy mother of papers. Do you see what I see here? Image GPT came out just a few months ago, and there is already this kind of progress. So, there we go. Just imagine what we will be able to do with these supercharged transformers just two more papers down the line. Wow! And that's where I almost fell off the chair when reading this paper. I hope you held on to yours. It truly feels like we are living in a science fiction world. What a time to be alive. What you see here is a report of this exact paper we have talked about, which was made by Weights & Biases. I put a link to it in the description. Make sure to have a look. I think it helps you understand this paper better. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that Weights & Biases is free for all individuals, academics, and open source projects. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
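Since attention came up earlier in this episode, here is a minimal NumPy sketch of the scaled dot-product attention at the heart of these transformer networks, with tiny made-up matrices. Real models stack many of these blocks with learned projections and multiple heads.

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                      # how much each query attends to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the keys
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(4, 8)), rng.normal(size=(6, 8)), rng.normal(size=(6, 8))
print(attention(Q, K, V).shape)                        # (4, 8): one mixed value per query
```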
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to read some minds. A few months ago, our first video appeared on brain-machine interfaces. It was about a paper from Neuralink, which promised a great deal. For instance, they proposed a brain-machine interface that could read this pig's thoughts as it was running on the treadmill. And what's more, they not only read its thoughts, but they could also predict what the pig's brain was about to do. So, this was about reading thoughts related to movement. And to be able to use these brain-machine interfaces to the fullest, they should be able to enable some sort of communication for people who have lost their ability to move or speak. So, what about writing or speaking? As impossible as it sounds, can we somehow restore that, or is that still science fiction? Many of you told me that you would like to hear more about this topic, so, due to popular request, here it is: a look beyond Neuralink's project to see what else is out there. This is a collaboration between Stanford University and a selection of other institutions, and it allows brain-to-text transcription, where all the test subject has to do is imagine writing the letters, and they magically appear on the screen. And now, start holding on to your papers and just look at how quickly it goes. Ninety characters per minute, with over 94% accuracy, which can be improved to over 99% accuracy with an additional auto-correct technique. That is absolutely amazing, a true miracle. Ninety characters per minute means that the test subject here, who has a paralyzed hand, can almost think about writing these letters continuously, and most of them are decoded and put on the screen in less than a second. Also, wait a second, 90 characters per minute, that is about 80% as fast as the average typing speed on a smartphone screen for an able-bodied person of this age group. Whoa! It is quite remarkable that even years after paralysis, the motor cortex still produces signals that can be read by a brain-computer interface with such typing speed and accuracy. It truly feels like we are living in a science fiction world. Of course, not even this technique is perfect; it has its own limitations. For instance, we can't edit or delete text, we have no access to capital letters, and the method has a calibration step that takes a long time, although it doesn't get significantly worse if we shorten this calibration time a bit. So, how does this work? First, the participant starts thinking of writing one letter at a time. Here you see the recorded neural activity; this is subject to decoding. You can see the decoded signals here. And rather than just giving this to a computer to distinguish between them as is, we project these into a 2D latent space where it is easy to find which letter corresponds to which region. Look, they form relatively tight clusters, therefore it is now easy to decide which of the squiggles corresponds to which letter. The decoding part is done using a recurrent neural network, which is a neural network endowed with memory that can deal with sequences of data. So here, in goes the brain activity, and out comes the decision that says which character these activities correspond to. Of course, our alphabet was not designed to be decoded with neural networks. So here is an almost science fiction-like question: how do we reformulate the alphabet to tailor it to maximize the efficiency of a neural network decoding our thoughts?
Or simpler, what would the alphabet look like if neural networks were in charge? Well, this paper has an answer to that too, so let's have a look. The squiggles indeed look like they came from another planet, so what do we gain from this? Well, look at the distance matrix for the English alphabet. The diagonal is supposed to be very blue, but what is not supposed to be blue at all are the regions that surround it. Look, the blue color here means that in the English alphabet the letters M and N can be relatively easily confused, same with the letters O and C and there are many more similarities. And now, look, here is the same distance matrix for the optimized alphabet. No dark blue inside outside the diagonal. Much easier to process and decode. If neural networks were in charge, this is what the alphabet would look like. Glorious. Also, the fact that we are talking about squiggles is not a trivial insight at all, traditional methods typically rely on movement in straight lines to select letters and buttons. The other key thought in this paper is that modern neural network-based methods can decode these thoughts of squiggles reliably. That is absolutely amazing. And wait a second, note that there is only one participant in the user study. Why just one participant? Why not calling a bunch of people to test this? It is because this method uses a microelectro-array and this requires surgery to insert and because of that, these studies are difficult to perform and are usually done at times when the participant has brain surgery anyways for other reasons. Having more people in the study is usually prohibitively expensive if at all possible for this kind of brain implant. And note that research is a process and these papers are stepping stones. And now we are able to help people write 90 characters every minute with a brain machine interface and I can only imagine how good these techniques will become two more papers down the line. And don't forget there are research works on non-invasive devices too. So what do you think? Let me know in the comments below. What a time to be alive! This episode has been supported by Lambda GPU Cloud. If you are looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000 and V100 instances and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Asia. Plus they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
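Coming back to the decoding step for a moment, here is a rough illustration of a recurrent classifier that maps a window of neural-activity features to one of the characters. All the shapes, channel counts and data below are made up; the real decoder is considerably more involved.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_channels, n_steps, n_chars = 192, 50, 31     # made-up electrode count, window length, character set

class CharDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(n_channels, 128, batch_first=True)   # memory over the time window
        self.head = nn.Linear(128, n_chars)                    # one score per character
    def forward(self, x):
        _, h = self.rnn(x)
        return self.head(h[-1])

model = CharDecoder()
features = torch.randn(8, n_steps, n_channels)   # a batch of made-up neural activity windows
labels = torch.randint(0, n_chars, (8,))
loss = nn.CrossEntropyLoss()(model(features), labels)
loss.backward()                                   # an optimizer step would follow here
print(loss.item())
```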
|
Dear Fellow Scholars, this is two minute tapers with Dr. Káloj Zsolnai-Fehér. The promise of virtual reality VR is indeed truly incredible. If one day it comes to fruition, doctors could be trained to perform surgery in a virtual environment, we could train better pilots with better flight simulators, expose astronauts to virtual zero gravity simulations, you name it. This previous work uses a learning based algorithm to teach a head-mounted camera to tell the orientation of our hands at all times. Okay, so what can we do with this? A great deal. For instance, we can type on a virtual keyboard or implement all kinds of virtual user interfaces that we can interact with. We can also organize imaginary boxes and of course we can't leave out the two minute papers favorite, going into a physics simulation and playing it with our own hands. But of course, not everything is perfect here, however. Look, hand-hand interactions don't work so well, so Fox who prefer virtual reality applications that include washing our hands should look elsewhere. And in this series we often say one more paper down the line and it will be significantly better. So now here's the moment of truth, let's see that one more paper down the line. Let's go in, guns blazing and give it examples with challenging hand-hand interactions, deformations, lots of self-contact and self-occlusion. Take a look at this footage. This seems like a nightmare for any hand reconstruction algorithm. Who the heck can solve this? And look, interestingly they also recorded the hand model with gloves as well. How curious! And now hold on to your papers because these are not gloves. No no no, what you see here is the reconstruction of the hand model by a new algorithm. Look, it can deal with all of these rapid hand motions and what's more, it also works on this challenging hand massage scene. Look at all those beautiful details. It not only fits like a glove here too, but I see creases, folds and deformations too. This reconstruction is truly out of this world. To be able to do this, the algorithm has to output triangle meshes that typically contain over a hundred thousand faces. Please remember this as we will talk about it later. And now let's see how it does all this magic because there's plenty of magic under the hood. We get five ingredients that are paramount to getting an output of this quality. Ingredient number one is the physics term. Without it we can't even dream of tracking self-occlusion and contact properly. Two, since there are plenty of deformations going on in the input footage, the deformation term accounts for that. It makes a huge difference in the reconstruction of the thumb here. And if you think, wow, that is horrific, then you'll need to hold onto your papers for the next one, which is three, the geometric consistency term. This one is not for the faint of the heart, you have been warned. Are you ready? Let's go. Yikes! A piece of advice, if you decide to implement this technique, make sure to include the geometric consistency term so no one has to see this footage ever again. Thank you. With the worst already behind us, let's proceed to ingredient number four, the photo consistency term. This ensures that fingernail tips don't end up sliding into the finger. And five, the collision term fixes problems like this to make sure that the fingers don't penetrate each other. 
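The recipe of adding up several weighted energy terms and minimizing the total is a common one, so here is a toy sketch of that structure. The quadratic stand-in terms below are made up purely to show the bookkeeping; they are not the actual physics, deformation, or collision energies from the paper.

```python
import numpy as np

pose = np.zeros(10)                       # toy stand-in for the hand mesh parameters

def e_data(p):      return np.sum((p - 1.0) ** 2)            # "match the observed images"
def e_deform(p):    return np.sum(np.diff(p) ** 2)           # "neighbouring parts deform smoothly"
def e_collision(p): return np.sum(np.maximum(0.0, -p) ** 2)  # "penalise interpenetration"

weights = {"data": 1.0, "deform": 0.5, "collision": 2.0}

def total_energy(p):
    return (weights["data"] * e_data(p)
            + weights["deform"] * e_deform(p)
            + weights["collision"] * e_collision(p))

for step in range(500):                   # simple gradient descent via finite differences
    grad = np.zeros_like(pose)
    for i in range(len(pose)):
        e = np.zeros_like(pose)
        e[i] = 1e-5
        grad[i] = (total_energy(pose + e) - total_energy(pose - e)) / 2e-5
    pose -= 0.05 * grad

print(total_energy(pose))                 # the combined objective has been driven down
```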
And this is an excellent paper, so in the evaluation section, these terms are also tested in isolation, and the authors tell us exactly how much each of these ingredients contributes to the solution. Now, these five ingredients are not cheap in terms of computation time, and remember, we also mentioned that many of these meshes have several hundred thousand faces. This means that this technique takes a very long time to compute all this. It is not real time, not even close; for instance, reconstructing the mesh for the hand massage scene takes more than ten minutes per frame. This means hours, or even days of computation, to accomplish this. Now the question naturally arises, is that a problem? No, not in the slightest. This is a zero-to-one paper, which means that it takes a problem that was previously impossible, and now it makes it possible. That is absolutely amazing. And as always, research is a process, and this is an important stepping stone in this process. I bet that two more good papers down the line, and we will be getting these gloves interactively. I am so happy about this solution, as it could finally give us new ways to interact with each other in virtual spaces, add more realism to digital characters, help us better understand human-human interactions, and it may also enable new applications in physical rehabilitation. And these reconstructions, indeed, fit these tasks like a glove. What a time to be alive! PerceptiLabs is a visual API for TensorFlow, carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables, and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers to easily install the free local version of their system today. Thanks to PerceptiLabs for their support, and for helping us make better videos for you. Thank you for watching, and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Let's talk about snow simulations. Being able to simulate snow on our computers is not new; it's been possible for a few years now. For instance, this legendary Disney paper from 2013 was capable of performing that. So, why do researchers write new papers on this topic? Well, because the 2013 paper had its limitations. Let's talk about them while looking at some really sweet footage from a new paper. Limitation number one for previous works is that these snow simulations were sinfully expensive. Most of them took not seconds, and sometimes not even minutes, but half an hour per frame. Yes, that means all-nighter simulations. They were also not as good at fracturing interactions, and simulating powder snow and avalanches was also out of reach. Until now. You see, simulating snow is quite challenging: it clumps up, deforms, breaks, and hardens under compression. And there is even my favorite, the phase change from fluid to snow. These are all really challenging to simulate properly, and in a moment, you will see that this new method is capable of even more beyond that. Let's start with friction. First, we turn on the snow machine, and then engage the wipers. That looks wonderful. And now, may I request seeing some tire marks? There you go. This looks really good, so how about taking a closer look at this phenomenon? VB here is the boundary friction coefficient, and it is a parameter that can be chosen freely by us. So let's see what that looks like. If we initialize this to a low value, we'll get very little friction. And if we crank this up, look, things get a great deal more sticky. A big clump of snow also breaks apart in a spectacular manner, also showcasing compression and fracturing, beyond the boundary friction effect we just looked at. Oh my, this is really good. Oh my, this is beautiful. Okay, now, this is a computer graphics paper. If you're a seasoned Fellow Scholar, you know what this means. It means that it is time to put some virtual bunnies into the oven. This is a great example of rule number one for watching physics simulations, which is that we discuss the physics part, and not the visuals. So why are these bunnies blue? Well, let's chuck them into the oven and find out. Aha, they are color-coded for temperature. Look, they start from minus 100 Celsius, that's the blue, and we see the colors change as they approach 0 degrees Celsius. At this point, they don't yet start melting, but they are already falling apart. So it was indeed a good design decision to show the temperatures, because it tells us exactly what is going on here. Without this, we would be expecting melting. Well, can we see that melting in action too? You bet. Now, hold on to your papers, and bring forth the soft ice cream machine. This machine can not only create an exquisite dessert for computer graphics researchers, but also showcases the individual contributions of this new technique, one by one. There is the melting. Yes. Add a little frosting, and there you go. Bon appétit. Now, as we feared, many of these larger-scale simulations require computing the physics for millions of particles. So how long does that take? When we need millions of particles, we typically have to wait a few minutes per frame, but if we have a smaller scene, we can get away with these computations in a few seconds per frame. Goodness. We went from hours per frame to seconds per frame in just one paper. Outstanding work. And also, wait a second.
If we are talking millions of particles, I wonder how much memory it takes to keep track of them. Let's see. Whoa. This is very appealing. I was expecting a few gigabytes, yet it only asks for a fraction of that, a couple hundred megabytes. So, with this hefty value proposition, it is no wonder that this paper has been accepted to the SIGGRAPH conference. This is the Olympic gold medal of computer graphics research, if you will. Huge congratulations to the authors of this paper. This was quite an experience. What a time to be alive. This episode has been supported by Lambda GPU Cloud. If you are looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
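One more aside on the boundary friction coefficient VB from earlier: here is a tiny made-up particle example of what such a parameter controls. When a particle touches the ground, we keep the part of its velocity that is parallel to the surface, but scale it down by the friction factor. The real solver handles this far more carefully inside a full snow model.

```python
import numpy as np

def apply_boundary_friction(vel, normal, v_b):
    # Damp the tangential part of the velocity by the boundary friction coefficient v_b.
    normal = normal / np.linalg.norm(normal)
    v_n = np.dot(vel, normal) * normal            # component pointing into the boundary
    v_t = vel - v_n                               # component sliding along the boundary
    return (1.0 - v_b) * v_t                      # stop penetration, damp the sliding

vel = np.array([2.0, -1.0])                       # particle sliding down onto the ground
ground_normal = np.array([0.0, 1.0])

print(apply_boundary_friction(vel, ground_normal, v_b=0.1))   # keeps most of the slide
print(apply_boundary_friction(vel, ground_normal, v_b=0.9))   # much stickier behaviour
```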
|
Dear Fellow Scholars, this is two-minute papers with Dr. Karajon Aifahir. Most people think that if we have a piece of camera footage that is a little choppy, then there is nothing we can do with it, we better throw it away. Is that true? No, not at all. Earlier, we discussed two potential techniques to remedy this common problem. The problem statement is simple, in goes a choppy video, something happens, and then out comes a smooth and creamy video. This process is often referred to as frame interpolation or frame in between. And of course, it's easier said than done. If it works well, it really looks like magic, much like in the science fiction movies. So what are the potential some things that we can use to make this happen? One optical flow. This is an originally handcrafted method that tries to predict the motion that takes place between these frames. This can kind of produce new information, and I use this in these videos on a regular basis, but the output footage also has to be carefully inspected for unwanted artifacts, which are a relatively common occurrence. Two, we can also try to give a bunch of training data to a neural network and teach it to perform this frame in between. And if we do, the results are magnificent. We can do so much with this. But if we can do this for video frames, here is a crazy idea. How about a similar kind of in-betweening for animating humanoids? That would really be something else, and it would save us so much time and work. Let's see what this new method can do in this area. The value proposition of this technique is as simple as it gets. To set up a bunch of keyframes, these are the transparent figures, and the neural network creates realistic motion that transitions from one stage to the next one. Look, it really seems to be able to do it all, it can perform twists and turns, wrist walks and runs, and you will see in a minute even dance moves. Hmm, this in-betweening for animating humanoid motion idea may not be so crazy after all. Once more, this could be super useful for artists working in the industry who can not only do all this, but they can also set up movement variations by moving the keyframes around spatially. Or, we can even set up temporal variations to create different timings for the movement. Excellent. Of course, it cannot do everything if we set up the intermediate stages in a way that uncommon motions would be required to fill in, we might end up with one of these failure cases. And all these results depend on how much training data we have with the kinds of motion we need to fill in. Let's have a look at a more detailed example. This smooth chap has been given lots of training data with dancing moves, and look. When we pull out these dance moves from his training data, he becomes a drunkard. So talking about training data, how much motion capture footage was given to this algorithm? Well, it used the Ubisoft La Forge animation dataset. This contains five subjects, 77 sequences, and about 4.5 hours of footage in total. Wow, that is not that much. For instance, it only has eight movement sequences for dancing. That is not much at all. And we've already seen that the model can dance. That is some serious data efficiency, especially given that it can even climb through obstacles. So much knowledge has been extracted from so little data. It truly feels like we are living in a science fiction world. What a time to be alive. So, when we write a paper like this, how do we compare the results to previous techniques? 
How can we decide which technique is better? Well, the level one solution is a user study. We call some folks in, show them the footage, ask which one they like best, the previous method, or this one. That would work, but of course, it is quite laborious, but fortunately, there is a level two solution. And this level two solution is called normalized power spectrum similarity, NPSS in short. This is a number that we can produce with the computer. No humans are required, and it measures how believable these motions are. And the key of NPSS is that it correlates with human judgment, or in other words, if this says that the technique is better, then it is likely that humans would also come to the same conclusion. So, let's see those results. Here are the previous methods. NPSS is subject to minimization, in other words, the lower, the better, and let's see the new method. Oh, yes, it indeed outpaces the competition. So there is no wonder that this incredible paper was accepted to the SIGRAPH Asia conference. What does that mean exactly? If research were the Olympics, a SIGRAPH where SIGRAPH Asia paper would be the gold medal. And this was Mr. Felix Harvey's first few papers. Huge congratulations. And as an additional goodie, it can create an animation of me when I lost my papers. And this is me when I found them. Do you have some more ideas on how we could put such an amazing technique to use? Let me know in the comments below. What you see here is a report of this exact paper we have talked about, which was made by weights and biases. I put a link to it in the description. Make sure to have a look. I think it helps you understand this paper better. If you work with learning algorithms on a regular basis, make sure to check out weights and biases. Their system is designed to help you organize your experiments and it is so good it could shave off weeks or even months of work from your projects and is completely free for all individuals, academics, and open source projects. This really is as good as it gets and it is hardly a surprise that they are now used by over 200 companies and research institutions. Make sure to visit them through wnb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to weights and biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
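As a final aside, for reference, the most naive way to fill in frames between two keyframes is a straight linear blend, which is exactly the kind of lifeless motion the learned in-betweening improves upon. Here is that baseline on made-up pose vectors.

```python
import numpy as np

def lerp_inbetween(key_a, key_b, n_frames):
    # Linear in-betweening: blend two keyframe pose vectors over n_frames steps.
    t = np.linspace(0.0, 1.0, n_frames)[:, None]
    return (1.0 - t) * key_a + t * key_b

pose_a = np.zeros(24)                              # made-up joint-angle vector, first keyframe
pose_b = np.random.default_rng(0).normal(size=24)  # made-up joint-angle vector, second keyframe

frames = lerp_inbetween(pose_a, pose_b, n_frames=30)
print(frames.shape)                                # (30, 24): thirty in-between poses, one per row
```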
|
Dear Fellow Scholars, this is Two Minute Papers, episode number 500, with Dr. Károly Zsolnai-Fehér. And on this glorious day, we are going to simulate the kinematics of yarn and cloth on our computers. We will transition into today's paper in a moment, but for context, here is a wonderful work to show you what we were able to do in 2012, and we will see how far we have come since. This previous work was about creating these highly detailed cloth geometries for digital characters. Here you see one of its coolest results, where it shows how the simulated forces pull the entire piece of garment together. We start out with dreaming up a piece of cloth geometry, and this simulator gradually transforms it into a real-world version of that by subjecting it to real physical forces. This is a step that we call yarn-level relaxation. A few years ago, when I worked at Disney Research, I attended the talk of the Academy Award-winning researcher Steve Marschner, who presented this paper. And when I saw these results, shockwaves went through my body. It was one of my truly formative hold-onto-your-papers moments that I never forget. Now note that to produce these results, one had to wait for hours and hours to compute all these interactions. So, this paper was published in 2012, and now nearly nine years have passed, so I wonder how far we have come since. Well, let's see together. Today, with this new technique, we can conjure up similar animations where the pieces of garments tighten. Beautiful. Now, let's look under the hood of the simulator and... Well, well, well. Do you see what I see here? Red dots. So, why did I get so excited about a couple of red dots? Let's find out together. These red dots solve a fundamental problem when simulating the movement of these yarns. The issue is that in the mathematical description of this problem, there is a stiffness term that does not behave well when two of these points slide too close to each other. Interestingly, our simulated material gets infinitely stiff at these points. This is incorrect behavior, and it makes the simulation unstable. Not good. So, what do we do to alleviate this? Now, we can use this new technique that detects these cases and addresses them by introducing these additional red nodes. These are used as a stand-in until things stabilize. Look, we wait until these two points slide off of each other, and now the distances are large enough so that the mathematical framework can regain its validity and compute the stiffness term correctly. And, look, the red dot disappears and the simulation can continue without breaking. So, if we go back to another piece of under-the-hood footage, we now understand why these red dots come and go. They come when two nodes get too close to each other, and they disappear as they pass each other, keeping the simulation intact. And, with this method, we can simulate this beautiful phenomenon when we throw a piece of garment on the sphere, and all kinds of stretching and sliding takes place. Marvelous! So, what else can this do? Oh boy, it can even simulate multiple cloth layers; look at the pocket and the stitching patterns here. Beautiful. We can also put a neck tag on this shirt, and start stretching and shearing it into oblivion; pay special attention to the difference in how the shirt and the neck tag react to the same forces. We can also stack three tablecloths on top of each other, and see how they would behave if we did not simulate friction. And now, the same footage with friction. Much more realistic.
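A rough way to picture the problem those red dots solve: some terms in the yarn model scale like one over the distance between neighboring contact points, so they blow up as that distance shrinks. The toy check below is a made-up illustration of the "insert a temporary helper node when two points get too close" idea, not the paper's actual formulation.

```python
def stiffness_term(segment_length, k=1.0):
    # Toy stiffness-like term that diverges as the segment length goes to zero.
    return k / segment_length

nodes = [0.0, 0.30, 0.3002, 0.70]          # positions of sliding contact nodes along the yarn
EPSILON = 1e-3

stabilized = []
for a, b in zip(nodes, nodes[1:]):
    if b - a < EPSILON:
        midpoint = 0.5 * (a + b)           # temporary "red node" keeps the maths well behaved
        print(f"inserting helper node at {midpoint:.4f} (segment was {b - a:.4f})")
    else:
        stabilized.append(stiffness_term(b - a))

print(stabilized)                          # only the well-separated segments contribute here
```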
And if we look under the hood, you see that the algorithm is doing a ton of work with these red nodes. Look, the table notes that they had to insert tens of thousands of these nodes to keep the simulation intact. So, how long do we have to wait for a simulation like this? The 2012 paper took several hours, so what about this one? Well, this says that we need a few seconds per time step, and typically several time steps correspond to one frame. So, where does this put us? Well, it puts us in the domain of not hours for every frame of animation here, but minutes and sometimes even seconds per frame. And not only that, but this simulator is also more robust, as it can deal with these unpleasant cases where these points get too close to each other. So, I think this was a great testament to the amazing rate of progress in computer graphics research. What a time to be alive. This episode has been supported by Weights & Biases. In this post, they show you how to use their tool to draw bounding boxes for object detection and, even more importantly, how to debug them. During my PhD studies, I trained a ton of neural networks which were used in our experiments. However, over time, there was just too much data in our repositories, and what I am looking for is not data, but insight. And that's exactly how Weights & Biases helps you, by organizing your experiments. It is used by more than 200 companies and research institutions, including OpenAI, Toyota Research, GitHub, and more. And get this, Weights & Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Approximately five months ago, we talked about a technique called Neural Radiance Fields, or NeRF in short, where the input is the location of the camera and an image of what the camera sees. We take a few of those, give them to a neural network to learn them, and synthesize new, previously unseen views of not just the materials in the scene, but the entire scene itself. In short, we take a few samples and the neural network learns what should be there between the samples. In comes non-continuous data, a bunch of photos, and out goes a continuous video where the AI fills in the data between these samples. With this we can change the view direction, but only that. This concept can also be used for other variables. For instance, this work is able to change the lighting, but only the lighting. By the way, this is from long ago, from around Two Minute Papers episode number 13, so our seasoned Fellow Scholars know that this was almost 500 episodes ago. Or, the third potential variable is time. With this AI-based physics simulator, we can advance the time, and the algorithm would try to guess how a piece of fluid would evolve over time. This was amazing, but as you might have guessed, we can advance the time, but only the time. And this was just a couple of examples from a slew of works that are capable of doing one, or at most two, of these. These are all amazing techniques, but they offer separate features. One can change the view, but nothing else; one the illumination, but nothing else; and one the time, but nothing else. With the advent of neural network-based learning algorithms, I wonder if it is possible to create an algorithm that does all three. Or is this just science fiction? Well, hold on to your papers, because with this new work that goes by the name X-Fields, we can indeed change the time and the view direction and the lighting separately. Or even better, do all three at the same time. Look at how we can play with the time back and forth and set the fluid levels as we desire, that is the time part, and we can also play with the other two parameters as well at the same time. But still, the results that we see here can range from absolutely amazing to trivial, depending on just one factor. And that factor is how much training data was available for the algorithm. Neural networks typically require loads of training data to learn a new concept. For instance, if we wish to teach a neural network what a cat is, we have to show it thousands and thousands of images of cats. So how much training data is needed for this? And now hold on to your papers and... Whoa! Look at these five dots here. Do you know what this means? It means that all the AI saw was five images, that is, five samples from the scene with different light positions, and it could fill in all the missing details with such accuracy that we can create this smooth and creamy transition. It almost feels like we have made at least a hundred photographs of the scene. And all this from five input photos. Absolutely amazing. Now, here is my other favorite example. I am a light transport simulation researcher by trade, so by definition, I love caustics. A caustic is a beautiful phenomenon in nature where curved surfaces reflect or refract light and concentrate it to a relatively small area. I hope that you are not surprised when I say that it is the favorite phenomenon of most light transport researchers. And just look at how beautifully it deals with it.
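The core idea of learning one continuous mapping from a handful of (time, view, light) coordinates to images can be sketched with a tiny coordinate-based network. Note that this toy regresses pixel colors directly from made-up coordinates and random data; the actual X-Fields paper learns warping fields between the captured images, which is a big part of why it works so well from only five photos.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Input: (x, y, time, view, light) coordinates. Output: an RGB color.
field = nn.Sequential(nn.Linear(5, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 3))

coords = torch.rand(1024, 5)               # made-up training samples from the captured photos
colors = torch.rand(1024, 3)

optimizer = torch.optim.Adam(field.parameters(), lr=1e-3)
for step in range(200):
    optimizer.zero_grad()
    loss = ((field(coords) - colors) ** 2).mean()
    loss.backward()
    optimizer.step()

# Query an in-between time/view/light combination that was never captured:
novel = field(torch.tensor([[0.5, 0.5, 0.25, 0.8, 0.3]]))
print(novel.detach())
```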
You can take any of these intermediate AI-generated images and sell them as real ones, and I doubt anyone would notice. So, it does three things that previous techniques could only do one by one, but really, how does its quality compare to these previous methods? Let's see how it does on thin geometry, which is a notoriously difficult case for these methods. Here is a previous one. Look, the thick part is reconstructed correctly. However, look at the missing top of the grass blade. Yep, that's gone. A different previous technique by the name Local Light Field Fusion not only missed the top as well, but also introduced halo-like artifacts to the scene. And as you can see with this footage, the new method solves all of these problems really well. And it is quite close to the true reference footage that we kept hidden from the AI. Perhaps the best part is that it also has an online demo that you can try right now, so make sure to click the link in the video description to have a look. Of course, not even this technique is perfect; there are cases where it might confuse the foreground with the background, and we are still not out of the water when it comes to thin geometry. Also, an extension that I would love to see is changing material properties. Here, you see some results from our earlier paper on neural rendering, where we can change the material properties of this test object and get a near-perfect photorealistic image of it in about 5 milliseconds per image. I would love to see it combined with a technique like this one, and while it looks super challenging, it is easily possible that we will have something like that within two years. The link to our neural rendering paper and its source code is also available in the video description. What a time to be alive! What you see here is a report of this exact paper we have talked about, which was made by Weights & Biases. I put a link to it in the description; make sure to have a look, I think it helps you understand this paper better. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that Weights & Biases is free for all individuals, academics and open source projects. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Thanks for watching and for your generous support and I'll see you next time!
|
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. In this paper, you will not only see an amazing technique for two-way coupled fluid-solid simulations, but you will also see some of the most creative demonstrations of this new method I've seen in a while. But first things first, what is this two-way coupling thing? Two-way coupling in a fluid simulation means being able to process contact. You see, the armadillos can collide with the fluids, but the fluids are also allowed to move the armadillos. And this new work can compute this kind of contact. And by this, I mean lots and lots of contact. But one of the most important lessons of this paper is that we don't necessarily need a scene this crazy to be dominated by two-way coupling. Have a look at this experiment with these adorable duckies, and below, the propellers are starting up. Please don't be one of those graphics papers. Okay, good. We dodged this one. So the propellers are not here to dismember things. They are here to spin up to 160 RPM. And since they are two-way coupled, they pump water from one tank to the other, raising the water levels, allowing the duckies to pass. An excellent demonstration of a proper algorithm that can compute two-way coupling really well. And simulating this scene is much, much more challenging than we might think. Why is that? Note that the speed of the propellers is quite high, which is a huge challenge for previous methods. If we wish to complete the simulation in a reasonable amount of time, a previous method simulates the interaction incorrectly and no ducks can pass. The new technique can simulate this correctly, and not only that, but it is also 4.5 times faster than the previous method. Also, check out this elegant demonstration of two-way coupling. We start slowly unscrewing this bolt, and nothing too crazy is going on here. However, look, we have tiny cutouts in the bolt, allowing the water to start gushing out. The pipe was made transparent so we can track the water levels slowly decreasing. And finally, when the bolt falls out, we get some more two-way coupling action with the water. Once more, such a beautiful demonstration of a difficult-to-simulate phenomenon. Loving it. A traditional technique cannot simulate this properly unless we add a lot of extra computation, at which point it is still unstable. Ouch! And even with more extra computation, we can finally do this, but hold onto your papers, because the newly proposed technique can do it about 10 times faster. It also supports contact-rich geometry. Look, we have a great deal going on here. You are seeing up to 38 million fluid particles interacting with these walls with lots of rich geometry, and there will be interactions with mud and elastic trees as well. This can really do them all. And did you notice that throughout this video, we saw a lot of delta Ts? What are those? Delta T is something that we call the time step size. The smaller this number is, the tinier the time steps with which we can advance the simulation when computing every interaction, and hence the more steps there are to compute. In simpler words, the time step size is an important factor in the computation time, and the smaller it is, the slower, but more accurate the simulation will be. This is why we needed to reduce the time steps by more than 30 times to get a stable simulation here with the previous method. 
And this paper proposes a technique that can get away with time steps that are typically from 10 times to 100 times larger than previous methods. And it is still stable. That is an incredible achievement. So what does that mean in a practical case? Well, hold on to your papers because this means that it is up to 58 times faster than previous methods. 58 times. Whoa! With a previous method, I would need to run something for nearly two months, and the new method would be able to compute the same within a day. Witchcraft, I'm telling you. What a time to be alive. Also, as usual, I couldn't resist creating a slow motion version of some of these videos, so if this is something that you wish to see, make sure to visit our Instagram page in the video description for more. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold on to your papers because Lambda GPU Cloud can cost less than half of AWS and Asia. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
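The role of the time step size described above is easy to reproduce on a toy problem. The sketch below is not the paper's solver; it is just a stiff damped spring integrated with explicit Euler, showing how a large delta t blows up while a small one stays stable at the price of many more steps, which is exactly the trade-off the new method relaxes.

```python
import numpy as np

def simulate(dt, t_end=2.0, k=500.0, c=20.0):
    """Explicit Euler on a stiff damped spring: x'' = -k*x - c*x'."""
    steps = int(t_end / dt)
    x, v = 1.0, 0.0
    for _ in range(steps):
        a = -k * x - c * v
        x += dt * v          # with explicit integration, too large a dt
        v += dt * a          # makes the oscillation grow instead of decay
    return steps, x

for dt in (0.1, 0.01, 0.001):
    steps, x_final = simulate(dt)
    print(f"dt={dt:>6}: {steps:>5} steps, final x = {x_final:.3e}")
# dt = 0.1  : only 20 steps, but the solution explodes (unstable).
# dt <= 0.01: the spring settles as it should, at the cost of 10-100x more
# steps -- which is why a solver that stays stable at 10-100x larger time
# steps translates directly into those large speedups.
```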
|
Dear Fellow Scholars, this is two minute papers with Dr. Károly Zsolnai-Fehér. You are in for a real treat today, because today, once again, we are not going to simulate just plain regular fluids. No, we are going to simulate ferrofluids. These are fluids that have magnetic properties and respond to an external magnetic field, and get this, they are even able to climb things. Look at this footage from a previous paper. Here is a legendary, real experiment where, with magnetism, we can make a ferrofluid climb up this steel helix. Look at that. And now, the simulation. Look at how closely it matches the real footage. Marvelous. Especially because it is hard to overstate how challenging it is to create an accurate simulation like this. And the paper gets even better. This footage could even be used as proper teaching material. On this axis, you can see how the fluid disturbances get more pronounced as a response to a stronger magnetic field. And in this direction, you see how the effect of surface tension smooths out these shapes. What a visualization. The information density here is just out of this world, while it is still so easy to read at a glance. And it is also absolutely beautiful. This paper was a true masterpiece. The first author of this work was Libo Huang, and it was advised by Professor Dominik Michels, who has a strong physics background. And here's the punchline, Libo Huang is a PhD student and this was his first paper. Let me say it again. This was Libo Huang's first paper and it is a masterpiece. Wow. And it gets better, because this new paper is called Surface-Only Ferrofluids and yes, it is from the same authors. So this paper is supposed to be better, but the previous technique set a really high bar. How the heck do you beat that? What more could we possibly ask for? Well, this new method showcases a surface-only formulation, and the key observation here is that for a class of ferrofluids, we don't have to compute how the magnetic forces act on the entirety of the 3D fluid domain, we only have to compute them on the surface of the model. So what does this give us? One of my favorite experiments. In this case, we squeeze the fluid between two glass planes and start cranking up the magnetic field strength perpendicular to these planes. Of course, we expect it to start flowing sideways, and it does, but not at all in the way we would expect. Wow. Look at how these beautiful fluid labyrinths start slowly forming. And we can simulate all this on our home computers today. We are truly living in a science fiction world. Now, if you find yourself missing the climbing experiment from the previous paper, don't despair, this can still do that too. Look, first we can control the movement of the fluid by turning on the upper magnet, then slowly turn it off while turning on the lower magnet to give rise to this beautiful climbing phenomenon. And that's not all. Fortunately, this work also abounds in amazing visualizations. For instance, this one shows how the ferrofluid changes if we crank up the strength of our magnets, and how changing the surface tension determines the distance between the spikes and the overall smoothness of the fluid. What a time to be alive. One of the limitations of this technique is that it does not deal with viscosity well, so if we are looking to create a crazy goo simulation like this one, but with ferrofluids, we will need something else for that. Perhaps that something will be the next paper down the line. 
PerceptiLabs is a visual API for TensorFlow carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables and gives you recommendations both during modeling and training and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. And that's it, perceptilabs.com slash papers to easily install the free local version of their system today. Our thanks to perceptilabs for their support and for helping us make better videos for you. Thank you for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir. In early 2019, a learning-based technique appeared that could perform common natural language processing operations, for instance, answering questions, completing text, reading comprehension, summarization, and more. This method was developed by scientists at OpenAI and they called it GPT2. The key idea for GPT2 was that all of these problems could be formulated as different variants of text completion problems where all we need to do is provide it an incomplete piece of text and it would try to finish it. Then in June 2020 came GPT3 that supercharged this idea and among many incredible examples it could generate website layouts from a written description. However, no one said that these neural networks can only deal with text information and, sure enough, a few months later, scientists at OpenAI thought that if we can complete text sentences, why not try to complete images too? They called this project image GPT and the problem statement was simple. We give it an incomplete image and we ask the AI to fill in the missing pixels. That could identify that the cat here likely holds a piece of paper and finish the picture accordingly and even understood that if we have a droplet here and we see just a portion of the ripples, then this means a splash must be filled in. And now, right in January 2021, just seven months after the release of GPT3, here is their new mind-blowing technique that explores the connection between text and images. But finishing images already kind of works, so what new thing can it do? In just a few moments you will see that the more appropriate question would be, what can it do? For now, well, it creates images from our written text captions and you will see in a moment how monumental of a challenge that is. The name of this technique is a mix of Salvador Dalí and Pixar's Wally. So please meet Dalí. And now let's see it through an example. For neural network-based learning methods, it is easy to recognize that this text says OpenAI and what a storefront is. Images of both of these exist in abundance. Understanding that is simple. However, generating a storefront that says OpenAI is quite a challenge. Is it really possible that it can do that? Well, let's try it. Look, it works. Wow! Now, of course, if you look here, you immediately see that it is by no means perfect, but let's marvel at the fact that we can get all kinds of 2D and 3D text. Look at the storefronts from different orientations and it can deal with all of these cases reasonably well. And of course, it is not limited to storefronts. We can request license plates, bags of chips, neon signs and more. It can really do all that. So what else? Well, get this. It can also kind of invent new things. So let's put our entrepreneurial hat on and try to invent something here. For instance, let's try to create a triangular clock. Or pentagonal. Or you know, just make it a hexagon. It really doesn't matter because we can ask for absolutely anything and get a bunch of prototypes in a matter of seconds. Now let's make it white and look. Now we have a happy, happy caroy. Why is that? It is because I am a light transport researcher by trade, so the first thing I look at when seeing these generated images is how physically plausible they are. For instance, look at this white clock here on the blue table. And it not only put it on the table, but it also made sure to generate appropriate glossary reflections that matches the color of the clock. 
It can do this too. Loving it. Apparently, it understands geometry, shapes and materials. I wonder what else does it understand? Well, get this. For instance, it even understands styles and rendering techniques. Being a graphics person, I am so happy to see that it learned the concept of low polygon count rendering, isometric views, clay objects, and we can even add an x-ray view to the owl. Kind of. And now, if all that wasn't enough, hold on to your papers because we can also commission artistic illustrations for free, and not only that, but even have fine-grained control over these artistic illustrations. I also learned that if manatees wore suits, they would wear them like this, and after a long and strenuous day walking their dogs, they can go for yet another round in pajamas. But it does not stop there. That can not only generate paintings of nearly anything, but we can even choose the artistic style and the time of day as well. The night images are a little on the nose, as most of them have the moon in the background, but I'll be more than happy to take these. And the best part is that you can try it yourself right now through the link in the video description. In general, not all results are perfect, but it's hard to even fathom all the things this will enable us to do in the near future when we can get our hands on these pre-trained models. And this may be the first technique where the results are not limited by the algorithm, but by our own imagination. Now this is a quote that I said about GPT-3, and notice that the exact same thing can be said about Dolly. Quote, the main point is that working with GPT-3 is a really peculiar process where we know that a vast body of knowledge lies within, but it only emerges if we can bring it out with properly written prompts. It almost feels like a new kind of programming that is open to everyone, even people without any programming or technical knowledge. If a computer is a bicycle for the mind, then GPT-3 is a fighter jet. Absolutely incredible. And I think this kind of programming is going to be more and more common in the future. Now, note that these are some amazing preliminary results, but the full paper is not available yet. So this was not two minutes, and it was not about a paper. Welcome to two-minute papers. Jokes aside, I cannot wait for the paper to appear and I'll be here to have a closer look whenever it happens. Make sure to subscribe and hit the bell icon to not miss it when the big day comes. Until then, let me know in the comments what crazy concoctions you came up with. PerceptiLabs is a visual API for TensorFlow, carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables and gives you recommendations both during modeling and training and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptiLabs.com slash papers to easily install the free local version of their system today. Our thanks to perceptiLabs for their support and for helping us make better videos for you. Watch again for your generous support and I'll see you next time.
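For intuition on the "text completion, extended to images" framing discussed above, here is a heavily simplified sketch in the spirit of an autoregressive text-plus-image-token model: caption tokens and image tokens share one sequence, and a small causal transformer is trained to predict the next token. All vocabulary sizes, sequence lengths, and the random training data are made-up placeholders, and the discrete image tokenizer that a real system needs is omitted entirely.

```python
import torch
import torch.nn as nn

# Hypothetical sizes: a real system uses a trained discrete VAE to turn an
# image into roughly a thousand tokens from a large codebook; we fake tiny
# numbers here so the sketch runs instantly.
TEXT_VOCAB, IMAGE_VOCAB = 1000, 512
TEXT_LEN, IMAGE_LEN = 16, 64
VOCAB = TEXT_VOCAB + IMAGE_VOCAB          # one shared token space
SEQ = TEXT_LEN + IMAGE_LEN

class TinyTextToImageGPT(nn.Module):
    def __init__(self, d=128, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, d)
        self.pos = nn.Parameter(torch.zeros(1, SEQ, d))
        block = nn.TransformerEncoderLayer(d, heads, batch_first=True)
        self.body = nn.TransformerEncoder(block, layers)
        self.head = nn.Linear(d, VOCAB)

    def forward(self, tokens):
        # Causal mask: each position may only look at earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        h = self.body(self.embed(tokens) + self.pos[:, :tokens.size(1)], mask=mask)
        return self.head(h)               # next-token logits at every position

model = TinyTextToImageGPT()

# Training pairs: caption tokens followed by the image tokens of that caption's
# picture; the loss is plain next-token prediction over the whole sequence.
batch = torch.randint(0, VOCAB, (8, SEQ))
logits = model(batch[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, VOCAB), batch[:, 1:].reshape(-1))

# Generation would feed only the caption tokens, sample image tokens one by
# one, and hand them to a separately trained decoder to get pixels back.
```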
|
Dear Fellow Scholars, this is two-minute papers with Dr. Karajona Ifeher. Whenever we take a photo, we capture a piece of reality from one viewpoint. Or if we have multiple cameras on our smartphone, a few viewpoints at most. In an earlier episode, we explored how to upgrade these to 3D photos where we could kind of look behind the person. I am saying kind of because what we see here is not reality. This is statistical data that is filled in by an algorithm to match its surroundings which we refer to as image in painting. So strictly speaking, it is likely information, but not necessarily true information. And also, we can recognize the synthetic parts of the image as they are significantly blurrier. So the question naturally arises in the mind of the curious scholar, how about actually looking behind the person? Is that somehow possible or is that still science fiction? Well, hold on to your papers because this technique shows us the images of the future by sticking a bunch of cameras onto a spherical shell. And when we capture a video, it will see something like this. And the goal is to untangle this mess, and we are not done yet. We also need to reconstruct the geometry of the scene as if the video was captured from many different viewpoints at the same time. Absolutely amazing. And yes, this means that we can change our viewpoint while the video is running. Since it is doing the reconstruction in layers, we know how far each object is in these scenes enabling us to rotate these sparks and flames and look at them in 3D. Yum! Now, I am a light transport researcher by trade, so I hope you can tell that I am very happy about these beautiful volumetric effects, but I would also love to know how it deals with reflective surfaces. Let's see together. Look at the reflections in the sand here, and I'll add a lot of camera movement. Wow, this thing works. It really works. And it does not break a sweat even if we try a more reflective surface or an even more reflective surface. This is as reflective as it gets I'm afraid, and we still get a consistent and crisp image in the mirror. Bravo! Alright, let's get a little more greedy. What about seeing through thin fences? That is quite a challenge. And look at the tailwax there. This is still a touch blurrier here and there, but overall, very impressive. So what do we do with a video like this? Well, we can use our mouse to look around within the photo in our web browser, you can try this yourself right now by clicking on the paper in the video description. Make sure to follow the instructions if you do. Or we can make the viewing experience even more immersive to the head mounted display, where, of course, the image will follow wherever we turn our head. Both of these truly feel like entering a photograph and getting a feel of the room therein. Loving it. Now, since there is a lot of information in these light field videos, it also needs a powerful internet connection to relay them. And when using h.265, a powerful video compression standard, we are talking in the order of hundreds of megabits. It is like streaming several videos in 4K resolution at the same time. Compression helps, however, we also have to make sure that we don't compress too much, so that compression artifacts don't eat the content behind thin geometry, or at least not too much. I bet this will be an interesting topic for a follow-up paper, so make sure to subscribe and hit the bell icon to not miss it when it appears. 
And for now, more practical light field photos and videos will be available that allow us to almost feel like we are really in the room with the subjects of the videos. What a time to be alive! This episode has been supported by Lambda GPU Cloud. If you are looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
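As a rough intuition for why a layered reconstruction with per-layer depth lets us move the viewpoint after capture, here is a tiny sketch that shifts RGBA layers by depth-dependent parallax and composites them back to front. This is not the paper's spherical multi-camera pipeline; the layers, depths, and camera offset below are invented toy values.

```python
import numpy as np

def render_from_offset(layers, camera_dx):
    """layers: list of (rgba HxWx4 float array, depth), sorted far to near.
    Nearer layers get a larger horizontal parallax shift, which is what lets
    us peek slightly around foreground objects."""
    H, W, _ = layers[0][0].shape
    out = np.zeros((H, W, 3))
    for rgba, depth in layers:                    # back-to-front compositing
        shift = int(round(camera_dx / depth))     # parallax ~ 1 / depth
        moved = np.roll(rgba, shift, axis=1)
        alpha = moved[..., 3:4]
        out = moved[..., :3] * alpha + out * (1.0 - alpha)
    return out

# Toy scene: a far background layer and a near, opaque red square.
H, W = 120, 160
bg = np.dstack([np.full((H, W), 0.2), np.full((H, W), 0.4),
                np.full((H, W), 0.8), np.ones((H, W))])
fg = np.zeros((H, W, 4))
fg[40:80, 60:100] = [0.9, 0.3, 0.3, 1.0]
frame = render_from_offset([(bg, 10.0), (fg, 2.0)], camera_dx=12.0)
```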
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to get a taste of how insanely quick progress is in machine learning research. In March of 2020, a paper appeared that goes by the name Neural Radiance Fields, Nerf in short. With this technique, we could take a bunch of input photos, get a neural network to learn them, and then synthesize new previously unseen views of not just the materials in the scene, but the entire scene itself. And here we are talking not only digital environments, but also real scenes as well. Just to make sure, once again, it can learn and reproduce entire real-world scenes from only a few views by using neural networks. However, of course, Nerf had its limitations. For instance, in many cases, it has trouble with scenes with variable lighting conditions and lots of occluders. And to my delight, only five months later, in August of 2020, a follow-up paper by the name Nerf in the wild, or Nerf W in short. Its specialty was tourist attractions that a lot of people take photos of, and we then have a collection of photos taken during a different time of the day, and of course with a lot of people around. And lots of people, of course, means lots of occluders. Nerf W improved the original algorithm to excel more in cases like this. And we are still not done yet, because get this only three months later, on 2020, November 25th, another follow-up paper appeared by the name deformable neural radiance fields, D-Nerf. The goal here is to take a selfie video and turn it into a portrait that we can rotate around freely. This is something that the authors call a nerfie. If we take the original Nerf technique to perform this, we see that it does not do well at all with moving things. And here is where the deformable part of the name comes into play. And now, hold onto your papers and marvel at the results of the new D-Nerf technique. A clean reconstruction. We indeed get a nice portrait that we can rotate around freely, and all of the previous Nerf artifacts are gone. It performs well, even on difficult cases with beards, all kinds of hairstyles, and more. And now, hold onto your papers, because glasses work too, and not only that, but it even computes the proper reflection and refraction off of the lens. And this is just the start of a deluge of new features. For instance, we can even zoom out and capture the whole body of the test subject. Furthermore, it is not limited to people. It also works on dogs too, although in this case, we will have to settle with a lower resolution output. It can pull off the iconic dolly zoom effect really well. And, amusingly, we can even perform a Nerfception, which is recording ourselves, as we record ourselves. I hope that now you have a good feel of the pace of progress in machine learning research, which is absolutely incredible. So much progress in just 9 months of research. My goodness. What a time to be alive! This episode has been supported by weights and biases. In this post, they show you how to use their tool to analyze chest x-rays and more. If you work with learning algorithms on a regular basis, make sure to check out weights and biases. Their system is designed to help you organize your experiments, and it is so good it could shave off weeks or even months of work from your projects, and is completely free for all individuals, academics, and open source projects. 
This really is as good as it gets, and it is hardly a surprise that they are now used by over 200 companies and research institutions. Make sure to visit them through wnb.com, slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to weights and biases for their longstanding support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
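A minimal sketch of the "deformable" part discussed above, under several assumptions: a per-frame latent code and a deformation network warp each sample point into a canonical, motion-free space, and a NeRF-style network is queried there. Positional encodings, the volume rendering step, and all real training details are omitted, and the layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

class DeformationField(nn.Module):
    """Maps a 3D point plus a per-frame latent code to an offset that warps
    the point into the canonical (motion-free) space."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 3))

    def forward(self, x, z):
        return x + self.net(torch.cat([x, z], dim=-1))

class CanonicalRadianceField(nn.Module):
    """NeRF-style MLP: point + view direction -> (rgb, density). A real
    implementation also uses positional encoding and volume rendering."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + 3, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 4))

    def forward(self, x, d):
        out = self.net(torch.cat([x, d], dim=-1))
        return torch.sigmoid(out[..., :3]), torch.relu(out[..., 3:])

deform, nerf = DeformationField(), CanonicalRadianceField()
frame_codes = nn.Embedding(100, 32)        # one learnable code per video frame

pts = torch.rand(4096, 3)                  # sample points along camera rays
dirs = torch.rand(4096, 3)
z = frame_codes(torch.full((4096,), 17, dtype=torch.long))   # all from frame 17
rgb, sigma = nerf(deform(pts, z), dirs)    # warp to canonical space, then query
```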
|
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. Here you see people that don't exist. They don't exist because these images were created with a neural network-based learning method by the name StyleGAN2, which can not only create eye-poppingly detailed looking images, but it can also fuse these people together, or generate cars, churches, horses, and of course cats. This is quite convincing, so is there anything else to do in human face generation? Are we done? Well, this footage from a new paper may give some of it away. If you have been watching this series for a while, you know that of course researchers always find a way to make these techniques better. We always say two more papers down the line, and it will be improved a great deal. And now we are one more paper down the line, so let's see together if there has been any improvement. This new technique is based on StyleGAN2 and is called StyleFlow, and it can take an input photo of a test subject and edit a number of meaningful parameters. Age, expression, lighting, pose, you name it. Now, note that there were other techniques that could pull this off, but the key improvement here is that one, we can perform many sequential changes, and two, it does all this while remaining faithful to the original photo. And hold on to your papers because three, it can also help Elon Musk to grow a majestic beard. And believe it or not, you will also see a run of this algorithm on me as well at the end of the video. First, let's adjust the lighting a little, and now it's time for that beard. Loving it. Now, a little aging, and one more beard please. It seems to me that this beard is different from the young-man beard, which is nice attention to detail. And note that we have strayed very far from the input image, but if you look at the intermediate states, you see that the essence of the test subject is indeed quite similar. This Elon is still Elon. Note that these results are not publicly available and were made specifically for us, so you can only see this here on two minute papers. That is quite an honor, so thank you very much to Rameen Abdal, the first author of the paper, for being so kind. Now, another key improvement in this work is that we can change one of these parameters with little effect on anything else. Have a look at this workflow and see how well we can perform these sequential edits. These are the source images and the labels showcase one variable change for each subsequent image, and you can see how amazingly surgical the changes are. Witchcraft! If it says that we changed the facial hair, that's the only change I see in the output. And just think about the fact that these starting images are real photos, but the outputs of the algorithm are made-up people that don't exist. And notice that the background is also mostly left untouched. Why does that matter? You will see in the next example. So far so good, but this method can do way more than this. Now let's edit not people, but cars. I love how well the color and pose variations work. Now, if you look here, you see that there is quite a bit of collateral damage, as not only the cars, but the background is also changing, opening up the door for a potentially more surgical follow-up paper. Make sure to subscribe and hit the bell icon to get notified when we cover this one more paper down the line. And now, here's what you have been waiting for, we'll get hands-on experience with this technique where I shall be the subject of the next experiment. 
First, the algorithm is looking for high-resolution frontal images, so for instance, this would not work at all, we would have to look for a different image. No matter, here's another one where I got locked up for reading too many papers. This could be used as an input for the algorithm. And look, this image looks a little different. Why is that? It is because StyleGAN2 runs an embedding operation on the photo before starting its work. This is an interesting detail that we only see if we start using the technique ourselves. And now, come on, give me that beard. Oh, loving it. What do you think? Whose beard is better? Elon's or mine? Let me know in the comments below. And now, please meet Old Man Károly, who will tell you that papers were way better back in his day, followed by the usual transformations. But also note that as a limitation, we had quite a bit of collateral damage for the background. This was quite an experience. Thank you so much. And we are not done yet, because this paper just keeps on giving. It can also perform attribute transfer. What this means is that we have two input photos. This will be the source image, and we can choose a bunch of parameters that we would like to extract from it. For instance, the lighting and pose can be extracted through attribute transfer, and it seems to even estimate the age of the target subject and change the source to match it better. Loving it. The source code for this technique is also available, and make sure to have a look at the paper. It is very thoroughly evaluated. The authors went the extra mile there and it really shows. And I hope that you now agree, even if there is a technique that appears quite mature, researchers always find a way to further improve it. And who knew it took a competent AI to get me to grow a beard. Totally worth it. What a time to be alive. This episode has been supported by weights and biases. In this post, they show you how to use sweeps to automate hyperparameter optimization and explore the space of possible models and find the best one. During my PhD studies, I trained a ton of neural networks which were used in our experiments. However, over time, there was just too much data in our repositories and what I am looking for is not data, but insight. And that's exactly how weights and biases helps you by organizing your experiments. It is used by more than 200 companies and research institutions, including OpenAI, Toyota Research, GitHub, and more. And get this, weights and biases is free for all individuals, academics, and open source projects. Make sure to visit them through wnb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to weights and biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
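StyleFlow itself models these edits with conditional normalizing flows over StyleGAN2's latent space, which is beyond a short snippet; as a plainly simpler stand-in, here is the classic linear latent-edit recipe that conveys the same intuition of moving one attribute at a time while keeping the identity. The "generator" and the attribute directions below are random placeholders so the sketch runs on its own, not a real StyleGAN2 or StyleFlow API.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "generator": in practice this would be a pretrained StyleGAN2 whose
# latent space the edits live in; here it is a random MLP just so the sketch runs.
generator = nn.Sequential(nn.Linear(512, 1024), nn.Tanh(),
                          nn.Linear(1024, 3 * 32 * 32), nn.Tanh())

# Hypothetical attribute directions. In the linear-edit recipe they come from
# fitting a classifier (beard vs. no beard, young vs. old, ...) on latent codes
# and taking its normal vector; StyleFlow replaces this with conditional flows.
beard_direction = torch.randn(1, 512); beard_direction /= beard_direction.norm()
age_direction   = torch.randn(1, 512); age_direction   /= age_direction.norm()

def edit(w, direction, strength):
    """Move the latent code along one attribute direction.
    Small, separate steps are what keep the edits 'surgical'."""
    return w + strength * direction

w = torch.randn(1, 512)                   # latent code of the embedded input face
w = edit(w, beard_direction, 1.5)         # grow a beard
w = edit(w, age_direction, 2.0)           # then age the subject
image = generator(w).view(3, 32, 32)      # decode the edited latent code
```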
|
Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir. This new paper fixes many common problems when it comes to two-way coupling in fluid simulations. And of course, the first question is, what is two-way coupling? It means that here the boxes are allowed to move the smoke and the added two-way coupling part means that now the smoke is also allowed to blow away the boxes. What's more, the vortices here on the right are even able to suspend the red box in the air for a few seconds. An excellent demonstration of a beautiful phenomenon. However, simulating this effect properly for water simulations and for gooey materials is a huge challenge, so let's see how traditional methods deal with them. Experiment number one. Water bunnies. Do you see what I am seeing here? Did you see the magic trick? Let's look again. Observe how much water we are starting with. A full bunny worth of water, and then by the end we have maybe a quarter of a bunny left. Oh yes, we have a substantial amount of numerical dissipation in the simulator that leads to volume loss. Can this be solved somehow? Well, let's see how this new work deals with this. Starting with one bunny. And ending it with one bunny. Nice. Just look at the difference of the volume of water left with a new method compared to the previous one. Night and day difference. And this was not even the worst volume loss I've seen. Make sure to hold on to your papers and check out this one. Experiment number two. Gooey dragons and balls. When using a traditional technique, Whoa, this guy is gone. And when we try a different method, it does this. My goodness. So let's see if the new method can deal with this case. Oh yes, yes it can. And now onwards to experiment number three. If you think that research is about throwing things at the wall and seeing what sticks in the case of this scene, you are not wrong. So what should happen here given these materials? Well, the bunny should stick to the goo and not fall too quickly. Hmm, none of which happens here. The previous method does not simulate viscosity properly and hence this artificial melting phenomenon emerges. I wonder if the new method can do this too. And yes, they stick together and the goo correctly slows down the fall of the bunny. So how does this magic work? Normally in these simulations we have to compute pressure, viscosity and frictional contact separately, which are three different tasks. The technique described in the paper is called monolith because it has a monolithic pressure viscosity contact solver. Yes, this means that it does all three of these tasks in one go, which is mathematically a tiny bit more involved, but it gives us a proper simulator where water and goo can interact with solids. No volume loss, no artificial melting, no crazy jumpy behavior. And here comes the punchline. I was thinking that all right, a more accurate simulator, that is always welcome, but what is the price of this accuracy? How much longer do I have to wait? If you have been holding on to your papers, now squeeze that paper because this technique is not slower, but up to 10 times faster than previous methods, and that's where I fell off the chair when reading this paper. And with this, I hope that we will be able to marvel at even more delightful two way coupled simulations in the near future. What a time to be alive. This episode has been supported by weights and biases. In this post, they show you how you can get an email or Slack notification when your model crashes. 
With this, you can check on your model performance on any device. Heavenly. Weight and biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that weights and biases is free for all individuals, academics, and open source projects. It really is as good as it gets. Make sure to visit them through wnb.com slash papers, or just click the link in the video description, and you can get the free demo today. Our thanks to weights and biases for their longstanding support, and for helping us make better videos for you. Thanks for watching, and for your generous support, and I'll see you next time.
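To see why a monolithic solve can beat handling pressure, viscosity, and contact one after another, here is a toy numerical illustration, not the paper's solver: a strongly coupled two-unknown linear system solved jointly in one go versus by alternating single-variable solves, which only creep toward the coupled answer.

```python
import numpy as np

# A strongly coupled 2x2 system standing in for pressure/viscosity/contact
# coupling: A @ [p, v] = b. Purely illustrative numbers.
A = np.array([[4.0, 3.5],
              [3.5, 4.0]])
b = np.array([1.0, 2.0])

# Monolithic: treat all unknowns as one system and solve it in one go.
monolithic = np.linalg.solve(A, b)

# Splitting: repeatedly solve for p with v frozen, then v with p frozen
# (Gauss-Seidel style). With weak coupling this is fine; with strong coupling
# it needs many sweeps to approach the jointly solved answer.
p, v = 0.0, 0.0
for it in range(50):
    p = (b[0] - A[0, 1] * v) / A[0, 0]
    v = (b[1] - A[1, 0] * p) / A[1, 1]
    residual = np.linalg.norm(A @ np.array([p, v]) - b)
    if it in (0, 9, 49):
        print(f"split sweep {it + 1:2d}: residual {residual:.2e}")

print("monolithic:", monolithic, " split:", (p, v))
```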
|
Dear Fellow Scholars, this is two-minute papers with Dr. Karaj Zonai-Fehir. What you see here is a bunch of vector images. Vector images are not like most images that you see on the internet. Those are RESTR images. Those are like photos and are made of pixels. While vector images are not made of pixels, they are made of shapes. These vector images have lots of advantages. They have really small file sizes. Can be zoomed into as much as we desire and things don't get pixelated. And hence, vector images are really well suited for logos, user interface icons, and more. Now, if we wish to, we can convert vector images into RESTR images, so the shapes will become pixels. This is easy, but here is the problem. If we do it once, there is no going back, or at least not easily. This method promises to make this conversion a two-way street, so we can take a RESTR image, a photo, if you will, and work with it, as if it were a vector image. Now, what does that mean? Oh, boy, a lot of goodies. For instance, we can perform sculpting, or in other words, manipulating shapes without touching any pixels. We can work with the shapes here instead. And much easier, or my favorite, perform painterly rendering. Now, what you see here is not the new algorithm performing this. This is a genetic algorithm I wrote a few years ago that takes a target image, which is the Mona Lisa here, takes a bunch of randomly colored triangles, and starts reorganizing them to get as close to the target image as possible. The source code and the video explaining how it works is available in the video description. And now, let's see how this new method performs on a similar task. It can start with a large number of different shapes, and just look at how beautifully these shapes evolve, and start converging to the target image. Loving it. But that's not all. It also has a nice solution to an old, but challenging problem in computer graphics that is referred to as seam carving. If you ask me, I like to call it image squishing. Why? Well, look here. This gives us an easy way of intelligently squishing an image into different aspect ratios. So good. So, can we measure how well it does what it does? How does it compare to Adobe's state-of-the-art method when vectorizing a photo? Well, it can not only do more, but it also does it better. The new method is significantly closer to the target image here, no question about it. And now comes the best part. It not only provides higher quality results than the previous methods, but it only takes approximately a second to perform all this. Wow. So, there you go. Finally, with this technique, we can edit pixels as if they weren't pixels at all. It feels like we are living in a science fiction world. What a time to be alive. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB, RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdaleps.com, slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is two minute papers with Dr. Károly Zsolnai-Fehér. We hear many opinions every day on how tax policy should change, what we should do, and how it would affect us. But of course, whenever we change something, we have to be mindful of its ramifications. Experimenting with tax policies willy-nilly is of course ill-advised in the real world, but we are researchers, so we can create our own little virtual worlds, populate them with virtual character A.I.s, and we can experiment all we want here. Just imagine a future where a politician comes and says, I will lift up the middle class by creating policy X. Well, let's simulate that policy X in a virtual world and see if it actually works as they promised. That would be glorious, but of course, I know, I know, wishful thinking, right? But maybe with this paper, not so much. Look, these workers are reinforcement learning agents, which means that they observe their environment, inquire what the tax rates and other parameters are, and decide how to proceed. They start working, they pay their taxes, and over time, they learn to maximize their own well-being given a tax system. This is the inner loop of the simulation. Now comes the even cooler part, the outer loop. This means that we have not only simulated worker A.I.s, but simulated policy maker A.I.s too, and they look at inequality, wealth distribution, and other market dynamics, and adjust the tax policy to maximize something that we find important. And herein lies the key to the experiment. Let's start with the goal of the simulation. We seek a tax policy that maximizes equality and productivity at the same time. This is, of course, immensely difficult. Every decision comes with its own trade-offs. We will talk about the results in a moment, but first, let's marvel at the fact that it simulates agents with higher and lower skills, and not only that, but with these, specialization starts appearing. For instance, the lower-skill agents started gathering and selling materials, while the higher-skill agents started buying up the materials to build houses. The goal of the policy maker A.I. is to maximize the equality and productivity of all of these agents. As a comparison, it also simulated the 2018 US Federal Tax Rate, an analytical tax formula where the marginal tax rate decreases with income, a free market model, and also proposed its own tax policies. So, how well did it do? Let's have a look at the results together. The free market model excels in maximizing productivity, which sounds great, until we find out that it does this at the cost of equality. Look, the top agent owns almost everything, leaving nearly nothing for everyone else. The US Federal Tax and analytical models strike a better balance between the two, but neither seems optimal. So, where is the A.I. economic model on this curve? Well, hold on to your papers because it is here. It improved the trade-off between equality and productivity by 16% by proposing a system that is harder to game, that gives a bigger piece of the pie to the middle class and subsidizes lower-skill A.I. workers. And do not forget that the key here is that it not only proposes things, but it can prove that these policies serve everyone, at least within the constraints of this simulation. Now, that's what I call a great paper. And there is so much more useful knowledge in this paper, I really urge you to have a look, it is a fantastic read. And needless to say, I'd love to see more research in this area. 
For instance, I'd love to know what happens if we start optimizing for sustainability as well. These objectives can be specified in this little virtual world, and we can experiment with what happens if we are guided by these requirements. And now, onwards to more transparent tax policies. What a time to be alive! PerceptiLabs is a visual API for TensorFlow carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers to easily install the free local version of their system today. Our thanks to PerceptiLabs for their support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
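Here is a drastically simplified sketch of the two-level idea described above: an inner loop where heterogeneous "workers" choose how much to work under a given tax rate, and an outer loop where a "planner" searches for the rate that maximizes equality times productivity, the same style of objective the paper optimizes. The agents here are tiny utility maximizers rather than reinforcement learners, and every number is made up.

```python
import numpy as np

rng = np.random.default_rng(0)
skills = rng.uniform(0.5, 3.0, size=100)        # heterogeneous worker skills

def inner_loop(tax_rate):
    """Each worker picks the labor level that maximizes after-tax income minus
    effort cost (a stand-in for the learned worker policies)."""
    labor_grid = np.linspace(0.0, 1.0, 101)
    pre_tax = []
    for s in skills:
        utility = (1 - tax_rate) * s * labor_grid - 0.5 * labor_grid ** 2
        pre_tax.append(s * labor_grid[np.argmax(utility)])
    pre_tax = np.array(pre_tax)
    rebate = tax_rate * pre_tax.sum() / len(skills)   # lump-sum redistribution
    return (1 - tax_rate) * pre_tax + rebate

def equality(incomes):
    """1 minus the Gini coefficient, so 1.0 means perfectly equal incomes."""
    mean_abs_diff = np.abs(incomes[:, None] - incomes[None, :]).mean()
    return 1.0 - mean_abs_diff / (2.0 * incomes.mean())

def planner_objective(tax_rate):
    incomes = inner_loop(tax_rate)
    productivity = incomes.sum()
    return equality(incomes) * productivity     # equality-times-productivity tradeoff

# Outer loop: the "planner" searches over flat tax rates.
rates = np.linspace(0.0, 0.9, 10)
best = max(rates, key=planner_objective)
print(f"best flat tax rate in this toy world: {best:.2f}")
```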
|
Dear Fellow Scholars, this is two minute papers with Dr. Karajjola Efehir. This is a standard color photo made with a smartphone. Hence, it contains only a two-dere presentation of the world, and when we look at it, our brain is able to reconstruct the 3D information from it. And I wonder, would it be possible for an AI to do the same and go all the way and create a 3D version of this photo that we can rotate around? Well, this new learning based method promises exactly that, and if that is at all possible, even more. These are big words, so let's have a look if it can indeed live up to its promise. So, first, we take a photograph and we'll find out together in a moment what kind of phone is needed for this. Probably an amazing one, right? For now, this will be the input, and now, let's see the 3D photo as an output. Let's rotate this around, and wow, this is amazing. And you know what is even more amazing, since pretty much every smartphone is equipped with a gyroscope, these photos can be rotated around in harmony with the rotation of our phones, and wait a second, is this some sort of misunderstanding, or do I see correctly that we can even look behind the human if we wanted to? That content was not even part of the original photo. How does this work? More on that in a moment. Also, just imagine putting on a pair of VR glasses and looking at a plane to the photo and get an experience as if we were really there. It truly feels like we are living in a science fiction world. If we grab our trusty smartphone and use these images, we can create a timeline full of these 3D photos and marvel at how beautifully we can scroll such a timeline here. And now we have piled up quite a few questions here. How is this wizardry possible? What kind of phone do we need for this? Do we need a depth sensor? Maybe even LIDAR. Let's look under the hood and find out together. This is the input. One colored photograph that is expected, and let's continue. Goodness, now this is unexpected. The algorithm creates a depth map by itself. This depth map tells the algorithm how far different parts of the image are from the camera. Just look at how crisp the outlines are. My goodness. So good. Then with this depth information, it now has an understanding of what is where in this image and creates these layers. Which is, unfortunately, not much help. As you remember, we don't have any information on what is behind the person. No matter because we can use a technique that implements image in painting to fill in these regions with sensible data. And now, with this, we can start exploring these 3D photos. So if it created this depth map from the color information, this means that we don't even need a depth sensor for this. Just a simple color photograph. But, wait a minute, this means that we can plug in any photo from any phone or camera that we or someone else talk at any time. And I mean at any time, right? Just imagine taking a black and white photo of a historic event, colorizing it with the previous learning based method and passing this color image to this new method. And then this happens. My goodness. So all this looks and sounds great. But how long do we have to wait for such a 3D photo to be generated? Does my phone battery get completely drained by the time all this computation is done? What is your guess? Please stop the video and leave a comment with your guess. I'll wait. Alright, so is this a battery killer? Let's see. The depth estimation step takes, whoa! 
A quarter of a second, inpainting, half a second, and after a little housekeeping, we find out that this is not a battery killer at all, because the whole process is done in approximately one second. Holy mother of papers. I am very excited to see this technique out there in the wild as soon as possible. What you see here is a report of this exact paper we have talked about, which was made by Weights and Biases. I put a link to it in the description. Make sure to have a look. I think it helps you understand this paper better. If you work with learning algorithms on a regular basis, make sure to check out Weights and Biases. Their system is designed to help you organize your experiments, and it is so good it could shave off weeks or even months of work from your projects, and is completely free for all individuals, academics, and open source projects. This really is as good as it gets. And it is hardly a surprise that they are now used by over 200 companies and research institutions. Make sure to visit them through wnb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights and Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
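A bare-bones sketch of the core trick just described: given a color image and a depth map (assumed to come from a monocular depth estimator; random placeholders are used here), shift each pixel by depth-dependent parallax to fake a new viewpoint, then plug the disoccluded holes with a crude fill standing in for the learned inpainting. Occlusion ordering and all of the real method's quality tricks are ignored.

```python
import numpy as np

# Assumed inputs: an RGB photo and a per-pixel depth map; both are placeholders.
H, W = 240, 320
rgb = np.random.rand(H, W, 3)
depth = np.clip(np.random.rand(H, W) * 4.0 + 1.0, 1.0, 5.0)

def novel_view(rgb, depth, baseline=6.0):
    """Forward-warp each pixel horizontally by its parallax (~ 1 / depth).
    Proper occlusion handling is skipped to keep the sketch short."""
    out = np.zeros_like(rgb)
    filled = np.zeros((H, W), dtype=bool)
    disparity = (baseline / depth).astype(int)
    for y in range(H):
        for x in range(W):
            nx = x + disparity[y, x]
            if 0 <= nx < W:
                out[y, nx] = rgb[y, x]
                filled[y, nx] = True
    # Crude hole filling: copy the nearest filled pixel from the left --
    # a stand-in for the learned inpainting that fills disocclusions.
    for y in range(H):
        for x in range(1, W):
            if not filled[y, x] and filled[y, x - 1]:
                out[y, x] = out[y, x - 1]
                filled[y, x] = True
    return out

frame = novel_view(rgb, depth)
```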
|
Dear Fellow Scholars, this is 2 Minute Papers with Dr. Károly Zsolnai-Fehér. I hope you like wiggles and jiggles, because today we are going to see a lot of them. You see, this technique promises to imbue a rigged animation with elastoplastic secondary effects. Now, if you tell this to a computer graphics researcher, they will be extremely happy to hear that this is finally possible, but what does this really mean? This beautiful fish animation is one of the finest demonstrations of the new method. So, what happened here? Well, the first part means that we modeled a piece of 3D geometry and we wish to make it move, but in order to make this movement believable, we have to specify where the bones and joints are located within the model. This process is called rigging, and this model will be the input for it. We can make it move with traditional methods, well, kind of. You see that the bones and joints are working, but the model is still solid. For instance, look, the trunk and ears are both completely solid. This is not what we would expect to see from this kind of motion. So, can we do better than this? Well, hold on to your papers and let's see how this new technique enhances these animations. Oh yes, floppy ears. Also, the trunk is dangling everywhere. What a lovely animation. Prepare for a lot more wiggles and jiggles. Now, I love how we can set up the material properties for this method. Of course, it cannot just make decisions by itself, because it would compete with the artist's vision. The goal is always to enhance these animations in a way that gives artists more control over what happens. So, what about more elaborate models? Do they work too? Let's have a look and find out together. This is the traditional animation method. No deformation on the belly. Nostrils are not moving too much. So now, let's see the new method. Look at that. The face, ears, nose and mouth now show elastic movement. So cool. Even the belly is deforming as the model is running about. So, we already see that it can kind of deal with the forces that are present in the simulation. Let's give this aspect a closer look. This is the traditional method, and we are moving up and down. Up and down. All right, but look at the vectors here. There is an external force field, or in simpler words, the wind is blowing. And unfortunately, not much is really happening to this model. But when we plug in this new technique, look, it finally responds to these external forces. And yes, as a reward, we get more wiggles. So, what else can this do? A lot more. For instance, it does not require this particular kind of rig with the bones and joints. This hedgehog was instead rigged with two handles, which is much simpler. But unfortunately, when we start to move it with traditional techniques, well, the leg is kind of pinned to the ground while the remainder of the model moves. So, how did the new technique deal with this? Oh, yes, the animation is much more realistic and things are dangling around in a much more lively manner. The fact that it works on many different kinds of rigged models out there in the world bolsters the usability of this technique a great deal. But we are not done yet. No, no, if you have been holding onto your papers so far, now squeeze that paper, because we can take a scene, drop in a bunch of objects and expect a realistic output. But wait a second. If you have been watching the series for a while, you know for a fact that for this, we need to run an elaborate physics simulator. 
For instance, just look at this muscle simulation from earlier. And here's the key. This animation took over an hour for every second of video footage. That you see here. The new method does not need to compute a full-blown physics simulation to add this kind of elastic behavior and hence in many cases it runs in real time. And it works for all kinds of rigs out there in the world and we even have artistic control over the output. We don't need elaborate models and many hours of rigging and simulation to be able to create a beautiful animation anymore. What a time to be alive. This episode has been supported by weights and biases. In this post they show you how to debug and compare models by tracking predictions, hyper parameters, GPU usage and more. During my PhD studies I trained a ton of neural networks which were used in our experiments. However, over time there was just too much data in our repositories and what I am looking for is not data but insight. And that's exactly how weights and biases helps you by organizing your experiments. It is used by more than 200 companies and research institutions including OpenAI, Toyota Research, GitHub and more. And get this, weight and biases is free for all individuals, academics and open source projects. Make sure to visit them through wnb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to weights and biases for their long standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
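The cheapest way to get a feel for secondary motion without a full physics simulation is the classic "jiggle" trick sketched below: each animated point chases its rigged target through a damped spring, so it lags, overshoots, and settles. This is not the paper's method, which adds a proper elastodynamic layer on top of the rig with artistic control; it is just a minimal stand-in for the idea.

```python
import numpy as np

def add_jiggle(keyframed, stiffness=120.0, damping=8.0, dt=1.0 / 60.0):
    """keyframed: array of shape (num_frames, num_points, 3) rigged positions.
    Returns positions that chase the rig through a damped spring, which adds
    lag, overshoot and settle -- i.e. cheap secondary motion."""
    pos = keyframed[0].copy()
    vel = np.zeros_like(pos)
    out = [pos.copy()]
    for target in keyframed[1:]:
        accel = stiffness * (target - pos) - damping * vel
        vel += dt * accel          # semi-implicit Euler keeps this stable
        pos += dt * vel
        out.append(pos.copy())
    return np.stack(out)

# Toy rig: a single point snapped between two poses; the jiggled version
# overshoots and oscillates around each pose instead of stopping dead.
frames = np.zeros((120, 1, 3))
frames[30:, 0, 0] = 1.0
jiggled = add_jiggle(frames)
```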
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Feast your eyes upon this simulation from our previous episode that showcases a high viscosity material that is honey. The fact that honey is viscous means that it is a material that is highly resistant against deformation. In simpler words, if we can simulate viscosity well, we can engage in the favorite pastimes of the computer graphics researcher, or in other words, take some of these letters, throw them around, watch them slowly lose their previous shapes, and then, of course, destroy them in a spectacular manner. I love making simulations like this, and when I do, I want to see a lot of detail, which unfortunately means that I also have to run these simulations for a long time. So, when I saw that this new technique promises to compute these about four times faster, it really grabbed my attention. Here is a visual demonstration of such a speed difference. Hmm, so, four times, you say, that sounds fantastic. But, what's the catch here? This kind of speed up usually comes with cutting corners. So, let's test the might of this new method through three examples of increasing difficulty. Experiment number one. Here is the regular simulation, and here is the new technique. So, let's see the quality differences. One more time. Well, I don't see any. Do you? Four times faster with no degradation in quality. Hmm, so far, so good. So now, let's give it a harder example. Experiment number two. Varying viscousities and temperatures. In other words, let's give this bunny a hot shower. That was beautiful, and our question is, again, how close is this to the slow reference simulation? Wow, this is really close. I have to look carefully to even have a fighting chance in finding a difference. Checkmark. And I also wonder, can it deal with extremely detailed simulations? And now, hold on to your papers for experiment number three, dropping a viscous bunny on thin wires, and just look at the remnants of the poor bunny stuck in there. Loving it. Now, for almost every episode of this paper, I get comments saying, Karoi, this is all great, but when do I get to use this? When does this make it to the real world? These questions are completely justified, and the answer is, right about now. You can use this right now. This paper was published in 2019, and now, it appears to be already part of Houdini, one of the industry standard programs for visual effects and physics simulations. Tech transfer in just one year. Wow! Huge congratulations to Ryan Godade and his colleagues for this incredible paper, and huge respect to the folks at Houdini, who keep outdoing themselves with these amazing updates. This episode has been supported by weights and biases. In this post, they show you how to monitor and optimize your GPU consumption during model training in real time with one line of code. Weight and biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs, such as OpenAI, Toyota Research, GitHub, and more. And the best part is that weights and biases is free for all individuals, academics, and open source projects. It really is as good as it gets. Make sure to visit them through wnb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to weights and biases for their longstanding support, and for helping us make better videos for you. 
Thanks for watching, and for your generous support, and I'll see you next time.
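A small note for the technically curious Fellow Scholars before we move on: viscosity in a grid-based fluid solver essentially diffuses velocity between neighboring cells, which is why honey resists changing its shape. Here is a minimal, textbook-style sketch of that single idea; the grid size and parameters are made up for illustration, and this is emphatically not the paper's four-times-faster solver.

```python
import numpy as np

def viscosity_step(u, viscosity, dt, dx=1.0):
    """One explicit viscosity (diffusion) step on a 2D velocity component:
    u_new = u + dt * nu * Laplacian(u). Viscous materials like honey resist
    deformation because momentum diffuses between neighboring grid cells."""
    lap = (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0) +
           np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 4.0 * u) / dx**2
    return u + dt * viscosity * lap

# A sharp blob of velocity slowly smears out, which is the "slowly losing its
# shape" behavior we see with the honey letters. Parameters are made up, and
# dt * viscosity / dx^2 is kept below 0.25 so the explicit step stays stable.
u = np.zeros((64, 64))
u[28:36, 28:36] = 1.0
for _ in range(200):
    u = viscosity_step(u, viscosity=0.2, dt=1.0)
print("peak velocity after diffusion:", float(u.max()))
```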
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Everybody loves style transfer. This is a task typically done with neural networks where we have two images, one for content and one for style, and the output is the content image reimagined with this new style. The cool thing is that the style can be a different photo, a famous painting, or even wooden patterns. Feast your eyes on these majestic images of this cat reimagined with wooden parquetry by these previous methods. And now look at the result of this new technique, which looks way nicer. Everything is in order here except one thing. And now hold on to your papers, because this is not style transfer. Not at all. This is not a synthetic photo made by a neural network. This is a reproduction of this cat image made by cutting wood slabs into tiny pieces and putting them together carefully. This is computational parquetry. And here the key requirement is that if we look from afar, it looks like the target image, but if we zoom in, it becomes abundantly clear that the puzzle pieces here are indeed made of real wood. And that is an excellent intuition for this work. It is kind of like image stylization, but done in the real world. Now, that is extremely challenging. Why is that? Well, first, there are lots of different kinds of wood types. Second, if this piece were not a physical object but an image, this job would not be that hard, because we could add to it, clone it, and do all kinds of pixel magic to it. However, these are real physical pieces of wood, so we can do exactly none of that. The only thing we can do is take away from it, and we have limitations even on that, because we have to design the cuts in a way that a CNC device is able to carry them out. And third, you will see that initially nothing seems to work well. However, this technique does all of this with flying colors, so I wonder, how does it really work? First, we can take a photo of the wood panels that we have at our disposal, decide how and where to cut, give these instructions to the CNC machine to perform the cutting, and then we have to assemble the pieces in a way that resembles the target image. Well, that is still easier said than done. For instance, imagine that we have this target image and we have these wood panels. This doesn't look anything like that, so how could we possibly approximate it? If we try to match the colors of the two, we get something that lands somewhere in the middle, and these colors don't resemble either of the original inputs. Not good. Instead, the authors opted to transform both of them to grayscale and match not the colors, but the intensities of the colors instead. This seems a little more usable, until we realize that we still don't know what pieces to use and where. Look, here on the left you see how the image is being reproduced with the wood pieces, but we have to mind the fact that as soon as we cut out one piece of wood, it is not available anymore, so it has to be subtracted from our wood panel repository here. As our resources are constrained, depending on what order we put the pieces together, we may get a completely different result. But, look, there is still a problem. The left part of the suit gets a lot of detail, while the right part, not so much. I cannot judge which solution is better, less or more detail, but it needs to be a little more consistent over the image. Now you see that whatever we do, nothing seems to work well in the general case.
Now, we could get a much better solution if we would run the algorithm with every possible starting point in the image and with every possible ordering of the wood pieces, but that would take longer than our lifetime to finish. So, what do we do? Well, the authors have two really cool heuristics to address this problem. First, we can start from the middle that usually gives us a reasonably good solution since the object of interest is often in the middle of the image and the good pieces are still available for it. Or, even better, if that does not work too well, we can look for salient regions. These are the places where there is a lot going on and try to fill them in first. As you see, both of these tricks seem to work quite well most of the time. Finally, something that works. And if you have been holding onto your paper so far, now squeeze that paper because this technique not only works, but provides us a great deal of artistic control over the results. Look at that. And that's not all. We can even control the resolution of the output, or we can create a hand-drawn geometry ourselves. I love how the authors took a really challenging problem when nothing really worked well. And still, they didn't stop until they absolutely nailed the solution. Congratulations! This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Asia. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
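For the technically curious: here is a minimal sketch of the matching idea described above, namely converting everything to grayscale, visiting the target tiles in a center-out order, and greedily assigning each tile the best remaining wood patch, which then disappears from the inventory. All names, sizes, and the random data are made up for illustration; the real system works with scanned panels, CNC-cuttable shapes, and the saliency-based ordering as well.

```python
import numpy as np

def to_gray(img):
    # Match intensities, not colors, as described above.
    return img.mean(axis=-1)

def center_out_order(h, w):
    # Heuristic in the spirit of the paper: fill tiles near the image center
    # first, while the best wood pieces are still available.
    cy, cx = (h - 1) / 2, (w - 1) / 2
    tiles = [(r, c) for r in range(h) for c in range(w)]
    return sorted(tiles, key=lambda rc: (rc[0] - cy) ** 2 + (rc[1] - cx) ** 2)

def assign_tiles(target_gray, patch_tones):
    """Greedy assignment: each wood patch can be used only once (it is cut out)."""
    h, w = target_gray.shape
    available = set(range(len(patch_tones)))
    assignment = {}
    for (r, c) in center_out_order(h, w):
        best = min(available, key=lambda i: abs(patch_tones[i] - target_gray[r, c]))
        assignment[(r, c)] = best
        available.remove(best)  # the physical piece is gone from the inventory
    return assignment

# Toy data: a 6x6 target and 36 candidate wood patches with random gray tones.
rng = np.random.default_rng(0)
target = to_gray(rng.random((6, 6, 3)))
patches = rng.random(36)  # mean gray tone of each scanned wood patch
plan = assign_tiles(target, patches)
print("tile (0, 0) gets patch", plan[(0, 0)])
```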
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. What you see here is one of my favorite kinds of paper, where we train a robot in a software simulation and then try to bring it into the real world and see if it can still navigate there. The simulation is the textbook and the real world is the exam, if you will. And it's a really hard one. So let's see if this one passes or fails. Now, the real world is messy and full of obstacles, so to even have a fighting chance of training a robot to deal with that, the simulation had better have those properties too. So let's add some hills, steps and stairs and see if it can overcome these. Well, not at first; as you see, the agent is very clumsy and can barely walk through a simple terrain. But as time passes, it grows to be a little more confident, and with that, the terrain also progressively becomes more and more difficult in order to maximize learning. That is a great life lesson right there. Now, while we look at these terrain experiments, you can start holding on to your papers, because this robot knows absolutely nothing about the outside world. It has no traditional cameras, no radar, no lidar, no depth sensors, no, no, no, none of that. Only proprioceptive sensors are allowed, which means that the only thing the robot senses is its own internal state, and that's it. Whoa! For instance, it knows about the orientation and twist of its base unit, some joint information like positions and velocities, and really not that much more, all of which is proprioceptive information. Yup, that's it. And bravo, it is doing quite well in the simulation. However, reality is never quite the same as the simulation, so I wonder if what was learned here can be used there. Let's see. Wow! Look at how well it traverses this rocky mountainside and this stream, and not even this nightmarish, slow descent gives it too much trouble. It works even if it cannot get a proper foothold and is slipping all the time, and it also learned to engage in this adorable jumpy behavior when stuck in vegetation. And it learned all this by itself. Absolute witchcraft. When looking at this table, we now understand why it still has reasonable speeds through moss and why it is slower in vegetation than in mud. Really cool. If you have been holding on to your papers so far, now squeeze that paper, because if all that wasn't hard enough, let's add an additional 10 kilogram, or 22 pound, payload and see if it can shoulder it. And let's be honest, this really should not work. Wow! Look at that! It can not only shoulder it, but it also adjusts its movement to the changes to its own internal physics. To accomplish this, it uses a concept called learning by cheating, or the teacher-student architecture, where the agent that learned in the simulator becomes the teacher. Now, after we escape the simulator into the real world, we cannot lean on the simulator anymore to try to see what works, so then what do we do? Well, here the teacher takes the place of the simulator and distills its knowledge down to train the student further. If both of them do their jobs really well, the student will pass the exam with flying colors. As you see, that is exactly the case here. This is an outrageously good paper. What a time to be alive! This episode has been supported by weights and biases. In this post, they show you how to use their reports to explain how your model works, show plots of how model versions improved, discuss bugs, and demonstrate progress towards milestones.
If you work with learning algorithms on a regular basis, make sure to check out weights and biases. Their system is designed to help you organize your experiments, and it is so good, it could shave off weeks or even months of work from your projects, and is completely free for all individuals, academics, and open source projects. This really is as good as it gets, and it is hardly a surprise that they are now used by over 200 companies and research institutions. Make sure to visit them through wnb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to weights and biases for their longstanding support, and for helping us make better videos for you. Thanks for watching, and for your generous support, and I'll see you next time.
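For the technically curious, here is a minimal sketch of the teacher-student idea mentioned above: a teacher policy that was trained with privileged simulator information labels situations, and a student policy that only sees proprioception is trained to imitate it. The dimensions, the tiny linear "policies", and the assumption that proprioception is just a slice of the privileged state are all made up for illustration; the real system uses neural networks and far richer observations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dimensions: the teacher sees privileged simulator state
# (terrain shape, contact states), the student sees proprioception only.
PRIV_DIM, PROPRIO_DIM, ACTION_DIM = 32, 16, 12

# Pretend this teacher policy was already trained inside the simulator.
W_teacher = rng.normal(size=(ACTION_DIM, PRIV_DIM))

def teacher_policy(priv):
    return W_teacher @ priv

# The student starts from scratch and is trained by plain regression
# (distillation): reproduce the teacher's action from proprioception alone.
W_student = np.zeros((ACTION_DIM, PROPRIO_DIM))

def student_policy(proprio):
    return W_student @ proprio

lr, loss = 1e-2, 0.0
for step in range(5000):
    priv = rng.normal(size=PRIV_DIM)
    proprio = priv[:PROPRIO_DIM]              # toy assumption: a partial view
    target = teacher_policy(priv)             # the teacher labels the situation
    err = student_policy(proprio) - target
    W_student -= lr * np.outer(err, proprio)  # gradient step on the squared error
    loss = float(np.mean(err ** 2))

# The loss plateaus instead of reaching zero: the student cannot see
# everything the teacher can, so it has to make do with proprioception.
print("final imitation loss:", round(loss, 3))
```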
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Let's ask these humans to jump into the pool at the same time on 3, 2, 1, go. Then this happens. And if we slow things down, like we would slow down an action sequence in a movie, look, this is one, and then two, and then three. Not even close to being at the same time. We see that this footage is completely unsalvageable. And please note the smile at the end of the video; I'll tell you in a moment why. Imagine that after a little training, the little chaps were able to jump into the pool at the same time. Congratulations! But wait a second, look, the smile is the same as in the previous footage. This is not a different video, this is the same video as before, but it has been re-timed to make it seem as if the jumps happened at the same time. Absolutely amazing! And how about this footage? They move in tandem. Totally in tandem, right? Except that this was the original footage, which is a complete mess. And now, with this technique, it could be re-timed as if this had happened. Incredible work. So, what is this wizardry? This is a new learning-based technique that can pull off nice tricks like this, and it can deal with cases that would otherwise be extremely challenging to re-time by hand. To find out why, let's have a look at this footage. Mm-hmm, got it. And now, let's try to pretend that this meeting never happened. Now, first, to be able to do this, we have to be able to recognize the test subjects of the video. This method performs pose estimation. These are the skeletons that you see here. Nothing new so far; this can be done with off-the-shelf components. However, that is not nearly enough to remove or re-time them. We need to do more. Why is that? Well, look, mirror reflections and shadows. If we were only able to remove the person, these secondary effects would give the trick away in a second. To address this issue, one of the key contributions of this new AI-based technique is that it is able to find these mirror reflections, shadows, and even more, which can be defined as motions that correlate with the test subject, or, in other words, things that move together with the humans. And if we have all of these puzzle pieces, we can use a neural renderer to remove people or even adjust the timing of these videos. And now, hold on to your papers, because it is not at all limited to shadows and mirror reflections. Remember this example? Here, the algorithm recognizes that these people cause deformations in the trampolines and is able to line them up together. Note that we need some intense and variable time-warping to make this happen. And, as an additional bonus, the photobombing human has been removed from the footage. The neural renderer used in this work is from the pix2pix paper from only three years ago, and now, with some ingenious improvements, it can be elevated to a whole new level. Huge congratulations to the authors for this incredible achievement. And all this can be done in an interactive app that is able to quickly preview the results for us. What a time to be alive! What you see here is a report on this exact paper we have talked about, which was made by weights and biases. I put a link to it in the description. Make sure to have a look. I think it helps you understand this paper better. During my PhD studies, I trained a ton of neural networks which were used in our experiments. However, over time, there was just too much data in our repositories, and what I am looking for is not data, but insight.
And that's exactly how weights and biases helps you by organizing your experiments. It is used by more than 200 companies and research institutions, including OpenAI, Toyota Research, GitHub, and more. And get this, weight and biases is free for all individuals, academics, and open source projects. Make sure to visit them through wnb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to weights and biases for their long-standing support, and for helping us make better videos for you. Thanks for watching, and for your generous support, and I'll see you next time.
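For the technically curious, here is a rough sketch of what re-timing means once the hard part is done: assume the neural network has already decomposed the video into per-person layers that include each person's shadows and reflections. Then each layer can be resampled along time with its own warp and composited back over the background. Everything below (shapes, the linear blend in time, the integer delays) is a made-up illustration, not the paper's renderer.

```python
import numpy as np

def retime_layer(layer_frames, warp):
    """Resample one person's layer in time.
    layer_frames: (T, H, W, 4) RGBA frames for this person (plus their shadows
    and reflections, which the network groups into the same layer).
    warp: maps an output frame index to a possibly fractional source index."""
    T = len(layer_frames)
    out = []
    for t_out in range(T):
        t_src = np.clip(warp(t_out), 0, T - 1)
        lo, hi = int(np.floor(t_src)), int(np.ceil(t_src))
        w = t_src - lo
        out.append((1 - w) * layer_frames[lo] + w * layer_frames[hi])
    return np.stack(out)

def composite(background, layers):
    """Alpha-composite the re-timed layers back over the background, per frame."""
    out = background.copy()
    for layer in layers:
        alpha = layer[..., 3:4]
        out = (1 - alpha) * out + alpha * layer[..., :3]
    return out

# Toy example: three "jumpers", each late by a different amount; shift their
# layers in time so that all jumps line up, then composite the final video.
T, H, W = 30, 4, 4
background = np.zeros((T, H, W, 3))
layers = [np.random.rand(T, H, W, 4) for _ in range(3)]
delays = [0, 3, 7]  # frames by which each person is late
aligned = [retime_layer(L, lambda t, d=d: t + d) for L, d in zip(layers, delays)]
video = composite(background, aligned)
print(video.shape)
```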
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Through the power of neural network-based learning algorithms, today it is possible to perform video-to-video translation. This means that, for instance, in goes a daytime video and out comes a nighttime version of the same footage. This previous method had some remarkable properties; for instance, look, you can even see the reflections of the night lights appearing on the car hood. This is reassuring news indeed, because the spark of some sort of intelligence was already there in these algorithms. And believe it or not, this is old, old news, because we have been able to do this for about three years now. Of course, it was not perfect: the results were missing a lot of details, the frames of the video were quite a bit apart, artifacts were abundant, and more. But this paper was a huge leap at the time, and today I wonder, how far have we come in three years? What can we do now that we were not able to do three years ago? Let's have a look together. This input video is much more detailed and a lot more continuous, so I wonder what we could do with this. Well, hold on to your papers and... Woohoo! Look at that! One, we have changed the sky and put a spaceship in there. That is already amazing, but look! Two, the spaceship is not stationary, but moves in harmony with the other objects in the video. And three, since the sky has changed, the lighting situation has also changed, so the colors of the remainder of the image also have to change. Let's see the before and after on that. Yes, excellent! And we can do so much more with this, for instance, put a castle in the sky... Or, you know what? Let's think big. Make it an extra planet, please. Wow! Thank you! And, wait a minute. If it is able to recolor the image after changing its surroundings, how well does it do it? Can we change the background to a dynamic one? What about a thunderstorm? Oh, yeah! That's as dynamic as it gets, and the new method handles this case beautifully. And before we look under the hood to see how all this wizardry is done, let's list our expectations. One, we expect that it has to know which pixels to change to load in a different sky model. Two, it should know how the image is changing and rotating over time. And three, some recoloring has to take place. Now let's have a look and see if we find the parts we are expecting. And, yes, it has a sky matting network. This finds the parts of the image where the sky is. There is the motion estimator, which computes the optical flow of the image and tracks the movement of the sky over time, and there is the recoloring module as well. So, there we go. This can do not only sky replacement, but detailed weather and lighting synthesis is also possible for these videos. What a time to be alive! Now, if you remember, first we looked at the results, listed our expectations, and then we looked at the architecture of this neural network. This is a technique that I try to teach to my students in my light transport simulation course at the Technical University of Vienna, and I think if you check it out, you will be surprised by how effective it is. Now, note that I was always teaching it to a handful of motivated students, and I believe that the teachings shouldn't only be available for the privileged few who can afford a college education; the teachings should be available for everyone. Free education for everyone, that's what I want.
So, the course is available free of charge for everyone, no strings attached, so make sure to click the link in the video description to get started. We write a full light simulation program from scratch there, and learn about physics, the world around us, and more. And what is perhaps even more important is that I try to teach you a powerful way of understanding difficult mathematical concepts. Make sure to have a look. What you see here is a report on this exact paper we have talked about, which was made by weights and biases. I put a link to it in the description. Make sure to have a look. I think it helps you understand this paper better. Weights and biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs, such as OpenAI, Toyota Research, GitHub, and more. And the best part is that weights and biases is free for all individuals, academics, and open source projects. It really is as good as it gets. Make sure to visit them through wnb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to weights and biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
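For the technically curious Fellow Scholars, here is a toy version of the three-module pipeline we just listed: a sky mask, a motion term that keeps the new sky glued to the camera, and a recoloring step that tints the foreground toward the new sky. In the real paper each of these is a learned network and the motion comes from optical flow; below, the mask and frames are random stand-ins and the motion is a simple integer shift.

```python
import numpy as np

def replace_sky(frame, sky_mask, sky_template, offset, relight=0.3):
    """One frame of a toy sky-replacement pipeline.

    frame:        (H, W, 3) input video frame in [0, 1]
    sky_mask:     (H, W) soft mask, 1 where the sky is (the sky matting output)
    sky_template: (H, W, 3) the new sky (spaceship, planet, thunderstorm, ...)
    offset:       (dy, dx) integer camera motion, a stand-in for optical flow
    relight:      how strongly the foreground is tinted toward the new sky
    """
    # 1) Motion: shift the sky template so it moves in harmony with the camera.
    moved_sky = np.roll(sky_template, shift=offset, axis=(0, 1))
    # 2) Matting: blend the new sky in where the mask says "sky".
    m = sky_mask[..., None]
    out = m * moved_sky + (1 - m) * frame
    # 3) Recoloring: nudge the non-sky pixels toward the new sky's average
    #    color, so the lighting of the scene matches the new environment.
    sky_color = moved_sky.reshape(-1, 3).mean(axis=0)
    out = (1 - m) * ((1 - relight) * out + relight * sky_color) + m * out
    return np.clip(out, 0, 1)

# Toy usage with random data standing in for a real frame and a learned mask.
H, W = 72, 128
frame = np.random.rand(H, W, 3)
mask = np.zeros((H, W))
mask[:30, :] = 1.0  # pretend the top third of the frame is sky
new_sky = np.random.rand(H, W, 3) * np.array([0.3, 0.3, 0.8])
result = replace_sky(frame, mask, new_sky, offset=(0, 2))
print(result.shape)
```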
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. The promise of virtual reality, or VR, is indeed truly incredible. If one day it comes to fruition, doctors could be trained to perform surgery in a virtual environment, we could train better pilots with better flight simulators, expose astronauts to virtual zero-gravity simulations, you name it. An important part of many of these is simulating walking in a virtual environment. You see, we can be located in a small room, put on a VR headset and enter a wonderful, expansive virtual world. However, as we start walking, we immediately experience a big problem. What is that problem? Well, we bump into things. As a remedy, we could make our virtual world smaller, but that would defeat the purpose. This earlier technique addresses this walking problem spectacularly by redirection. So, what is this redirection thing exactly? Redirection is a simple concept that changes our movement in the virtual world so that it deviates from our real path in the room, in a way that both lets us explore the virtual world and keeps us from bumping into walls and objects in reality in the meantime. Here you can see how the blue and orange lines deviate, which means that the algorithm is at work. With this, we can wander about in a huge and majestic virtual landscape, or a cramped bar, even when confined to a small physical room. Loving the idea. But there is more to interacting with virtual worlds than walking; for instance, look at this tech demo that requires more precise hand movements. How do we perform this? Well, the key is here. Controllers. Clearly, they work, but can we get rid of them? Can we opt for a more natural solution and just use our hands instead? Well, hold on to your papers, because this new work uses a learning-based algorithm to teach a head-mounted camera to tell the orientation of our hands at all times. Of course, the quality of the execution matters a great deal, so we have to ensure at least three things. One is that the hand tracking happens with minimal latency, which means that we see our actions immediately, with minimal delay. Two, we need low jitter, which means that the key points of the reconstructed hand should not change too much from frame to frame. This happens a great deal with previous methods, and what about the new one? Oh, yes, much smoother. Checkmark. Note that the new method also remembers the history of the hand movement, and therefore can deal with difficult occlusion situations. For instance, look at the pinky here. A previous technique would not know what's going on with it, but this new one knows exactly what is going on, because it has information on what the hand was doing a moment ago. And three, this needs to work in all kinds of lighting conditions. Let's see if it can reconstruct a range of mythical creatures in poor lighting conditions. Yes, these ducks are reconstructed just as well as the mighty Pokémon, and these scissors too. Bravo. So, what can we do with this? A great deal. For instance, we can type on a virtual keyboard, or implement all kinds of virtual user interfaces that we can interact with. We can also organize imaginary boxes, and of course, we can't leave out the Two Minute Papers favorite: going into a physics simulation and playing with it. But of course, not everything is perfect here. Look, hand-hand interactions don't work so well, so folks who prefer virtual reality applications that include washing our hands should look elsewhere. But of course, one step at a time.
This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
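For the technically curious: the jitter and history points above can be made concrete with a very small experiment. Below, jitter is measured as the average frame-to-frame keypoint displacement, and the "history" is the simplest possible version, an exponential moving average over the predicted keypoints. The data is synthetic and this is not the paper's temporal model, just an illustration of why remembering previous frames helps.

```python
import numpy as np

def jitter(keypoints):
    """Mean frame-to-frame displacement of the keypoints (lower = smoother)."""
    return float(np.linalg.norm(np.diff(keypoints, axis=0), axis=-1).mean())

def smooth(keypoints, alpha=0.5):
    """Exponential moving average over time: each frame blends the new
    prediction with the previous smoothed state, i.e. it uses history."""
    out = [keypoints[0]]
    for k in keypoints[1:]:
        out.append(alpha * k + (1 - alpha) * out[-1])
    return np.stack(out)

# Toy data: 21 hand keypoints over 100 frames, a slow motion plus per-frame noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)[:, None, None]
motion = np.concatenate([t, 0.5 * t, 0 * t], axis=-1) * np.ones((1, 21, 3))
noisy = motion + 0.01 * rng.normal(size=(100, 21, 3))

print("jitter before smoothing:", jitter(noisy))
print("jitter after smoothing :", jitter(smooth(noisy)))
```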
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karajjola Ifehir. Today we are going to have a look at the state of egocentric video conferencing. Now this doesn't mean that only we get to speak during a meeting, it means that we are wearing a camera which looks like this, and the goal is to use a learning algorithm to synthesize this frontal view of us. Now note that what you see here is the recorded reference footage, this is reality and this would need to be somehow synthesized by the algorithm. If we could pull that off, we could add a low-cost egocentric camera to smart glasses and it could pretend to see us from the front which would be amazing for hands-free video conferencing. That would be insanity. But wait a second, how is this even possible? For us to even have a fighting chance, there are four major problems to overcome here. One, this camera lens is very close to us which means that it doesn't see the entirety of the face. That sounds extremely challenging. And if that wasn't bad enough, two, we also have tons of distortion in the images or in other words, things don't look like they look in reality, we would have to account for that two. Three, it would also have to take into account our current expression, gaze, blinking, and more. Oh boy, and finally, four, the output needs to be photorealistic or even better video realistic. Remember, we don't just need one image, but a continuously moving video output. So the problem is, once again, input, egocentric view, output, synthesized, frontal view. This is the reference footage, reality, if you will, and now, let's see how this learning based algorithm is able to reconstruct it. Hello, is this a mistake? They look identical. As if they were just copied here. No, you will see in a moment that it's not a mistake. This means that the AI is giving us a nearly perfect reconstruction of the remainder of the human face. That is absolutely amazing. Now, it is still not perfect. There are some differences. So how do we get a good feel of where the inaccuracies are? The answer is a difference image. Look, regions with warmer colors indicate where the reconstruction is inaccurate compared to the real reference footage. For instance, with an earlier method by the name, picks to picks, the hair and the beard are doing fine, while we have quite a bit of reconstruction error on the remainder of the face. So, did the new method do better than this? Let's have a look together. Oh yeah, it does much better across the entirety of the face. It still has some trouble with the cable and the glasses, but otherwise, this is a clean, clean image. Bravo. Now, we talked about the challenge of reconstructing expressions correctly. To be able to read the other person is of utmost importance during a video conference. So, how good is it at gestures? Well, let's put it through an intense stress test. This is as intense as it gets without having access to Jim Carey as a test subject, I suppose. And I bet there was a lot of fun to be had in the lab on this day. And the results are outstanding. Especially if we compare it again to the picks to picks technique from 2017. I love this idea. Because if we can overcome the huge shortcomings of the egocentric camera, in return, we get an excellent view of subtle facial expressions and can deal with the tiniest eye movements, twitches, tongue movements, and more. And it really shows in the results. Now, please note that this technique needs to be trained on each of these test subjects. 
About 4 minutes of video footage is fine and this calibration process only needs to be done once. So, once again, the technique knows these people and had seen them before. But in return, it can do even more. If all of this is synthesized, we have a lot of control over the data and the AI understands what much of this data means. So, with all that extra knowledge, what else can we do with this footage? For instance, we can not just reconstruct, but create arbitrary head movement. We can guess what the real head movement is because we have a view of the background. We can simply remove it. Or from the movement of the background, we can infer what kind of head movement is taking place. And what's even better, we can not only get control over the head movement and change it, but even remove the movement from the footage altogether. And we can also remove the glasses and pretend to have dressed properly for an occasion. How cool is that? Now, make no mistake, this paper contains a ton of comparisons against a variety of other works as well. Here are some, but make sure to check them all out in the video description. Now, of course, even this new method isn't perfect, it does not work all that well in low light situations, but of course, let's leave something to improve for the next paper down the line. And hopefully, in the near future, we will be able to seamlessly getting contact with our loved ones through smart glasses and egocentric cameras. What a time to be alive! What you see here is a report of this exact paper we have talked about which was made by weights and biases. I put a link to it in the description, make sure to have a look, I think it helps you understand this paper better. If you work with learning algorithms on a regular basis, make sure to check out weights and biases. Their system is designed to help you organize your experiments and it is so good it could shave off weeks or even months of work from your projects and is completely free for all individuals, academics and open source projects. This really is as good as it gets and it is hardly a surprise that they are now used by over 200 companies and research institutions. Make sure to visit them through wnb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to weights and biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
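For the technically curious, the difference images used above are simple to compute: take the per-pixel error between the reference and the reconstruction and map it to a cold-to-warm color ramp. Here is a minimal numpy sketch; the actual figures may use a different color map and error metric.

```python
import numpy as np

def difference_heatmap(reference, reconstruction):
    """Per-pixel error magnitude mapped to a simple cold-to-warm color ramp.

    reference, reconstruction: (H, W, 3) images in [0, 1].
    Returns an (H, W, 3) heat map: blue where the reconstruction matches the
    reference, red where it is far off."""
    err = np.abs(reference - reconstruction).mean(axis=-1)  # (H, W) error
    err = err / (err.max() + 1e-8)                          # normalize to [0, 1]
    heat = np.zeros(reference.shape)
    heat[..., 0] = err          # red channel grows with the error
    heat[..., 2] = 1.0 - err    # blue channel shrinks with the error
    return heat

# Toy usage with random stand-ins for the reference and the synthesized view.
ref = np.random.rand(64, 64, 3)
recon = np.clip(ref + 0.1 * np.random.randn(64, 64, 3), 0, 1)
heat = difference_heatmap(ref, recon)
print("mean error:", float(np.abs(ref - recon).mean()), "heat map shape:", heat.shape)
```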
|
Dear Fellow Scholars, this is 2 Minute Papers with Dr. Karo Zsolnai-Fehir. Today, we are going to immerse ourselves into the wonderful art of rigid body, disentanglement, or, in simpler words, we are going to solve these puzzles. To be more exact, we'll sit and enjoy our time while a learning-based algorithm is going to miraculously solve some really challenging puzzles. Yes, this is going to be a spicy paper. Now, these puzzles are roughly ordered by difficulty, where the easier ones are on the left, and as we traverse to the right, things get more and more difficult. And get this, this new technique is able to algorithmically solve all of them. If you are like me, and you don't believe a word of what's been said, let's see if it lives up to its promise by solving three of them in increasing difficulty. Let's start with an easy one. Example number one. Here, the algorithm recognizes that we need to pull the circular part of the red piece through the blue one, and apply the appropriate rotations to make sure that we don't get stuck during this process. While we finish this sequence, please note that this video contains spoilers for some well-known puzzles. If you wish to experience them yourself, pause this video, and, I guess, buy and try them. This one was good enough to warm ourselves up, so let's hop on to the next one. Example number two. The duet puzzle. Well, that was quite a bit of a jump in difficulty, because we seem stuck right at the start. Hmm, this seems flat out impossible. Until the algorithm recognizes that there are these small notches in the puzzle, and if we rotate the red piece correctly, we may go from one cell to the next one. Great, so now it's not impossible anymore, it is more like a maze that we have to climb through. But the challenges are still not over. Where is the end point? How do we finish this puzzle? There has to be a notch on the side. Yes, there is this one, or this one, so we ultimately have to end up in one of these places. And there we go. Bravo. Experiment number three. My favorite, the enigma. Hmm, this is exactly the opposite of the previous one. It looks so easy. Just get the tiny opening on the red piece onto this part of the blue one, and we are done. Uh-oh. The curved part is in the way. Something that seems so easy now suddenly seems absolutely impossible. But fortunately, this learning algorithm does not get discouraged, and does not know what impossible is, and it finds this tricky series of rotations to go around the entirety of the blue piece and then finish the puzzle. Glorious. What a roller coaster, a hallmark of an expertly designed puzzle. And to experience more of these puzzles, make sure to have a look at the paper and its website with a super fun interactive app. If you do, you will also learn really cool new things, for instance, if you get back home in the middle of the night after a long day and two of your keys are stuck together, you will know exactly how to handle it. And all this can be done through the power of machine learning and computer graphics research. What a time to be alive. So, how does this wizardry work exactly? The key techniques here are tunnel discovery and path planning. First, a neural network looks at the puzzles and identifies where the gaps and notches are and specifies the starting position and the goal position that we need to achieve to finish the puzzle. Then, a set of collision-free key configurations are identified, after which the blooming step can commence. So, what does that do? 
Well, the goal is to be able to go through these narrow tunnels that represent tricky steps in the puzzles that typically require some unintuitive rotations. These are typically the most challenging parts of the puzzles, and the blooming step starts from these narrow tunnels and helps us reach the bigger bubbles of the puzzle. But as you see, not all roads connect, or at least not easily. The forest-connect step tries to connect these roads through collision-free paths, and now, finally, all we have to do is find the shortest path from the start to the end point to solve the puzzle. And also, according to my internal numbering system, this is two-minute papers, Episode number 478. And to every single one of you fellow scholars who are watching this, thank you so much to all of you for being with us for so long on this incredible journey. Man, I love my job, and I jump out of bed full of energy, and joy, knowing that I get to read research papers, and flip out together with many of you fellow scholars on a regular basis. Thank you. This episode has been supported by weights and biases. In this post, they show you how to visualize your psychic learned models with just a few lines of code. Look at all those beautiful visualizations. So good! You can even try an example in an interactive notebook through the link in the video description. During my PhD studies, I trained a ton of neural networks, which were used in our experiments. However, over time, there was just too much data in our repositories, and what I am looking for is not data, but insight. And that's exactly how weights and biases helps you by organizing your experiments. It is used by more than 200 companies and research institutions, including OpenAI, Toyota Research, GitHub, and more. And get this, weights and biases is free for all individuals, academics, and open source projects. Make sure to visit them through wnb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to weights and biases for their longstanding support, and for helping us make better videos for you. Thanks for watching, and for your generous support, and I'll see you next time.
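For the technically curious, the very last step described above, finding the shortest path through the roadmap of collision-free configurations, is classic graph search. Here is a minimal Dijkstra sketch over a made-up roadmap, where the "tunnel" edge stands in for the narrow, tricky rotations; the real configurations live in a continuous six-dimensional pose space, of course.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over a roadmap: graph maps node -> list of (neighbor, cost)."""
    dist, prev = {start: 0.0}, {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nxt, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(queue, (nd, nxt))
    if goal not in dist:
        return None
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy roadmap: "start" and "goal" are the puzzle's initial and solved
# configurations, the bubbles are easy collision-free regions, and the
# tunnel edge is the narrow, unintuitive rotation in between.
roadmap = {
    "start":    [("bubble_a", 1.0)],
    "bubble_a": [("tunnel_1", 3.0), ("bubble_b", 1.0)],
    "bubble_b": [("tunnel_1", 1.5)],
    "tunnel_1": [("goal", 1.0)],
}
print(shortest_path(roadmap, "start", "goal"))
```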
|
Dear Fellow Scholars, hold onto your papers, because we have an emergency situation. Everybody can make deepfakes now by recording a voice sample such as this one, and the lips of the target subject will move as if they themselves were saying it. Not bad, huh? And now let's see what else this new method can do. First, let's watch this short clip of a speech and make sure to pay attention to the fact that the louder voice is the English translator; if you pay attention, you can hear the chancellor's original voice in the background too. Framework for our future cooperation. Dear Emmanuel. Only 8 months ago you were awarded the... So what is the problem here? Strictly speaking, there is no problem here, this is just the way the speech was recorded. However, what if we could recreate this video in a way that the chancellor's lips would be synced not to her own voice, but to the voice of the English interpreter? This would give the impression that the speech was given in English, and the video content would follow what we hear. Now, that sounds like something straight out of a science fiction movie, perhaps even with today's advanced machine learning techniques, but let's see if it's possible. This is a state-of-the-art technique from last year that attempts to perform this. Um, development in a very different history. We signed this on the 56th anniversary of the Élysée Treaty of 1963. Hmm, there are extraneous lip movements, which are the remnants of the original video, so much so that she seems to be giving two speeches at the same time. Not too convincing. So is this not possible to pull off? Well, now hold onto your papers and let's see how this new paper does on the same problem. On Germany, but also for a starting point of a very different development in a very different history. We signed this on the 56th anniversary of the Élysée Treaty of 1963. Wow, now that's significantly better. The remnants of the previous speech are still there, but the footage is much, much more convincing. What's even better is that the previous technique was published just one year ago by the same research group. Such a great leap in just one year. My goodness. So apparently this is possible. But I would like to see another example, just to make sure. That's what he'd like to on the first issue, it's not for me to comment on the visit of Madame... Checkmark. So far this is an amazing leap, but believe it or not, this is just one of the easier applications of the new model. So let's see what else it can do. Two, for instance, many of us are sitting at home yearning for some learning materials, but the vast majority of these were recorded in only one language. What if we could re-dub famous lectures into many other languages? The computer vision is moving on the basis of the deep learning very quickly. For example, we use the same color as the car now to make a lot of things. Look at that. Any lecture could be available in any language and look as if it were originally recorded in that foreign language, as long as someone says the words, which can also be kind of automated through speech synthesis these days. So, it clearly works on real characters, but are you thinking what I am thinking? Three, what about lip-syncing animated characters? Imagine if a line has to be changed in a Disney movie; can we synthesize new video footage without calling in the animators for yet another all-nighter? Let's give it a try. I'll go around. Wait for my call. Indeed we can, loving it. Let's do one more.
Four, of course, we have a lot of these meme gifs on the internet. What about re-dubbing those with an arbitrary line of our choice? Yup, that is indeed also possible. Well done. And this is such a leap just one work down the line from the 2019 paper that I can only imagine what results we will see one more paper down the line. It not only does what it does better, but it can also be applied to a multitude of problems. What a time to be alive! When we look under the hood, we see that the two key components that enable this wizardry are here and here. So what does this mean exactly? It means that we jointly improve the quality of the lip-syncing and the visual quality of the video. These two modules curate the results offered by the main generator neural network and reject solutions that don't have enough detail or don't match the speech that we hear, and thereby they steer it towards much higher quality solutions. If we continue this training process for 29 hours for the lip-syncing discriminator, we get these incredible results. Now let's have a quick look at the user study, and humans appear to almost never prefer the older method compared to this one. I tend to agree. If you consider these forgeries to be defects, then there you go. Useful defects that can potentially help people around the world stranded at home to study and improve themselves. Imagine what good this could do. Well done. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000 and V100 instances, and hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
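Before we move on, a sketch for the technically curious: the two curating modules we just talked about can be thought of as two extra penalty terms added to the generator's training objective, one from a lip-sync expert and one from a visual quality critic. The function bodies and the weights below are made-up stand-ins so that the sketch runs; in the real system all three terms come from neural networks.

```python
def total_generator_loss(fake_frames, real_frames, audio,
                         w_recon=1.0, w_sync=0.03, w_qual=0.07):
    """Schematic of a generator objective with two extra "expert" critics.

    reconstruction_loss: how far the generated frames are from ground truth
    sync_expert:         penalizes frames whose lips do not match the audio
    quality_critic:      penalizes blurry or artifact-ridden frames
    The three functions and the weights are stand-ins for illustration."""
    return (w_recon * reconstruction_loss(fake_frames, real_frames)
            + w_sync * sync_expert(fake_frames, audio)
            + w_qual * quality_critic(fake_frames))

# Minimal stand-ins so the sketch runs end to end on toy numbers.
def reconstruction_loss(fake, real):
    return sum((f - r) ** 2 for f, r in zip(fake, real)) / len(fake)

def sync_expert(fake, audio):
    # Pretend the expert outputs a distance between lip motion and speech.
    return abs(sum(fake) / len(fake) - audio)

def quality_critic(fake):
    # Pretend the critic scores flat, detail-free frames as more "fake".
    return 1.0 / (1.0 + max(fake) - min(fake))

print(total_generator_loss(fake_frames=[0.2, 0.4, 0.6],
                           real_frames=[0.25, 0.45, 0.55],
                           audio=0.4))
```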
|
Dear Fellow Scholars, this is 2 Minute Papers with Dr. Karo Zsolnai-Fehir. Today we are going to talk about image colorization. This is a problem where we take an old, old black and white photo, run it through one of these amazing new learning-based algorithms, and out comes an image that is properly colored. The obvious application of this is, of course, image restoration. What you see here is some amazing results with this new method, so clearly this can be done very well and you will find out in a moment how challenging this problem is. But there are also less obvious applications, for instance, image compression too. Just imagine that we wouldn't have to transmit colored images over the internet, black and white photos would be fine, and if there is an AIB's algorithm in every computer, they would be able to restore the colors perfectly. This would save a lot of bandwidth and energy, and it would be a glorious thing to do. So how well do the current learning-based algorithms do at image colorization? Wow, that is a hard, hard question, and let's find out together why. First, let's have a look at the results of this new method that you saw so far against previously existing techniques. Here is the black and white input image, and here is the ground truth output that we have concealed from the algorithms. Only we see the result, this will remain our secret, and the algorithms only have access to the black and white input. Let's start. Yep, these are all learning-based methods, so they all appear to know what a strawberry is, and they color it accordingly. So far, so good. However, this problem is much, much harder than just colorizing strawberries. We have grapes too. That does not sound like much, but there are many problems with grapes. For instance, there are many kinds of grapes, and they are also translucent, and therefore their appearance depends way more on the lighting of the scene. That's a problem, because the algorithm not only has to know what objects are present in this image, but what the lighting around them looks like, and how their material properties should interact with this kind of lighting. Goodness, that is a tremendously difficult problem. Previous methods did a reasonably good job at colorizing the image, but the grapes remain a challenge. And now, hold on to your papers, and let's see the new method. Wow, just look at that. The translucency of the grapes is captured beautifully, and the colors are very close to the ground truth. But that was just one example. What if we generate a lot of results and show them to a few people? What do the users say? Let's compare the Deoldify technique with this new method. Deoldify is an interesting specimen, as it combines three previously published papers really well, but it is not a published paper. Fortunately, its source code is available, and it is easy to compare results against it. So, let's see how it fares against this new technique. Apparently, they trade blows, but more often than not, the new method seems to come out ahead. Now note that we have a secret, and that secret is that we can see the reference images, which are hidden from the algorithms. This helps us a great deal, because we can mathematically compare the results of the learning algorithms to the reference. So, what do the numbers say? Yep, now you see why I said that it is very hard to tell which algorithm is the best, because they all trade blows, and depending on how we measure the difference between two images, we get a different winner. 
All this is true, until we have a look at the results with this new paper. If you have been holding on to your papers, now squeeze that paper, because this new method smokes a competition on every data set, regardless of what we are measuring. So, what is this black magic? What is behind this wizardry? This method uses an of the shelf object detection module that takes the interesting elements out of the image, which are then colorized one by one. Of course, this way we haven't colorized the entire image, so parts of it would remain black and white, which is clearly not what we are looking for. As a remedy, let's color the entire image independently from the previous process. Now, we have an OK quality result, where the colors have reached the entirety of the image, but now, what do we do? We also colorize the objects independently, so some things are colorized twice, and they are different. This doesn't make any sense whatsoever, until we introduce another fusion module to the process that stitches these overlapping results into one coherent output. And the results are absolutely amazing. So, how much do we have to wait to get all this? If we take a small-ish image, the colorization can take place five times a second, and just two more papers down the line, this will easily run in real time, I am sure. You know what? With the pace of progress in machine learning research today, maybe even just one more paper down the line. So, what about the limitations? You remember that this contains an object detector, and if it goes haywire, all bets are off, and we might have to revert to the only OK full image colorization method. Also, note that all of these comparisons showcase image colorization, while the oldify is also capable of colorizing videos as well. But of course, one step at a time. What a time to be alive! What you see here is a report of this exact paper we have talked about, which was made by weights and biases. I put a link to it in the description, make sure to have a look, I think it helps you understand this paper better. Weight and biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs, such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you have an open source, academic, or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through www.nb.com slash papers, or click the link in the video description to start tracking your experiments in five minutes. Our thanks to weights and biases for their long-standing support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
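For the technically curious, the fusion module described above can be sketched very compactly: the full-image colorization provides colors everywhere, and wherever the object detector's instance masks are strong, the per-instance colorization takes over. The real fusion is learned and operates on network features; below is only the mask-blending intuition, with made-up data standing in for the network outputs.

```python
import numpy as np

def fuse_colorizations(full_image_ab, instance_abs, instance_masks):
    """Blend a full-image colorization with per-instance colorizations.

    full_image_ab:  (H, W, 2) color channels predicted for the whole image
    instance_abs:   list of (H, W, 2) color channels, one per detected object
    instance_masks: list of (H, W) soft masks from the object detector
    Where an instance mask is strong, its own colorization wins; elsewhere
    the full-image result fills in, so the output covers the entire frame."""
    out = full_image_ab.copy()
    for ab, mask in zip(instance_abs, instance_masks):
        m = mask[..., None]
        out = m * ab + (1 - m) * out
    return out

# Toy usage: one detected object in the middle of a 64x64 frame.
H, W = 64, 64
full = np.zeros((H, W, 2)) + np.array([0.1, -0.2])  # background color guess
inst = np.zeros((H, W, 2)) + np.array([0.6, 0.3])   # object color guess
mask = np.zeros((H, W))
mask[20:44, 20:44] = 1.0
fused = fuse_colorizations(full, [inst], [mask])
print("object pixel:", fused[32, 32], "background pixel:", fused[0, 0])
```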
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to see a lot of physics simulations with many, many collisions. In particular, you will see a lot of beautiful footage that contains contact between thin shells and rigid bodies. In this simulation program, at least one of these objects will always be represented as a signed distance field. This representation is useful because it helps us rapidly compute whether something is inside or outside of this object. However, a collision here takes two objects, of course, so the other object will be represented as a triangle mesh, which is perhaps the most common way of storing object geometry in computer graphics. However, with this, we have a problem. Signed distance fields are great, and triangle meshes are also great for a number of applications; however, computing where they overlap when they collide is still slow and difficult. If that does not sound bad enough, it gets even worse than that. How so? Let's have a look together. Experiment number one. Let's try to intersect this cone with this rubbery sheet using a traditional technique, and there is only one rule: no poking through is allowed. Well, guess what just happened? This earlier technique is called point sampling, and we either have to check too many points in the two geometries against each other, which takes way too long and still fails, or we skimp on some of them, but then this happens. Important contact points go missing. Not good. Let's see how this new method does with this case. Now that's what I'm talking about. No poking through anywhere to be seen, and let's have another look. Wait a second. Are you seeing what I am seeing? Look at this part. After the first collision, the contact points are moving ever so slightly, many, many times, and the new method is not missing any of them. Then things get a little out of hand, and it still works perfectly. Amazing. I can only imagine how many of these interactions the previous point sampling technique would miss, but we won't know, because it has already failed long ago. Let's do another one. Experiment number two. Dragon versus cloth sheet. This is the previous point sampling method. We now see that it can find some of the interactions, but many others go missing, and due to the anomalies, we can't continue the animation by pulling the cloth sheet off the dragon, because it is stuck. Let's see how the new method fared in this case. Oh yeah. Nothing pokes through, and therefore we can now continue the animation by doing this. Excellent. Experiment number three. Robo-curtain. Point sampling. Oh no. This is a disaster. And now hold on to your papers and marvel at the newly proposed method. Just look at how beautifully we can pull the robo-curtain through this fine geometry. Loving it. So, of course, if you are a seasoned Fellow Scholar, you probably want to know how much computation we have to do with the old and new methods. How much longer do I have to wait for the new, improved technique? Let's have a look together. Each rigid shell here is a mesh that uses 129,000 triangles, and the old point sampling method took 15 milliseconds to compute the collisions, and this time, it has done reasonably well. What about the new one? How much more computation do we have to perform to make sure that the simulations are robust? Please stop the video and make a guess. I'll wait. Alright, let's see, and the new one does it in half a millisecond. Half a millisecond.
It is not slower at all, quite the opposite, 30 times faster. My goodness. Huge congratulations on yet another masterpiece from scientists at NVIDIA and the University of Copenhagen. While we look at some more results in case you're wondering, the authors used NVIDIA's omniverse platform to create these amazing rendered worlds. And now, with this new method, we can infuse our physics simulation programs with the robust and blazing fast collision detector and I truly can't wait to see where talented artists will take these tools. What a time to be alive. This episode has been supported by weights and biases. In this post, they show you how to use their system to visualize how well people are doing in the draw-thruarch benchmark. What's more, you can even try an example in an interactive notebook through the link in the video description. Weight and biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you have an open source, academic or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wnb.com slash papers or click the link in the video description to start tracking your experiments in five minutes. Our thanks to weights and biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
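For the technically curious Fellow Scholars, here is why the signed distance field representation is so convenient: asking whether a mesh vertex is inside the other object is a single signed-distance lookup. The sketch below does exactly that naive per-vertex test against a toy sphere SDF, which is essentially the point sampling idea whose failure cases we saw earlier; the paper's contribution is a much smarter and faster contact query than this, so treat it only as an illustration of the representation.

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Signed distance from each point to a sphere: negative means inside."""
    return np.linalg.norm(points - center, axis=-1) - radius

def find_contacts(mesh_vertices, sdf, max_dist=0.0):
    """Report vertices that penetrate (or touch) the SDF-represented object."""
    d = sdf(mesh_vertices)
    hit = d <= max_dist
    return mesh_vertices[hit], d[hit]  # contact points and penetration depths

# Toy "cloth": a grid of vertices resting at height 0.8 over a unit sphere
# centered at the origin, so the middle of the grid pokes into the sphere.
xs, ys = np.meshgrid(np.linspace(-2, 2, 20), np.linspace(-2, 2, 20))
cloth = np.stack([xs, ys, 0.8 * np.ones_like(xs)], axis=-1).reshape(-1, 3)
points, depths = find_contacts(cloth, lambda p: sphere_sdf(p, np.zeros(3), 1.0))
print(len(points), "vertices in contact, deepest penetration:", float(depths.min()))
```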
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, with the power of computer graphics research, we can use our computers to run fluid simulations, simulate immersing a selection of objects into jelly, or tear meat in a way that, much like in reality, it tears along the muscle fibers. If we look at the abstract of this amazing new paper, we see this, quoting: this allows us to trace the high-speed motion of objects colliding against curved geometry, to reduce the number of constraints, to increase the robustness of the simulation, and to simplify the formulation of the solver. What? This sounds impossible, or at the very least, outrageously good. Let's look at three examples of what it can do and see for ourselves whether it lives up to its promise. One, it can simulate a steering mechanism full of joints and contacts. Yup, an entire servo steering mechanism is simulated with a prescribed mass ratio. Loving it. I hereby declare that it passes inspection, and now we can take off for some off-roading. All of the movement is simulated really well and, wait a minute. Hold on to your papers. Are you seeing what I am seeing? Look, even the tire deformations are part of the simulation. Beautiful. And now, let's do a stress test and race through a bunch of obstacles and see how well those tires can take it. At the end of the video, I will tell you how much time it takes to simulate all this, and note that I had to look three times because I could not believe my eyes. Two, restitution. Or, in other words, we can smash one marble into a bunch of others, and their combined velocity will be correctly computed. We know for a fact that the computations are correct because, when I stop the video here, you can see that the marbles themselves are smiling. The joys of curved geometry and specular reflections. Of course, this is not true, because if we attempt to do the same with a classical earlier technique by the name of Position Based Dynamics, this would happen. Yes, the velocities became erroneously large, and the marbles jump off of the wire. And they still appear to be very happy about it. Of course, with the new technique, the simulation is much more stable and realistic. Talking about stability, is it stable only in a small-scale simulation, or can it take a huge scene with lots of interactions? Would it still work? Well, let's run a stress test and find out. Ha ha, this animation can run all day long and not one thing appears to behave incorrectly. Loving this. Three, it can also simulate these beautiful high-frequency rolls that we often experience when we drop a coin on a table. This kind of interaction is very challenging to simulate correctly because of the high-frequency nature of the motion and the curved geometry that interacts with the table. I would love to see a technique that algorithmically generates the sound for this. I can almost hear its sound in my head. Believe it or not, this should be possible and is subject to some research attention in computer graphics. The animation here was given, but the sounds were algorithmically generated. Listen. Let me know in the comments if you are one of our OG Fellow Scholars who were here when that episode was published hundreds of videos ago. So, how long do we have to wait to simulate all of these crazy physical interactions? We mentioned that the tires are stiff and take a great deal of computation to simulate properly. So, as always, all-nighters, right? Nope. Look at that. Holy mother of papers.
The car example takes only 18 milliseconds to compute per frame, which means 55 frames per second. Goodness. Not only do we not need an all-nighter, we don't even need to leave for a coffee break. And the rolling marbles took even less and, woohoo! The high-frequency coin example needs only one third of a millisecond, which means that we can generate more than 3,000 frames with it per second. We not only don't need an all-nighter or a coffee break, we don't even need to wait at all. Now, at the start of the video, I noted that the claim in the abstract sounds almost outrageous. It is because it promises to be able to do more than previous techniques, simplify the simulation algorithm itself, make it more robust, and do all this while being blazing fast. If someone told me that there is a work that does all this at the same time, I would say, give me that paper immediately, because I do not believe a word of it. And yet, it really lives up to its promise. Typically, as a research field matures, we see new techniques that can do more than previous methods, but the price to be paid for it is in the form of complexity. The algorithms get more and more involved over time, and with that, they often get slower and less robust. The engineers in the industry have to decide how much complexity they are willing to shoulder to be able to simulate all of these beautiful interactions. Don't forget, these code bases have to be maintained and improved for many, many years, so choosing a simple base algorithm is of utmost importance. But here, none of these factors need to be considered because there is nearly no trade-off here. It is simpler, more robust, and better at the same time. It really feels like we are living in a science fiction world. What a time to be alive! Huge congratulations to scientists at NVIDIA and the University of Copenhagen for this. Don't forget, they could have kept the results for themselves, but they chose to share the details of this algorithm with everyone, free of charge. Thank you so much for doing this. What you see here is a report for a previous paper that we covered in this series, which was made by Weights and Biases. Weights and Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you have an open source, academic, or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wnb.com slash papers, or click the link in the video description to start tracking your experiments in five minutes. Our thanks to Weights and Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
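A brief, hedged aside on the Position Based Dynamics baseline mentioned in this episode: at its core, a position-based solver predicts new positions, projects them back onto the constraints, and then derives velocities from the corrected positions, often using many small substeps per frame. Below is a minimal toy sketch of that idea for a single particle on a distance constraint, a pendulum. It only illustrates the position-based family in general, not the authors' rigid body solver, and every parameter value here is made up for the example.

```python
import numpy as np

# Toy position-based dynamics: one particle on a distance constraint (a pendulum),
# integrated with many small substeps per frame. Illustrative sketch only;
# this is not the rigid body solver from the paper.

def simulate(frames=60, frame_dt=1.0 / 60.0, substeps=20):
    anchor = np.array([0.0, 0.0])     # fixed attachment point
    x = np.array([1.0, 0.0])          # particle position
    v = np.array([0.0, 0.0])          # particle velocity
    g = np.array([0.0, -9.81])        # gravity
    rest_len = 1.0                    # target length of the distance constraint
    dt = frame_dt / substeps          # small substep

    for _ in range(frames):
        for _ in range(substeps):
            x_prev = x.copy()
            v = v + dt * g            # integrate external forces
            x = x + dt * v            # predict position
            # Project the constraint C(x) = |x - anchor| - rest_len = 0.
            d = x - anchor
            dist = np.linalg.norm(d)
            if dist > 1e-9:
                x -= (dist - rest_len) * (d / dist)
            v = (x - x_prev) / dt     # derive velocity from corrected positions
    return x

print(simulate())  # the particle stays (approximately) on the unit circle around the anchor
```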
|
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér, it is time for some fluids. Hmm-hmm. As many of you know, in this series we often talk about fluid simulations, and sometimes the examples showcase a fluid splash, but not much else. However, in real production environments, these simulations often involve complex scenes with many objects that interact with each other, and therein lies the problem. Computing these interactions is called coupling, and it is very difficult to get right, but is necessary for many of the beautiful scenes you will see throughout this video. Getting this right is of utmost importance if we wish to create a realistic simulation where fluids and solids interact. So the first question would be, as many of these techniques build upon the material point method, or MPM in short, why not just use that? Well, let's do exactly that and see how it does on this scene. Let's drop the liquid ball on the bunny and... Uh-oh. A lot of it has now stuck to the bunny. This is not supposed to happen. So, what about the improved version of MPM? Yep, still too sticky. And now, let's have a look at how this new technique handles this situation. I want to see dry and floppy bunny ears. Yes, now that's what I'm talking about. Now then, that's great, but what else can this do? A lot more. For instance, we can engage in one of the favorite pastimes of the computer graphics researcher, which is, of course, destroying objects in a spectacular manner. This is going to be a very challenging scene. Ouch. And now, let physics take care of the rest. This was a harrowing, but beautiful simulation. And we can try to challenge the algorithm even more. Here, we have three elastic spheres filled with water, and now, watch how they deform as they hit the ground and how the water gushes out exactly as it should. And now, hold on to your papers, because there is a great deal to be seen in this animation, but the most important part remains invisible. Get this. All three spheres use a different hyperelasticity model to demonstrate that this new technique can be plugged into many existing techniques. And it works so seamlessly that I don't think anyone would be able to tell the difference. And it can do even more. For instance, it can also simulate wet sand. Wow. And I say wow, not only because of this beautiful result, but there is more behind it. If you are one of our hardcore long-time Fellow Scholars, you may remember that three years ago, we needed an entire paper to pull this off. This algorithm is more general and can simulate this kind of interaction between liquids and granular media as an additional side effect. We can also simulate dropping this creature into a piece of fluid, and as we increase the density of the creature, it sinks in, in a realistic manner. While we are lifting frogs and helping an elastic bear take a bath, let's look at why this technique works so well. The key to achieving these amazing results in a reasonable amount of time is that this new method is able to find these interfaces where the fluids and solids meet, and handles their interactions in a way that we can advance the time in our simulation in larger steps than previous methods. This leads to not only these amazingly general and realistic simulations, but they also run faster. Furthermore, I am very happy about the fact that now we can not only simulate these difficult phenomena, but we don't even have to implement a technique for each of them.
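As a rough illustration of what "plugging a different hyperelasticity model into the same solver" can mean, here is a sketch that evaluates two standard elastic energy densities, Neo-Hookean and fixed corotated, from the same deformation gradient through one shared interface. The function names and material parameters are my own placeholders; this is not the paper's coupling algorithm, only the general idea of swappable material models.

```python
import numpy as np

# Two standard hyperelastic energy densities evaluated from a deformation gradient F.
# Parameter values and function names are illustrative placeholders.

def lame_parameters(youngs=1e5, poisson=0.3):
    mu = youngs / (2.0 * (1.0 + poisson))
    lam = youngs * poisson / ((1.0 + poisson) * (1.0 - 2.0 * poisson))
    return mu, lam

def neo_hookean_energy(F, mu, lam):
    J = np.linalg.det(F)
    return 0.5 * mu * (np.trace(F.T @ F) - 3.0) - mu * np.log(J) + 0.5 * lam * np.log(J) ** 2

def corotated_energy(F, mu, lam):
    # Rotation from SVD; assumes det(F) > 0, i.e. small deformations in this sketch.
    U, _, Vt = np.linalg.svd(F)
    R = U @ Vt
    J = np.linalg.det(F)
    return mu * np.sum((F - R) ** 2) + 0.5 * lam * (J - 1.0) ** 2

mu, lam = lame_parameters()
F = np.eye(3) + 0.05 * np.random.default_rng(0).normal(size=(3, 3))  # small random deformation
for model in (neo_hookean_energy, corotated_energy):
    # The surrounding solver only needs an energy (and its derivatives), so the
    # material model can be swapped without touching the rest of the simulation.
    print(model.__name__, model(F, mu, lam))
```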
We can take this one and simulate a wide variety of fluid-solid interactions. What a time to be alive! This episode has been supported by Weights and Biases. In this post, they show you how to use their system to train a deep reinforcement learner to become adept at the cart-pole balancing problem. This is the kind of algorithm that was able to master Atari Breakout, which we often talk about in this series. Weights and Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs, such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you have an open source, academic, or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wnb.com slash papers, or click the link in the video description to start tracking your experiments in 5 minutes. Our thanks to Weights and Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. Today, we are living in the advent of neural network-based image generation algorithms. What you see here is some super high-quality results from a technique developed by scientists at NVIDIA called StyleGAN2. Right? All of these were generated by a learning algorithm. And while generating images of this quality is a great achievement, if we have an artistic vision, we wonder: can we bend these images to our will? Can we control them? Well, kind of, and one of the methods that enables us to do that is called image interpolation. This means that we have a reference image for style and a target image, and with this we can morph one human face into another. This is sufficient for some use cases, however, if we are looking for more elaborate edits, we hit a wall. Now, it's good that we already know what StyleGAN is, because this new work builds on top of that and shows exceptional image editing and interpolation abilities. Let's start with the image editing part. With this new work, we can give anyone glasses, and a smile, or even better, transform them into a variant of the Mona Lisa. Beautiful! The authors of the paper call this process semantic diffusion. Now, let's have a closer look at the expression- and pose-change possibilities. I really like that we have fine-grained control over these parameters, and what's even better, we don't just have a start and end point, but all the intermediate images make sense and can stand on their own. This is great for pose and expression, because we can control how big of a smile we are looking for, or even better, we can adjust the age of the test subject with remarkable granularity. Let's go all out! I like how Mr. Cumberbatch looks nearly the same as a baby, we might have a new mathematical definition for babyface right there, and apparently Mr. DiCaprio scores a bit lower on that, and I would say that both results are quite credible. Very cool! And now, onto image interpolation. What does this new work bring to the table in this area? Previous techniques are also pretty good at morphing, until we take a closer look at them. Let's continue our journey with three interpolation examples with increasing difficulty. Let's see the easy one first. I was looking for a morphing example with long hair, you will see why right away. This is how the older method did. Uh oh, one more time. Do you see what I see? If I stop the process here, you see that this is an intermediate image that doesn't make sense. The hair over the forehead just suddenly vanishes into the ether. Now, let's see how the new method deals with this issue. Wow, much cleaner, and I can stop nearly anywhere and leave the process with a usable image. Easy example, checkmark. Now, let's see an intermediate-level example. Let's go from an old black and white Einstein photo to a recent picture with colors and stop the process at different points, and... Yes, I prefer the picture created with the new technique close to every single time. Do you agree? Let me know in the comments below. Intermediate example, checkmark. And now onwards to the hardest, nastiest example. This is going to sound impossible, but we are going to transform the Eiffel Tower into the Tower Bridge. Yes, that sounds pretty much impossible. So let's see how the conventional interpolation technique did here. Well, that's not good. I would argue that nearly none of the images showcased here would be believable if we stopped the process and took them out.
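For context, the "conventional interpolation" being critiqued here boils down to blending two latent codes and decoding each blend. Here is a minimal sketch of that baseline, with linear and spherical interpolation between two latent vectors; the `decode` function below is only a dummy stand-in for a pretrained generator such as StyleGAN2, so treat the whole thing as an illustration of the idea rather than a working image pipeline.

```python
import numpy as np

# Baseline latent-space morphing: blend two latent codes and decode each blend.
# 'decode' is a placeholder for a real pretrained generator; here it is a dummy
# function so the sketch runs on its own.

def lerp(z0, z1, t):
    return (1.0 - t) * z0 + t * z1

def slerp(z0, z1, t, eps=1e-7):
    # Spherical interpolation, often preferred for Gaussian latent spaces.
    a = z0 / np.linalg.norm(z0)
    b = z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    if omega < eps:
        return lerp(z0, z1, t)
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

def decode(z):
    # Dummy "generator": a real one would map the latent code to an image.
    return np.tanh(z[:4])

rng = np.random.default_rng(0)
z_start, z_end = rng.normal(size=512), rng.normal(size=512)
frames = [decode(slerp(z_start, z_end, t)) for t in np.linspace(0.0, 1.0, 8)]
print(np.round(frames[0], 3), np.round(frames[-1], 3))
```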
And let's see the new method. Hmm, that makes sense. We start with one tower, then two towers grow from the ground, and look. Wow, the bridge slowly appears between them. That was incredible. While we look at some more results, what really happened here? At the risk of simplifying the contribution of this new paper, we can say that during interpolation, it ensures that we remain within the same domain for the intermediate images. Intuitively, as a result, we get less nonsense in the outputs and can pull off morphing not only between human faces, but even go from a black and white photo to a colored one. And what's more, it can even deal with completely different building types. Or, you know, just transform people into Mona Lisa variants. Absolutely amazing. What a time to be alive. What you see here is a report of this exact paper we have talked about, which was made by Weights and Biases. I put a link to it in the description. Make sure to have a look. I think it helps you understand this paper better. Weights and Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you have an open source, academic or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wnb.com slash papers or click the link in the video description to start tracking your experiments in five minutes. Our thanks to Weights and Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. If we study the laws of fluid motion and implement them in a computer program, we can create and enjoy these beautiful fluid simulations. And not only that, but today, with the amazing progress in computer graphics research, we can even enrich our physics simulations with anisotropic damage and elasticity. So, what does that mean exactly? This means that we can simulate more extreme topological changes in these virtual objects. This leads to better material separation when the damage happens. So, it appears that today we can do a great deal, but these techniques are pretty complex, take quite a while to compute, and they typically run on your processor. That's a pity, because many powerful consumer computers also have a graphics card, and if we could reformulate some of these algorithms to be able to run them on those, they would run significantly faster. So, do we have any hope for that? Well, today's paper promises a new particle data structure that is better suited for number crunching on the graphics card and is, hence, much faster than its predecessors. As a result, this runs the material point method, the algorithm that is capable of simulating these wondrous things you are seeing here, not only on your graphics card, but the work on one problem can also be distributed between many, many graphics cards. This means that we get crushing concrete, falling soil, candy balls, sand armadillos, oh my, you name it, and all this much faster than before. Now, since these are some devilishly detailed simulations, please do not expect several-frames-per-second kind of performance, we are still in the seconds-per-frame region, but we are not that far away. For instance, hold onto your papers, because this candy ball example contains nearly 23 million particles, and despite that, it runs in about 4 seconds per frame on a system equipped with 4 graphics cards. 4 seconds per frame. My goodness, if somebody told this to me today without showing me this paper, I would have not believed a word of it. But there is more. You know what? Let's double the number of particles and pull up this dam break scene. What you see here is 48 million particles that run in 15 seconds per frame. Let's do even more. These sand armadillos contain a total of 55 million particles and take about 30 seconds per frame. And in return, look at that beautiful mixture of the two sand materials. And with half a minute per frame, that's a great deal. I'll take this any day of the week. And if we wish to simulate crushing this piece of concrete with a hydraulic press, that will take nearly 100 million particles. Just look at that footage. This is an obscene amount of detail, and the price to be paid for this is nearly 4 minutes of simulation time for every frame that you see on the screen here. Four minutes, you say? Hmm, that's a little more than expected. Why is that? We had several seconds per frame for the others, not minutes per frame. Well, it is because the particle count matters a great deal, but that's not the only consideration for such a simulation. For instance, here you see, with delta t, something that we call the time step size. The smaller this number is, the tinier the time steps with which we can advance the simulation when computing every interaction, and hence the more steps there are to compute. In simpler words, generally, the time step size is also an important factor in the computation time, and the smaller this is, the slower the simulation will be.
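To make that last point concrete, here is a tiny back-of-the-envelope sketch, under the simplifying assumption that per-frame work scales with the particle count times the number of time steps needed per frame. The particle counts come from the episode; the delta t values and the cost model are placeholders, not measurements from the paper.

```python
import math

# Back-of-the-envelope cost model: per-frame work ~ particles * substeps,
# where substeps = frame duration / dt. dt values are illustrative placeholders.

def substeps_per_frame(frame_dt, dt):
    return math.ceil(frame_dt / dt)

frame_dt = 1.0 / 24.0  # one frame of a 24 fps animation

scenes = {
    # name: (particle count, time step size dt)
    "candy_bowl":     (23e6, 1e-4),
    "concrete_press": (100e6, 2e-5),  # 5x smaller dt -> 5x more steps per frame
}

for name, (particles, dt) in scenes.items():
    n = substeps_per_frame(frame_dt, dt)
    print(f"{name}: {n} steps/frame, relative work ~ {particles * n:.3g}")
```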
As you see, we have to simulate 5 times more steps to make sure that we don't miss any particle interactions, and hence this takes much longer. Now, this one appears to be perhaps the simplest simulation of the bunch, isn't it? No, no, no, quite the opposite. If you have been holding onto your paper so far, now squeeze that paper and watch carefully. There we go. So, what is the big deal here? These are just around 6,000 bombs, which is not a lot. However, wait a minute. Each bomb is a collection of particles, giving us a total of not merely 6,000, but a whopping 134 million particles, and hence we may think that it's nearly impossible to perform in a reasonable amount of time. The time steps don't have to be that small for this one, so we can do it in less than one minute per frame. This was nearly impossible when I started my PhD, and today, less than a minute for one frame. It truly feels like we are living in a science fiction world. What a time to be alive. I also couldn't resist creating a slow-motion version of some of these videos, so if this is something that you wish to see, make sure to visit our Instagram page through the link in the video description for more. What you see here is an instrumentation for a previous paper that we covered in this series, which was made by Weights and Biases. I think organizing these experiments really showcases the usability of their system. Weights and Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you have an open source, academic or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wnb.com slash papers or click the link in the video description to start tracking your experiments in 5 minutes. Our thanks to Weights and Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. Have you ever had a moment where you took the perfect photo, but upon closer inspection, there was this one annoying thing that ruined the whole picture? Well, why not just take a learning algorithm to erase those cracks in the facade of a building, or a photobombing ship? Or, to even reimagine ourselves with different hair colors, we can try one of the many research works that are capable of something that we call image inpainting. What you see here is the legendary PatchMatch algorithm at work, which, believe it or not, is a handcrafted technique from more than ten years ago. Later, scientists at NVIDIA published a more modern inpainter that uses a learning-based algorithm to do this more reliably and for a greater variety of images. These all work really well, but the common denominator for these techniques is that they all work on inpainting still images. Would this be a possibility for video? Like removing a moving object or a person from a video. Is this possible or is it science fiction? Let's see if these learning-based techniques can really do more. And now, hold on to your papers because this new work can really perform proper inpainting for video. Let's give it a try by highlighting this human. And pro tip: also highlight the shadowy region for inpainting to make sure that not only the human, but its silhouette also disappears from the footage. And look, wow! Let's look at some other examples. Now that's really something, because video is much more difficult due to the requirement of temporal coherence, which means that it's not nearly enough if the images are inpainted really well individually, they also have to look good if we weave them together into a video. You will hear and see more about this in a moment. Not only that, but if we highlight a person, this person not only needs to be inpainted, but we also have to track the boundaries of this person throughout the footage and then inpaint a moving region. We get some help with that, which I will also talk about in a moment. Now, as you see here, these all work extremely well, and believe it or not, you have seen nothing yet because so far, another common denominator in these examples was that we highlighted regions inside the video. But that's not all. If you have been holding onto your paper so far, now squeeze that paper, because we can also go outside and expand our video, spatially, with even more content. This one is very short, so I will keep looping it. Are you ready? Let's go. Wow! My goodness! The information from inside of the video frames is reused to infer what should be around the video frame, and all this in a temporally coherent manner. Now, of course, this is not the first technique to perform this, so let's see how it compares to the competition by erasing this bear from the video footage. The remnants of the bear are visible with a wide selection of previously published techniques from the last few years. This is true even for these four methods from last year. And let's see how this new method did on the same case. Yup! Very good. Not perfect, we still see some flickering. This is the temporal coherence example, or the lack thereof, that I have promised earlier. But now, let's look at this example with the BMX rider. We see similar performance with the previous techniques. And now, let's have a look at the new one. Now, that's what I'm talking about.
Not a trace left from this person, the only clue that we get in reconstructing what went down here is the camera movement. It truly feels like we are living in a science fiction world. What a time to be alive! Now these were the qualitative results, and now let's have a look at the quantitative results. In other words, we saw the videos, now let's see what the numbers say. We could talk all day about the peak signal-to-noise ratio, or structural similarity, or other ways to measure how good these techniques are, but you will see in a moment that it is completely unnecessary. Why is that? Well, you see here that the second-best results are underscored and highlighted with blue. As you see, there is plenty of competition as the blues are all over the place. But there is no competition at all for the first place, because this new method smokes the competition in every category. This was measured on a dataset by the name Densely Annotated VIdeo Segmentation, DAVIS in short. This contains 150 video sequences, and it is annotated, which means that many of the objects are highlighted throughout this video, so for the cases in this dataset, we don't have to deal with the tracking ourselves. I am truly out of ideas as to what I should wish for two more papers down the line. Maybe not only removing the tennis player, but putting myself in there as a proxy. We can already grab a controller and play as if we were real characters in real broadcast footage, so who really knows? Anything is possible. Let me know in the comments what you have in mind for potential applications, and what you would be excited to see two more papers down the line. This episode has been supported by Lambda GPU Cloud. If you are looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is 2 Minute Papers with Dr. Károly Zsolnai-Fehér. Let's talk about video super resolution. The problem statement is simple: in goes a coarse video, the technique analyzes it, guesses what's missing, and out comes a detailed video. You know, the CSI thing: "enhance!" However, of course, reliably solving this problem is anything but simple. This previous method is called TecoGAN, and it is able to give us results that are often very close to reality. It is truly amazing how much this technique understands the world around us from just this training set of low and high resolution videos. However, as amazing as super resolution is, it is not the most reliable way of delivering high quality images in many real-time applications, for instance, video games. Note that in these applications, we typically have more data at our disposal, but in return, the requirements are also higher. We need high quality images, at least 60 times per second, and temporal coherence is a necessity, or in other words, no jarring jumps and flickering is permitted. And hence, one of the techniques often used in these cases is called super sampling. At the risk of simplifying the term, super sampling means that we split every pixel into multiple pixels to compute a more detailed image and then display that to the user. And does this work? Yes, it does, it works wonderfully, but it requires a lot more memory and computation, therefore, it is generally quite expensive. So, our question today is, is it possible to use these amazing learning-based algorithms to do it a little smarter? Let's have a look at some results from a recent paper that uses a neural network to perform super sampling at a more reasonable computational cost. Now, in goes the low resolution input and my goodness, like magic, out comes this wonderful, much more detailed result. And here is the reference, which is the true higher resolution image. Of course, the closer the neural super sampling is to this, the better. And as you see, this is indeed really close and much better than the pixelated inputs. Let's do one more. Wow! This is so close, I feel we are just a couple papers away from a result being indistinguishable from the real reference image. Now, note that this new method has access to more information than the previously showcased super resolution method. It looks at not just one frame, but a few previous frames as well. It can use an estimation of the motion of each pixel over time and also gets depth information. This can be typically produced inexpensively with any major game engine. So how much of this data is required to train this neural network? Hold on to your papers, because 80 videos were used and the training took approximately one and a half days on one Titan V, which is an expensive but commercially available graphics card. And no matter, because this step only has to be done once, and depending on how high the resolution of the output should be, with the faster version of the technique, the upsampling step takes from 8 to 18 milliseconds, so this runs easily in real time. Now of course this is not the only modern super sampling method, this topic is subject to a great deal of research, so let's see how it compares to others. Here you see the results with TAAU, the temporal upsampling technique used in Unreal Engine, an industry-standard game engine.
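Before the comparison, here is a toy sketch of what the brute-force version of super sampling described above actually does: evaluate several sub-pixel samples per pixel and average them down. The "scene" here is just a test function with a hard edge, so this only illustrates the classical baseline that the learned method aims to approximate more cheaply; it has nothing to do with the paper's network.

```python
import numpy as np

# Brute-force supersampling: several sub-pixel samples per pixel, then average.
# 'shade' is a toy scene with a hard diagonal edge that aliases badly at 1 sample/pixel.

def shade(x, y):
    return 1.0 if y > x else 0.0

def render(width, height, samples_per_axis=1):
    image = np.zeros((height, width))
    for j in range(height):
        for i in range(width):
            acc = 0.0
            for sj in range(samples_per_axis):
                for si in range(samples_per_axis):
                    # Stratified sub-pixel positions inside pixel (i, j).
                    u = (i + (si + 0.5) / samples_per_axis) / width
                    v = (j + (sj + 0.5) / samples_per_axis) / height
                    acc += shade(u, v)
            image[j, i] = acc / samples_per_axis ** 2
    return image

aliased = render(32, 32, samples_per_axis=1)  # one sample per pixel
smooth  = render(32, 32, samples_per_axis=4)  # 16x the work, much smoother edge
print(aliased[16, 14:18], smooth[16, 14:18])  # pixel values across the edge
```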
And look, this neural super sampler is significantly better at anti-aliasing, or in other words, smoothing these jagged edges, and not only that, but it also resolves many more of the intricate details of the image. Temporal coherence has also improved a great deal, as you see the video output is much smoother for the new method. This paper also contains a bunch more comparisons against recent methods, so make sure to have a look. Like many of us, I would love to see a comparison against NVIDIA's DLSS solution, but I haven't been able to find a published paper on the later versions of this method. I remain very excited about seeing that too. And for now, the future of video game visuals and other real-time graphics applications is looking as exciting as it's ever been. What a time to be alive. This episode has been supported by Weights and Biases. In this post, they show you how to use their system to explore what exact code changes were made between two machine learning experiments. Weights and Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. The best part is that if you have an open source, academic or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wnb.com slash papers or click the link in the video description to start tracking your experiments in five minutes. Our thanks to Weights and Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. This glorious paper from about 7 years ago was about teaching digital creatures to walk. The numbers here showcase the process of learning over time, and it is clear that the later generations did much better than the earlier ones. These control algorithms are not only able to teach these creatures to walk, but they are quite robust against perturbations as well, or more simply put, we can engage in one of the favorite pastimes of a computer graphics researcher, which is, of course, throwing boxes at a character and seeing how well it can take it. This one has done really well. Well, kind of. Now we just noted that this is a computer graphics paper, and an amazing one at that, but it does not yet use these incredible new machine learning techniques that just keep getting better year by year. So, these agents could learn to inhabit a given body, but I am wondering what would happen if we would suddenly change their bodies on the fly. Could previous methods handle it? Unfortunately, not really, and I think it is fair to say that an intelligent agent should have the ability to adapt when something changes. Therefore, our next question is, how far have we come in these seven years? Can these new machine learning methods help us create a more general agent that could control not just one body, but a variety of different bodies? Let's have a look at today's paper and with that, let the fun begin. This initial agent was blessed with reasonable body proportions, but of course, we can't just leave it like that. Yes, much better. And look, all of these combinations can still work properly, and all of them use the same one reinforcement learning algorithm. And do not think for a second that this is where the fun ends. No, no, no. Now, hold on to your papers and let's engage in these horrific asymmetric changes. There is no way that the same algorithm could be given this body and still be able to walk. Goodness, look at that. It is indeed still able to walk. If you have been holding on to your papers, good. Now, squeeze that paper, because after adjusting the height, it could still not only walk, but even dance. But it goes further. Do you remember the crazy asymmetric experiment for the legs? Let's do something like that with thickness. And as a result, they can still not only walk, but even perform gymnastic moves. Woohoo! Now it's great that one algorithm can adapt to all of these body shapes, but it would be reasonable to ask, how long do we have to wait for it to adapt? Have a look here. Are you seeing what I am seeing? We can make changes to the body on the fly and the AI adapts to it immediately. No retraining or parameter tuning is required. And that is the point where I fell off the chair when I read this paper. What a time to be alive. And now, Scholars, bring in the boxes. Haha! It can also inhabit dogs and fish, and we can also have some fun with them as we grab a controller and control them in real time. The technique is also very efficient as it requires very little memory and computation, not to mention that we only have to train one controller for many body types instead of always retraining after each change to the body. However, of course, this algorithm isn't perfect. One of its key limitations is that it will not do well if the body shapes we are producing stray too far away from the ones contained in the training set. But let's leave some space for the next follow-up paper too.
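One plausible way to picture "one controller for many bodies" is a policy that receives a description of the current body alongside the usual state, so changing the body only changes the input rather than requiring retraining. The sketch below illustrates just that interface idea with a randomly initialized network; it is my own illustration, not the paper's architecture or training procedure, and all dimensions are made up.

```python
import numpy as np

# Sketch of a morphology-conditioned policy: the controller sees body parameters
# (limb lengths, thicknesses, ...) next to the usual state, so a change of body is
# just a change of input. Weights are random; a real policy would be trained with
# reinforcement learning.

rng = np.random.default_rng(0)

STATE_DIM = 24    # joint angles, velocities, etc. (illustrative)
MORPH_DIM = 8     # limb lengths/thicknesses describing the current body
ACTION_DIM = 12   # joint torques
HIDDEN = 64

W1 = rng.normal(0, 0.1, (HIDDEN, STATE_DIM + MORPH_DIM))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (ACTION_DIM, HIDDEN))
b2 = np.zeros(ACTION_DIM)

def policy(state, morphology):
    x = np.concatenate([state, morphology])  # condition on the body description
    h = np.tanh(W1 @ x + b1)
    return np.tanh(W2 @ h + b2)              # torques in [-1, 1]

state = rng.normal(size=STATE_DIM)
short_legs = rng.uniform(0.5, 0.8, MORPH_DIM)
long_legs  = rng.uniform(1.2, 1.5, MORPH_DIM)

# Same weights, different bodies: the actions change instantly, no retraining step.
print(policy(state, short_legs)[:4])
print(policy(state, long_legs)[:4])
```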
And just one more thing that didn't quite fit into this story. Every now and then, I get these heartwarming messages from you Fellow Scholars noting that you've been watching the series for a while and decided to turn your lives around and go back to study more and improve. Good work, Mr. Moonat. That is absolutely amazing, and reading these messages is a true delight to me. Please, keep them coming. What you see here is an instrumentation for a previous paper that we covered in this series, which was made by Weights and Biases. I think organizing these experiments really showcases the usability of their system. Weights and Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you have an open source, academic or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wnb.com slash papers or click the link in the video description to start tracking your experiments in 5 minutes. Our thanks to Weights and Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. I have been yearning for a light transport paper and goodness, was I ecstatic when reading this one! And by the end of the video, I hope you will be too. And now, we only have to go through just about 30 years of light transport research. As some of you know, if we immerse ourselves into the art of light transport simulations, we can use our computers to simulate millions and millions of light rays and calculate how they get absorbed or scattered off of our objects in a virtual scene. Initially, we start out with a really noisy image, and as we add more rays, the image gets clearer and clearer over time. The time it takes for these images to clean up depends on the complexity of the geometry and our material models, but it typically takes a while. This micro-planet scene mostly contains vegetation, which are matte objects. These we also refer to as diffuse materials, and they typically converge very quickly. As you see, we get meaningful progress on the entirety of the image within the first two minutes of the rendering process. And remember, in light transport simulations, noise is public enemy number one. This used a technique called path tracing. Let's refer to it as the OK technique. And now, let's try to use path tracing, this OK technique, to render this torus in a glass enclosure. This is the first two minutes of the rendering process, and it does not look anything like the previous scene. The previous one was looking pretty smooth after just two minutes, whereas here, you see, this is indeed looking very grim. We have lots of these fireflies, which will take us up to a few days of computation time to clean up, even if we have a modern, powerful machine. So, why did this happen? The reason for this is that there are tricky cases for specular light transport that take many, many millions, if not billions, of light rays to compute properly. Specular here means mirror-like materials, those can get tricky, and this torus that has been enclosed in there is also not doing too well. So, this was path tracing, the OK technique. Now, let's try a better technique called Metropolis light transport. This method is the result of a decade of research and is much better at dealing with difficult scenes. This particular variant is a proud Hungarian algorithm by a scientist called Csaba Kelemen and his colleagues at the Technical University of Budapest. For instance, here is a snippet of our earlier paper on a similarly challenging scene. This is how the OK path tracer did, and in the same amount of time, this is what Metropolis light transport, the better technique, could do. This was a lot more efficient, so let's see how it does with the torus. Now that's unexpected. This is indeed a notoriously difficult scene to render, even for Metropolis light transport, the better technique. As you see, the reflected light patterns that we also refer to as caustics on the floor are much cleaner, but the torus is still not giving up. Let's jump another 15 years of light transport research and use a technique that goes by the name manifold exploration. Let's call this the best technique. Wow! Look at how beautifully it improves the image. It is not only much cleaner, but also converges much more gracefully. It doesn't go from a noisy image to a slightly less noisy image, but almost immediately gives us a solid baseline, and new, cleaned-up paths also appear over time. This technique is from 2012, and it truly is mind-boggling how good it is.
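The "more rays, less noise" behavior described at the start of this episode is plain Monte Carlo convergence: the error of the estimate shrinks roughly with one over the square root of the sample count. Here is a toy estimator that shows this on a simple one-dimensional integral; it is a didactic stand-in for the idea, not a renderer, and certainly not any of the techniques discussed here.

```python
import numpy as np

# Monte Carlo integration of f(x) = x^2 on [0, 1]; the exact answer is 1/3.
# The error shrinks roughly as 1/sqrt(N), which is why path-traced images start
# out noisy and clean up slowly as more rays (samples) are added.

rng = np.random.default_rng(42)

def estimate(num_samples):
    x = rng.uniform(0.0, 1.0, num_samples)
    return np.mean(x ** 2)

exact = 1.0 / 3.0
for n in [16, 256, 4096, 65536]:
    err = abs(estimate(n) - exact)
    print(f"{n:6d} samples  ->  error {err:.5f}")
```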
Manifold exploration is so difficult to understand and implement that, to the best of my knowledge, the number of people who can and have implemented it properly is exactly one. And that one person is Wenzel Jakob, one of the best minds in the game, and believe it or not, he wrote this method as a PhD student in light transport research. And today, as a professor at EPFL in Switzerland, he and his colleagues set out to create a technique that is as good as manifold exploration, the best technique, but is much simpler. Well, good luck with that, I thought, when skimming through the paper. Let's see how it did. For instance, we have some caustics at the bottom of a pool of water. As expected, lots of firefly noise with the OK path tracer, and now hold onto your papers, here is the new technique. Just look at that. It can do so much better in the same amount of time. Let's also have a look at this scene with lots and lots of specular microgeometry, or in other words, glints. This is also a nightmare to render. With the OK path tracer, we have lots of flickering from one frame to the next, and here you see the result with the new technique. Perfect. So, it is indeed possible to take the best technique, manifold exploration, and reimagine it in a way that ordinary humans can also implement. Huge congratulations to the authors of this work, which I think is a crown achievement in light transport research. And that's why I was ecstatic when I first read through this incredible paper. Make sure to have a look at the paper, and you will see how they borrowed a nice little trick from a recent work in nuclear physics to tackle this problem. The presentation of the paper and the talk video with the details is also brilliant, and I urge you to have a look at it in the video description. This whole thing got me so excited I was barely able to fall asleep for several days now. What a time to be alive. Now, while we look through some more results from the paper, if you feel a little stranded at home and are thinking that this light transport thing is pretty cool, I held a master-level course on this topic at the Technical University of Vienna. Since I was always teaching it to a handful of motivated students, I thought that the teachings shouldn't only be available for a privileged few who can afford a college education, but the teachings should be available for everyone. Free education for everyone. That's what I want. So, the course is available free of charge for everyone, no strings attached, so make sure to click in the video description to get started. We write a full light simulation program from scratch and learn about physics, the world around us, and more. Also, note that my former PhD advisor, Michael Wimmer, is looking to hire a postdoctoral researcher in this area, which is an amazing opportunity to push this field forward. The link is available in the video description. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today.
Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is two minute papers with Dr. Károly Zsolnai-Fehér. Style transfer is an interesting problem in machine learning research where we have two input images, one for content and one for style, and the output is our content image reimagined with this new style. The cool part is that the content can be a photo straight from our camera and the style can be a painting, which leads to super fun and really good looking results. We have seen plenty of papers doing variations of style transfer, but I always wonder, can we push this concept further? And the answer is yes. For instance, few people know that style transfer can also be done for video. First, we record a video with our camera, then take a still image from the video and apply our artistic style to it. Then our style will be applied to the entirety of the video. The main advantage of this new method compared to previous ones is that those either take too long or require us to run an expensive pre-training step. With this new one, we can just start drawing and see the output results right away. But it gets even better. Due to the interactive nature of this new technique, we can even do this live. All we need to do is change our input drawing and it transfers the new style to the video as fast as we can draw. This way, we can refine our input style for as long as we wish, or until we find the perfect way to stylize the video. And there is even more. If this works interactively, then it has to be able to offer an amazing workflow where we can capture a video of ourselves live and mark it up as we go. Let's see. Oh wow, just look at that. It is great to see that this new method also retains temporal consistency over a long time frame, which means that even if the marked-up keyframe is from a long time ago, it can still be applied to the video and the outputs will show minimal flickering. And note that we can not only play with the colors, but with the geometry too. Look, we can warp the style image and it will be reflected in the output as well. I bet there is going to be a follow-up paper on more elaborate shape modifications as well. And this new work improves upon previous methods in even more areas. For instance, this is a method from just one year ago, and here you see how it struggled with contour-based styles. Here's a keyframe of the input video and here's the style that we wish to apply to it. Later, this method from last year seems to lose not only the contours, but a lot of visual detail is also gone. So, how did the new method do in this case? Look, it not only retains the contours better, but a lot more of the sharp details remain in the outputs. Amazing. Now note that this technique also comes with some limitations. For instance, there is still some temporal flickering in the outputs and in some cases separating the foreground and the background is challenging. But really, such incredible progress in just one year, and I can only imagine what this method will be capable of two more papers down the line. What a time to be alive. Make sure to have a look at the paper in the video description and you will see many additional details. For instance, how you can just partially fill in some of the keyframes with your style and still get an excellent result. This episode has been supported by Weights and Biases. In this post, they show you how to test and explore putting bounding boxes around objects in your photos. Weights and Biases provides tools to track your experiments in your deep learning projects.
Their system is designed to save you a ton of time and money and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you have an open source, academic or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wnb.com slash papers or click the link in the video description to start tracking your experiments in five minutes. Our thanks to weights and biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Due to popular requests, today we will talk about Neuralink, Elon Musk's neural engineering company that he created to develop brain-machine interfaces. And your first question likely is, why talk about Neuralink now? There was a recent event and another one last year as well, why did I not cover that? Well, the launch event from last year indeed promised a great deal. In this series, we often look at research works that are just one year apart and marvel at the difference scientists have been able to make in this tiny, tiny time frame. So first, let's talk about their paper from 2019, which is incredible. And then, see how far they have come in a year, which, as you will see, is even more incredible. The promise is to be able to read and write information to and from the brain. To accomplish this, as of 2019, they used this robot to insert the electrodes into your brain tissue. You can see the insertion process here. From the close-up image, you might think that this is a huge needle, but in fact, this needle is extremely tiny, you can see a penny for scale here. This is the story of how this rat got equipped with a USB port. As this process is almost like inserting microphones into its brain, now we are able to read the neural signals of this rat. Normally, these are analog signals which are read and digitized by Neuralink's implant, and now this brain data is represented as a digital signal. Well, at first, this looks a bit like gibberish. Do we really have to undergo brain surgery to get a bunch of these squiggly curves? What do these really do for us? We can have the Neuralink chip analyze these signals and look for action potentials in them. These are also referred to as spikes because of their shape. That sounds a bit better, but still, what does this do for us? Let's see. Here we have a person with a mouse in their hand. This is an outward movement with a mouse, and then reaching back. Simple enough. Now, what you see here below is the activity of an example neuron. When nothing is happening, there is some firing, but not much activity, and now, look, when reaching out, this neuron fires a great deal, and suddenly, when reaching back again, nearly no activity. This means that this neuron is tuned for an outward motion, and this other one is tuned for the returning motion. And all this is now information that we can read in real time, and the more neurons we can read, the more complex motion we can read. Absolutely incredible. However, this is still a little difficult to read, so let's order them by what kind of motion makes them excited. And there we go. Suddenly, this is a much more organized way to present all this neural activity, and now we can detect what kind of motion the brain is thinking about. This was the reading part, and that's just the start. What is even cooler is that we can just invert this process, read this spiking activity, and just by looking at these, we can reconstruct the motion the human wishes to perform. With this, brain-machine interfaces can be created for people with all kinds of disabilities, where the brain can still think about the movements, but the connection to the rest of the body is severed. Now, these people only have to think about moving, and then, the Neuralink device will read it, and perform the cursor movement for them. It really feels like we live in a science fiction world.
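To give a rough feel for what "looking for action potentials in the signal" can mean, here is a toy threshold-crossing spike detector running on a synthetic voltage trace. Real neural signal processing, including the on-chip kind described in this episode, is considerably more sophisticated; everything below, from the sampling rate to the threshold rule, is a simplified illustration of the general idea.

```python
import numpy as np

# Toy spike detection: synthesize a noisy trace with a few injected spikes, then
# flag samples that cross a threshold estimated from the noise level.

rng = np.random.default_rng(1)
fs = 20_000                                 # sampling rate in Hz (illustrative)
t = np.arange(0, 1.0, 1.0 / fs)
trace = rng.normal(0.0, 1.0, t.size)        # background noise

true_spike_times = [0.12, 0.35, 0.36, 0.71, 0.93]
for st in true_spike_times:
    i = int(st * fs)
    trace[i:i + 20] += 8.0 * np.exp(-np.arange(20) / 5.0)  # a crude spike waveform

# A common heuristic: threshold at a few robust standard deviations of the noise.
noise_sigma = np.median(np.abs(trace)) / 0.6745
threshold = 4.5 * noise_sigma

above = trace > threshold
onsets = np.flatnonzero(above & ~np.roll(above, 1))         # rising threshold crossings
print("detected spike times (s):", np.round(onsets / fs, 3))
```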
And all this signal processing is now possible automatically, and in real time, and all we need for this is this tiny, tiny chip that takes just a few square millimeters. And don't forget, that is just version 1 from 2019. Now, onwards to the 2020 event, where it gets even better. The Neuralink device has been placed into Gertrude the pig's brain, and here you see it in action. We see the readings here, and luckily, we already know what they mean: this lays bare the neural action potentials before our eyes, or, in other words, which neuron is spiking, and exactly when. Below, with blue, you see these activities summed up for our convenience, and this way, you will not only see, but hear it too, as these neurons are tuned for snout boops. In other words, you will see, and hear, that the more the snout is stimulated, the more neural activity it will show. Let's listen. And all this is possible today, and in real time. That was one of the highlights of the 2020 progress update event, but it went further. Much further. Look, this is a pig on a treadmill, and here you see the brain signal readings. This signal marked with a circle shows where a joint or limb is about to move, while the other, dimmer-colored signal is the chip's prediction as to what is about to happen. It takes into consideration periodicity, and predicts higher-frequency movement, like these sharp turns, really well. The two are almost identical, and that means exactly what you think it means. Today, we can not only read and write, but even predict what the pig's brain is about to do. And that was the part where I fell off the chair when I watched this event live. You can also see the real and predicted world space positions for these body parts as well. Very close. Now note that there is a vast body of research in brain-machine interfaces, and many of these things were possible in lab conditions, and Neuralink's quest here is to make them accessible for a wider audience within the next decade. If this project further improves at this rate, it could help many paralyzed people around the world live a longer and more meaningful life, and the neural enhancement aspect is also not out of the question. Just thinking about summoning your Tesla might also summon it, which sounds like science fiction, and based on these results, you see that it may even be one of the simplest tasks for a Neuralink chip in the future. And who knows, one day, maybe, with this device, these videos could be beamed into your brain much quicker, and this series would have to be renamed from two minute papers to two second papers, or maybe even two microsecond papers. They might actually fit into two minutes, like the title says, now that would truly be a miracle. Huge thanks to scientists at Neuralink for our discussions about the concepts described in this video, and for ensuring that you get accurate information. This is one of the reasons why our coverage of the 2020 event is way too late compared to many mainstream media outlets, which leads to a great deal less views for us, but it doesn't matter. We are not maximizing views here, we are maximizing learning. Note that they are also hiring, so if you wish to be a part of their vision and work with them, make sure to apply. The link is available in the video description. This episode has been supported by Weights and Biases. In this post, they show you how to train and compare powerful, modern language models, such as BERT and DistilBERT from the Hugging Face library.
Weights and Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you have an open source, academic, or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wnb.com slash papers, or click the link in the video description to start tracking your experiments in five minutes. Our thanks to Weights and Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. Approximately five months ago, we talked about a technique called Neural Radiance Fields, or NeRF in short, that worked on a 5D neural radiance field representation. So, what does this mean exactly? What this means is that we have three dimensions for location and two for view direction; or, in short, the input is where we are in space and where we are looking, we give it to a neural network to learn, and it can synthesize new, previously unseen views of not just the materials in the scene, but the entire scene itself. In short, it can learn and reproduce entire real-world scenes from only a few views by using neural networks. And the results were just out of this world. Look, it could deal with many kinds of matte and glossy materials, and even refractions worked quite well. It also understood depth so accurately that we could use it for augmented reality applications where we put a new virtual object in the scene and it correctly determined whether it is in front of or behind the real objects in the scene. However, not everything was perfect. In many cases, it had trouble with scenes with variable lighting conditions and lots of occluders. You might ask, is that really a problem? Well, imagine a use case of a tourist attraction that a lot of people take photos of, and we then have a collection of photos taken during different times of the day, and of course with a lot of people around. But hey, remember that this means an application where we have exactly these conditions. A wide variety of illumination changes and occluders. This is exactly what NeRF was not too good at. Let's see how it did on such a case. Yes, we see both abrupt changes in the illumination and the remnants of the folks occluding the Brandenburg Gate as well. And this is where this new technique from scientists at Google Research, by the name NeRF-W, shines. It takes such a photo collection and tries to reconstruct the whole scene from it, which we can again render from new viewpoints. So how well did the new method do in this case? Let's see. Wow, just look at how consistent those results are. So much improvement in just six months of time. This is unbelievable. This is how it did in a similar case with the Trevi Fountain. Absolutely beautiful. And what is even more beautiful is that since it has variation in the viewpoint information, we can change these viewpoints around as the algorithm learned to reconstruct the scene itself. This is something that the original NeRF technique could also do, however, what it couldn't do is the same with illumination. Now we can also change the lighting conditions together with the viewpoint. This truly showcases a deep understanding of illumination and geometry. That is not trivial at all. For instance, while loading this scene into this neural re-rendering technique from last year, it couldn't tell whether we see just color variations on the same geometry or if the geometry itself is changing. And look, this new technique does much better on cases like this. So clean. Now that we have seen the images, let's see what the numbers say for these scenes. NRW is the neural re-rendering technique we just saw, and the other one is the NeRF paper from this year. The abbreviations show different ways of measuring the quality of the output images, and the up and down arrows show whether they are subject to maximization or minimization.
They are both relatively close, but when we look at the new method, we see one of the rare cases where it wins decisively, regardless of what we are measuring. Incredible. This paper is truly a great leap in just a few months, but of course, not everything is perfect here. The technique may fail to reconstruct regions that are only visible in just a few photos in the input dataset. The training still takes from hours to days. I take this as an interesting detail more than a limitation, since the training only has to be done once and then using the technique can take place very quickly. But with that, there you go. A neural algorithm that understands lighting and geometry, can disentangle the two, and reconstruct real-world scenes from just a few photos. It truly feels like we are living in a science fiction world. What a time to be alive. What you see here is an instrumentation for a previous paper that we covered in this series, which was made by Weights and Biases. I think organizing these experiments really showcases the usability of their system. Weights and Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you have an open source, academic, or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or click the link in the video description to start tracking your experiments in five minutes. Our thanks to Weights and Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
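For the programmers among our Fellow Scholars, here is a minimal, hypothetical sketch of the position-plus-view-direction idea behind a radiance field, written in PyTorch. It is not the architecture from the paper (the real NeRF and NeRF-W models add positional encoding, appearance embeddings, and separate static and transient branches); it only illustrates how a 5D input can be mapped to a color and a density.

```python
import torch
import torch.nn as nn

class TinyRadianceField(nn.Module):
    """Toy stand-in: maps a 5D input (3D position + 2D view direction)
    to an RGB color and a density, in the spirit of a radiance field."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(5, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),          # 3 color channels + 1 density
        )

    def forward(self, position, view_direction):
        x = torch.cat([position, view_direction], dim=-1)
        out = self.net(x)
        color = torch.sigmoid(out[..., :3])   # RGB in [0, 1]
        density = torch.relu(out[..., 3:])    # non-negative density
        return color, density

# Query the field at one point, seen from one direction.
model = TinyRadianceField()
color, density = model(torch.rand(1, 3), torch.rand(1, 2))
print(color.shape, density.shape)
```

Rendering an actual image would then integrate these colors and densities along camera rays, which is the part this sketch leaves out.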
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Approximately a year ago, we talked about an absolutely amazing paper by the name Vid2Game, in which we could grab a controller and become video game characters. It was among the first introductory papers to tackle this problem, and in this series, we always say that two more papers down the line and it will be improved significantly. So let's see what's in store, and this time just one more paper down the line. This new work offers an impressive value proposition, which is to transform a real tennis match into a realistic-looking video game that is controllable. This includes synthesizing not only movements, but also what effect the movements have on the ball as well. So how do we control this? And now hold on to your papers, because we can specify where the next shot would land with just one click. For instance, we can place this red dot here. And now just think about the fact that this doesn't just change where the ball should go, but the trajectory of the ball has to be computed using a physical model, and so does the kind of shot the tennis player has to perform for the resulting ball trajectory to look believable. This physical model even contains the ball's spin velocity and the Magnus effect created by the spin. The entire chain of animations has to be correct, and that's exactly what happens here. With blue, we can also specify the position the player has to wait at to hit the ball next. And these virtual characters don't just look like their real counterparts, they also play like them. You see, the authors analyze the playstyle of these athletes and build a heat map that contains information about their usual shot placements for the forehand and backhand shots separately, the average velocities of these shots, and even their favored recovery positions. If you have a closer look at the paper, you will see that they not only include this kind of statistical knowledge in their system, but they really went the extra mile and included common tennis strategies as well. So how does it work? Let's look under the hood. First, it looks at broadcast footage from which annotated clips are extracted that contain the movement of these players. If you look carefully, you see this red line on the spine of the player and some more. These are annotations that tell the AI about the pose of the players. It builds a database from these clips and chooses the appropriate piece of footage for the action that is about to happen, which sounds great in theory, but in a moment you will see that this is not nearly enough to produce a believable animation. For instance, we also need a rendering step which has to adjust this footage to the appropriate perspective, as you see here. But we have to do way more to make this work. Look, without additional considerations, we get something like this. Not good. So what happened here? Well, given the fact that the source datasets contain matches that are several hours long, they therefore contain many different lighting conditions. With this, visual glitches are practically guaranteed to happen. To address this, the paper describes a normalization step that can even these changes out. How well does it do its job? Let's have a look. This is the unnormalized case. This short sequence appears to contain at least four of these glitches, all of which are quite apparent. And now, let's see the new system after the normalization step. Yep, that's what I'm talking about.
But these are not the only considerations the authors had to take into account to produce these amazing results. You see, oftentimes, quite a bit of information is missing from these frames. Our seasoned Fellow Scholars know not to despair, because we can reach out to image inpainting methods to address this. These can fill in missing details in images with sensible information. You can see NVIDIA's work from two years ago that could do this reliably for a great variety of images. This new work uses a learning-based technique called image-to-image translation to fill in these details. Of course, the advantages of this new system are visible right away, and so are its limitations. For instance, temporal coherence could be improved, meaning that the tennis rackets can appear or disappear from one frame to another. The sprites are not as detailed as they could be, but none of this really matters. What matters is that what was previously impossible is now possible, and two more papers down the line, it is very likely that all of these issues will be ironed out. What a time to be alive! This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
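As a rough illustration of the kind of physical ball model mentioned above, here is a toy trajectory integrator with gravity, air drag, and a Magnus term. It is not the authors' model; every constant in it is an assumed, illustrative value.

```python
import numpy as np

# Toy ball-flight integrator: gravity, air drag, and the Magnus force from spin.
# The constants below are rough, hypothetical values, not the ones from the paper.
mass, radius = 0.057, 0.033           # tennis ball, kg and meters
rho_air = 1.2                          # air density, kg/m^3
area = np.pi * radius**2
c_drag, c_lift = 0.55, 0.25            # assumed drag / lift coefficients

def step(pos, vel, spin, dt=1e-3):
    speed = np.linalg.norm(vel)
    gravity = np.array([0.0, 0.0, -9.81])
    drag = -0.5 * rho_air * c_drag * area * speed * vel / mass
    # Magnus acceleration points along spin x velocity.
    magnus = 0.5 * rho_air * c_lift * area * radius * np.cross(spin, vel) / mass
    vel = vel + (gravity + drag + magnus) * dt
    return pos + vel * dt, vel

pos, vel = np.zeros(3), np.array([25.0, 0.0, 3.0])   # starting position (m) and velocity (m/s)
spin = np.array([0.0, 300.0, 0.0])                    # rad/s of topspin
while pos[2] >= 0.0:                                  # integrate until the ball reaches the ground
    pos, vel = step(pos, vel, spin)
print("landing point:", pos[:2])
```

Changing the spin vector bends the trajectory, which is exactly why the chosen landing spot constrains the kind of shot the virtual player has to perform.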
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Approximately seven months ago, we discussed an AI-based technique called StyleGAN2, which could synthesize images of human faces for us. As a result, none of the faces that you see here are real; all of them were generated by this technique. The quality of the images and the amount of detail therein is truly stunning. We could also exert artistic control over these outputs by combining aspects of multiple faces together, and as the quality of these images improves over time, we think more and more about new questions to ask about them. And one of those questions is, for instance, how original are the outputs of these neural networks? Can these really make something truly unique? And believe it or not, this paper gives us a fairly good answer to that. One of the key ideas in this work is that in order to change the outputs, we have to change the model itself. Now that sounds a little nebulous, so let's have a look at an example. First, we choose a rule that we wish to change. In our case, this will be the towers. We can ask the algorithm to show us matches to this concept. And indeed, it highlights the towers on the images we haven't marked up yet, so it indeed understands what we meant. Then we highlight the tree as a goal, place it accordingly onto the tower, and a few seconds later, there we go. The model has been reprogrammed such that instead of towers, it would make trees. Something original has emerged here, and look, not only on one image, but on multiple images at the same time. Now have a look at these human faces. By the way, none of them are real, and they were all synthesized by StyleGAN2, the method you saw at the start of the video. Some of them do not appear to be too happy about the research progress in machine learning, but I am sure that this paper can put a smile on their faces. Let's select the ones that aren't too happy, then copy a big smile and paste it onto their faces. And see if it works. It does. Wow! Let's flick between the before and after images and see how well the changes are adapted to each of the target faces. Truly excellent work. And now on to eyebrows. Hold onto your papers while we choose a few of them. And now I hope you agree that this mustache would make a glorious replacement for them. And... There we go. Perfect. And note that with this, we are violating Betteridge's law of headlines again in this series, because the answer to our central question is a resounding yes. These neural networks can indeed create truly original works, and what's more, even entire datasets that haven't existed before. Now at the start of the video, we noted that instead of editing images, this work edits the neural network model instead. If you look here, we have a set of input images created by a generator network. Then, as we highlight concepts, for instance, the watermark text here, we can look for the weights that contain this information and rewrite the network to accommodate this user request, in this case, to remove these patterns. Now that they are gone, by selecting humans, we can again rewrite the network weights to add more of them. And finally, the signature tree trick from earlier can take place. The key here is that if we change one image, then we have a new and original image, but if we change the generator model itself, we can make thousands of new images in one go. Or even a full dataset.
And perhaps the trickiest part of this work is minimizing the effect on other weights while we reprogram the ones we wish to change. Of course, there will always be some collateral damage, but the results, in most cases, still seem to remain intact. Make sure to have a look at the paper to see how it's done exactly. Also, good news, the authors also provided an online notebook where you can try this technique yourself. If you do, make sure to read the instructions carefully, and regardless of whether you get successes or failure cases, make sure to post them in the comments section here. In research, both are useful information. So, after the training step has taken place, neural networks can be rewired to make sure they create truly original works, and all this on not one image, but on a mass scale. What a time to be alive! What you see here is an instrumentation of this exact paper we have talked about, which was made by Weights and Biases. Weights and Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you have an open source, academic or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or click the link in the video description to start tracking your experiments in 5 minutes. Our thanks to Weights and Biases for their long standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
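For readers who want a feel for the "edit the model, not the image" idea, here is a hypothetical toy version of a minimal-norm weight edit. The actual paper performs a constrained optimization on a chosen layer of the generator; this sketch only shows how one key-to-value rule of a linear layer can be changed while most other directions are barely touched.

```python
import numpy as np

def rewrite_layer(W, key, new_value):
    """Minimal-norm rank-one update so that the layer maps `key` to `new_value`.
    This is a simplified, hypothetical stand-in for the paper's constrained
    update, which operates on a real generator layer and a user-selected region."""
    key = key / np.linalg.norm(key)
    correction = new_value - W @ key           # how far the current output is off
    return W + np.outer(correction, key)       # mostly touches the key direction

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))                   # stand-in for one flattened layer
key = rng.normal(size=16)                      # feature direction for, say, "tower"
new_value = rng.normal(size=8)                 # desired output features for "tree"

W_new = rewrite_layer(W, key, new_value)
key_unit = key / np.linalg.norm(key)
print(np.allclose(W_new @ key_unit, new_value))   # True: the new rule holds
print(np.linalg.norm(W_new - W))                  # small, targeted change overall
```

The appeal of editing weights rather than pixels is visible even in this toy: one small update changes the rule for every image the generator will ever produce.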
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. In this series, we often talk about smoke and fluid simulations, and sometimes the examples showcase a beautiful smoke plume but not much else. However, in real production environments, these simulations often involve complex scenes with many objects that interact with each other, and therein lies the problem. Computing these interactions is called coupling, and it is very difficult to get right, but it is necessary for many of the scenes you will see throughout this video. This new graphics paper builds on a technique called the lattice Boltzmann method and promises a better way to compute this two-way coupling. For instance, in this simulation, two-way coupling is required to compute how this fiery smoke trail propels the rocket upward. So, coupling means interaction between different kinds of objects, but what about the two-way part? What does that mean exactly? Well, first, let's have a look at one-way coupling. As the box moves here, it has an effect on the smoke plume around it. This example also showcases one-way coupling, where the falling plate stirs up the smoke around it. The parts with the higher Reynolds numbers showcase more turbulent flows. Typically, that's the real good stuff if you ask me. And now onto two-way coupling. In this case, similarly to previous ones, the boxes are allowed to move the smoke, but the added two-way coupling part means that now the smoke is also allowed to blow away the boxes. What's more, the vortices here on the right were even able to suspend the red box in the air for a few seconds. An excellent demonstration of a beautiful phenomenon. Now, let's look at the previous example with the dropping plate and see what happens. Yes, indeed, as the plate drops, it moves the smoke, and as the smoke moves, it also blows away the boxes. Woohoo! Due to improvements in the coupling computation, it also simulates these kinds of vortices much more realistically than previous works. Just look at all this magnificent progress in just two years. So, what else can we do with all this? What are the typical scenarios that require accurate two-way coupling? Well, for instance, it can perform an incredible tornado simulation that you see here, and there is an alternative view where we only see the objects moving about. So all this looks good, but really, how do we know how accurate this technique is? And now comes my favorite part, and this is when we let reality be our judge and compare the simulation results with real-world experiments. Hold onto your papers while you observe the real experiment here on the left. And now, the algorithmic reproduction of the same scene here. How close are they? Goodness. Very, very close. I will stop the footage at different times so we can both evaluate it better. Love it. The technique can also undergo the wind tunnel test. Here is the real footage. And here is the simulation. It is truly remarkable how closely it is able to match it; even though I am someone who has been doing fluids for a while now, if someone had cropped this part of the image and told me that it is real-world footage, I would have believed it in a second. Absolute insanity. So how long do we have to wait to compute a simulation like this? Well, great news.
It uses your graphics card, which is typically the case for the more rudimentary fluid simulation algorithms out there, but the more elaborate ones typically don't support it, or at least not without a fair amount of additional effort. The quickest example was this, as it was simulated in less than 6 seconds, which I find to be mind-blowing. The smoke simulation with box movements took just a few seconds; I am truly out of words. The rocket launch scene took the longest with 16 hours, while the falling plate example with the strong draft that threw the boxes around took about 4.5 hours of computation time. The results depend greatly on Delta T, which is the size of the time steps, or in other words, in how small increments we can advance time when creating these simulations to make sure we don't miss any important interactions. You see here that in the rocket example, we have to simulate roughly 100,000 steps for every second of video footage. No wonder it takes so long. We have an easier time with this scene, where these time steps can be 50 times larger without losing any detail, and hence it goes much faster. The grid resolution also matters a great deal, which specifies how many spatial points the simulation has to take place in. The higher the resolution of the grid, the larger the region we can cover, and the more details we can simulate. As most research works, this technique doesn't come without limitations, however. It is less accurate if we have simulations involving thin rods and shells, and typically uses 2 to 3 times more memory than a typical simulator program. If these are the only trade-offs to create all this marvelous footage, sign me up this very second. Overall, this paper is extraordinarily well written, and of course it has been accepted to the SIGGRAPH conference, one of the most prestigious scientific venues in computer graphics research. Huge congratulations to the authors, and if you wish to see something beautiful today, make sure to have a look at the paper itself in the video description. Truly stunning work. What a time to be alive! This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
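To put the time step discussion above into numbers, here is a tiny back-of-the-envelope sketch. The step counts come from the narration; the ten-second clip length is an assumed example.

```python
# Back-of-the-envelope step counts for the Delta T discussion above.
steps_per_second_rocket = 100_000                          # roughly 100,000 steps per second of footage
steps_per_second_easy = steps_per_second_rocket // 50      # "50 times larger" time steps

dt_rocket = 1.0 / steps_per_second_rocket                  # about 1e-5 simulated seconds per step
dt_easy = 1.0 / steps_per_second_easy

clip_length_s = 10                                          # assumed clip length for illustration
print(f"rocket scene: dt = {dt_rocket:.0e} s, {clip_length_s * steps_per_second_rocket:,} steps")
print(f"easier scene: dt = {dt_easy:.0e} s, {clip_length_s * steps_per_second_easy:,} steps")
```

A million time steps for ten seconds of the rocket scene makes the 16-hour computation time much less surprising.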
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. In 2017, so more than 300 episodes ago, we talked about an algorithm that took a 3D model of a complex object and would give us an easy-to-follow, step-by-step breakdown on how to draw it. Automated drawing tutorials, if you will. This was a handcrafted algorithm that used graph theory to break these 3D objects into smaller, easier-to-manage pieces, and since then, learning algorithms have improved so much that we started looking more and more in the opposite direction. And that opposite direction would be giving a crude drawing to the machine and getting a photorealistic image. Now, that sounds like science fiction, until we realize that scientists at NVIDIA already had an amazing algorithm for this around 1.5 years ago. In that work, the input was a labeling which we can draw ourselves, and the output is a hopefully photorealistic landscape image that adheres to these labels. I love how first only the silhouette of the rock is drawn, so we have this hollow thing on the right that is not very realistic, and then it is now filled in with the bucket tool and there you go. And next thing you know, you have an amazing-looking landscape image. It was capable of much, much more, but what it couldn't do is synthesize human faces this way. And believe it or not, this is what today's technique is able to do. Look, in goes our crude sketch as a guide image, and out comes a nearly photorealistic human face that matches it. Interestingly, before we draw the hair itself, it gives us something as a starting point, but if we choose to, we can also change the hair shape and the outputs will follow our drawing really well. But it goes much further than this, as it boasts a few additional appealing features. For instance, it not only refines the output as we change our drawing, but since one crude input can be mapped to many, many possible people, these output images can also be further adjusted with these sliders. According to the included user study, journeyman users mainly appreciated the variety they can achieve with this algorithm. If you look here, you can get a taste of that, while professionals were more excited about the controllability aspect of this method. That was showcased with the footage with the sliders. Another really cool thing that it can do is called face copy-paste, where we don't even need to draw anything and just take a few aspects of human faces that we would like to combine. And there you go. Absolutely amazing. This work is not without failure cases, however. You have probably noticed, but the AI is not explicitly instructed to match the eye colors, where some asymmetry may arise in the output. I am sure this will be improved just one more paper down the line, and I am really curious where digital artists will take these techniques in the near future. The objective is always to get out of the way and help the artist spend more time bringing their artistic vision to life and spend less time on the execution. This is exactly what these techniques can help with. What a time to be alive. What you see here is an instrumentation for a previous paper that we covered in this series, which was made by Weights and Biases. I think organizing these experiments really showcases the usability of their system. Weights and Biases provides tools to track your experiments in your deep learning projects.
Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you have an open source, academic or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or click the link in the video description to start tracking your experiments in 5 minutes. Our thanks to Weights and Biases for their long standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. If we write the laws of fluid motion into a computer program, we can create beautiful water simulations, like the one you see here. However, with all the progress in computer graphics research, we can not only simulate the water volume itself, but there are also efficient techniques to add foam, spray, and bubbles to this simulation. The even crazier thing is that this paper from 8 years ago can do all three in one go, and is remarkably simple for what it does. Just look at this heavenly footage, all simulated on a computer by using Blender, a piece of free and open source software, and the FLIP Fluids plugin. But all this has been possible for quite a while now, so what happened in the 8 years since this paper has been published? How has this been improved? Well, it's good to have bubbles in our simulation; however, in real life, bubbles have their individual densities and can coalesce at a moment's notice. This technique is able to simulate these events, and you will see that it offers much, much more. Now, let's marvel at three different phenomena in this simulation. First, the bubbles here are less dense than the water, and hence start to rise, then look at the interaction with the air. Now, after this, the bubbles that got denser than the water start sinking again, and all this can be done on your computer today. What a beautiful simulation! And now, hold on to your papers, because this method also adds simulating air pressure, which opens up the possibility for an interaction to happen at a distance. Look, first we start pushing the piston here. The layer of air starts to push the fluid, which weighs on the next air pocket, and so on. Such a beautiful phenomenon! And let's not miss the best part. When we pull the piston back, the emerging negative flux starts drawing the liquid back. One more time. Simulating all this efficiently is quite a technical marvel. When reading through the paper, I was very surprised to see that it is able to incorporate this air compression without simulating the air gaps themselves. A simulation without simulation, if you will. Let's simulate pouring water through the neck of the water cooler with a standard, already existing technique. For some reason, it doesn't look right, does it? So, what's missing here? We see a vast downward flow of liquid, therefore there also has to be a vast upward flow of air at the same time, but I don't see any of that here. Let's see how the new simulation method handles this. We start the downflow, and yes, huge air bubbles are coming up, creating this beautiful glugging effect. I think I now have a good guess as to what scientists are discussing over the water cooler in Professor Christopher Batty's research group. So, how long do we have to wait to get these results? You see, the quality of the outputs is nearly the same as the reference simulation; however, it takes less than half the amount of time to produce it. Admittedly, these simulations still take a few hours to complete, but it is absolutely amazing that these beautiful, complex phenomena can be simulated in a reasonable amount of time, and you know the drill, two more papers down the line, and it will be improved significantly. But we don't necessarily need a bubbly simulation to enjoy the advantages of this method.
In this scene, we see a detailed splash, where the one on the right here was simulated with the new method, and it also matches the reference solution, and it was more than three times faster. If you have a look at the paper in the video description, you will see how it simplifies the simulation by finding a way to identify regions of the simulation domain where not a lot is happening, and coarsen the simulation there. These are the green regions that you see here, and the paper refers to them as affine regions. As you see, the progress in computer graphics and fluid simulation research is absolutely stunning, and these amazing papers just keep coming out year after year. What a time to be alive! This episode has been supported by Weights and Biases. In this post, they show you how to use their system to train a neural network architecture called EfficientNet. The whole thing is beautifully explained there, so make sure to click the link in the video description. Weights and Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you have an open source, academic, or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers, or click the link in the video description to start tracking your experiments in five minutes. Our thanks to Weights and Biases for their long-standing support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
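As a small aside on why the bubbles in the earlier examples rise or sink, here is a toy buoyancy check. It ignores drag, added mass, and every other effect the real simulator handles, and the densities are made-up illustrative values.

```python
# Toy buoyancy check for the rising-and-sinking bubbles described above.
# Densities are illustrative, not taken from the paper.
def net_vertical_accel(rho_bubble, rho_water=1000.0, g=9.81):
    """Positive result: the bubble rises; negative: it sinks."""
    return (rho_water - rho_bubble) / rho_bubble * g

for rho in (1.2, 500.0, 1000.0, 1500.0):
    direction = "rises" if net_vertical_accel(rho) > 0 else "sinks or stays"
    print(f"bubble density {rho:7.1f} kg/m^3 -> {direction}")
```

A bubble less dense than the surrounding water rises and one that has become denser sinks, which is exactly the behavior the three-phase example in the video shows.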
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. In early 2019, a learning-based technique appeared that could perform common natural language processing operations, for instance, answering questions, completing texts, reading comprehension, summarization, and more. This method was developed by scientists at OpenAI and they called it GPT-2. A follow-up paper introduced a more capable version of this technique called GPT-3, and among many incredible examples, it could generate website layouts from a written description. The key idea in both cases was that we would provide it an incomplete piece of text, and it would try to finish it. However, no one said that these neural networks have to only deal with text information, and sure enough, in this work, scientists at OpenAI introduced a new version of this method that tries to complete not text, but images. The problem statement is simple. We give it an incomplete image, and we ask the AI to fill in the missing pixels. That is, of course, an immensely difficult task, because these images may depict any part of the world around us. It would have to know a great deal about our world to be able to continue the images, so how well did it do? Let's have a look. This is undoubtedly a cat. And look, see that white part that is just starting. The interesting part has been cut out of the image. What could that be? A piece of paper, or something else? Now, let's leave the dirty work to the machine and ask it to finish it. Wow, a piece of paper indeed, according to the AI, and it even has text on it. And the text has a heading section and a paragraph below it too. Truly excellent. You know what is even more excellent? Perhaps the best part. It also added the indirect illumination on the fur of the cat, meaning that it sees that a blue room surrounds it, and therefore some amount of the color bleeds onto the fur of the cat, making it bluer. I am a light transport researcher by trade, so I spend the majority of my life calculating things like this, and I have to say that this looks quite good to me. Absolutely amazing attention to detail. But it had more ideas. What's this? The face of the cat has been finished quite well in fact, but the rest, I am not so sure. If you have an idea what this is supposed to be, please let me know in the comments. And here go the rest of the results. All quite good. And the true, real image has been concealed from the algorithm. This is the reference solution. Let's see the next one. Oh my, scientists at OpenAI pulled no punches here, this is also quite nasty. How many stripes should this continue with? Zero, maybe; in any case, this solution is not unreasonable. I appreciate the fact that it continued the shadows of the humans. Next one. Yes, more stripes, great, but likely a few too many. Here are the remainder of the solutions, and the true reference image again. Let's have a look at this water droplet example too. We humans know that since we see the remnants of some ripples over there too, there must be a splash, but does the AI know? Oh yes, yes it does. Amazing. And the true image. Now, what about these little creatures? The first continuation finishes them correctly and puts them on a twig. The second one involves a stone. The third is my favorite, hold on to your papers, and look at this. They stand in the water and we can even see their mirror images. Wow. The fourth is a branch, and finally, the true reference image. This is one of its best works I have seen so far.
There are some more results, and note that these are not cherry-picked, or in other words, there was no selection process for the results. Nothing was discarded. These came out of the AI exactly as you see them. There is a link to these and to the paper in the video description, so make sure to have a look and let me know in the comments if you have found something interesting. So what about the size of the neural network for this technique? Well, it contains from 1.5 to about 7 billion parameters. Let's have a look together and find out what that means. These are the results from the GPT-2 paper, the previous version of the text processor, on a challenging reading comprehension test as a function of the number of parameters. As you see, around 1.5 billion parameters, which is roughly similar to GPT-2, it learned a great deal, but its understanding was nowhere near the level of human comprehension. However, as they grew the network, something incredible happened. Non-trivial capabilities started to appear as we approached 100 billion parameters. Look, it nearly matched the level of humans, and all this was measured on a nasty reading comprehension test. So this Image GPT has a number of parameters that is closer to GPT-2 than GPT-3, so we can maybe speculate that the next version could be, potentially, another explosion in capabilities. I can't wait to have a look at that. What a time to be alive! This episode has been supported by Weights and Biases. In this post, they show you how to use their system to visualize which part of the image your neural network looks at before it concludes that it is a cat. You can even try an example in an interactive notebook through the link in the video description. Weights and Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you have an open source, academic, or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers, or click the link in the video description to start tracking your experiments in 5 minutes. Our thanks to Weights and Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
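To make the "fill in the missing pixels" idea above a bit more concrete, here is a hypothetical sketch of autoregressive image completion. The stand-in model below returns random probabilities and ignores its context; the real Image GPT is a large transformer that conditions on every previously generated pixel.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_PIXEL_VALUES = 16                  # Image GPT-style reduced color palette

def toy_model(context):
    """Stand-in for a trained model: it ignores the context here, whereas the
    real transformer conditions on every previously generated pixel."""
    logits = rng.normal(size=NUM_PIXEL_VALUES)
    probs = np.exp(logits) / np.exp(logits).sum()
    return probs

def complete(known_pixels, total_pixels):
    """Given the known top part of an image (flattened), sample the rest,
    one pixel at a time, each conditioned on everything generated so far."""
    pixels = list(known_pixels)
    while len(pixels) < total_pixels:
        probs = toy_model(pixels)                               # condition on the prefix
        pixels.append(rng.choice(NUM_PIXEL_VALUES, p=probs))    # sample the next pixel
    return np.array(pixels)

top_half = rng.integers(0, NUM_PIXEL_VALUES, size=32 * 16)      # the given upper half
full = complete(top_half, 32 * 32)                              # fill in the lower half
print(full.shape)
```

Sampling rather than always picking the most likely pixel is what makes it possible to get several different, plausible completions of the same cat photo.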
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. The research field of image translation with the aid of learning algorithms has been on fire lately. For instance, this earlier technique would look at a large number of animal faces and could interpolate between them or, in other words, blend one kind of dog into another breed. But that's not all, because it could even transform dogs into cats or create these glorious plump cats and cheetahs. The results were absolutely stunning. However, it would only work on the domains it was trained on. In other words, it could only translate to and from species that it took the time to learn about. This new method offers something really amazing. It can handle multiple domains or multiple breeds, if you will, even ones that it hadn't seen previously. That sounds flat out impossible, so let's have a look at some results. This dog will be used as content, therefore the output should have a similar pose, but its breed has to be changed to this one. But there is one little problem. And that problem is that the AI has never seen this breed before. This will be very challenging because we only see the head of the dog used for style. Should the body of the dog also get curly hair? You only know if you know this particular dog breed, or if you're smart and can infer missing information by looking at other kinds of dogs. Let's see the result. Incredible. The remnants of the leash also remain there in the output results. It also did a nearly impeccable job with this bird, where again the style image is from a previously unseen breed. Now, this is of course a remarkably difficult problem domain; translating into different kinds of animals that you know nothing about, apart from a tiny, cropped image, would be quite a challenge even for a human. However, this one is not the first technique to attempt to solve it, so let's see how it stacks up against a previous method. This one is from just a year ago, and you will see in a moment how much this field has progressed since then. For instance, in this output we get two dogs, which seems to be a mix of the content and the style dog. And while the new method still seems to have some structural issues, the dog type and the pose is indeed correct. The rest of the results also appear to be significantly better. But what do you think? Did you notice something weird? Let me know in the comments below. And now, let's transition into image interpolation. This will be a touch more involved than previous interpolation efforts. You see, in this previous paper we had a source and a target image, and the AI was asked to generate intermediate images between them. Simple enough. In this case, however, we have not two but three images as an input. There will be a content image. This will provide the high-level features, such as pose, and its style is going to transition from this to this. The goal is that the content image remains intact while transforming one breed or species into another. This particular example is one of my favorites. Such a beautiful transition, and surely not all, but many of the intermediate images could stand on their own. Again, the style images are from unseen species. Not all cases do this well with the intermediate images, however. Here we start with one eye, because the content and the style image have one eye visible, while the target style of the owl has two. How do we solve that? Of course, with binocular vision. Look. Very amusing.
Loving this example, especially how impossible it seems, because the owl is looking into the camera with both eyes while we see its backside below the head. If it looked to the side like the input content image, this might be a possibility, but with this contorted body posture, I am not so sure, so I'll give it a pass on this one. So there you go, transforming one known animal into a different one that the AI has never seen before. And it is already doing a more than formidable job at that. What a time to be alive. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000 and V100 instances and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
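For a rough idea of the content-plus-style recipe described above, here is a minimal, hypothetical sketch in PyTorch. It is nowhere near the real model, which is trained adversarially on many species; it only shows one content encoder, one style encoder, and a decoder modulated by the style code.

```python
import torch
import torch.nn as nn

class ToyTranslator(nn.Module):
    """Toy content/style translator: the content image supplies the structure,
    the style image supplies a code that modulates the decoder."""
    def __init__(self, ch=16, style_dim=8):
        super().__init__()
        self.content_enc = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        self.style_enc = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                       nn.Linear(3, style_dim), nn.ReLU())
        self.to_scale_shift = nn.Linear(style_dim, 2 * ch)
        self.decoder = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, content_img, style_img):
        c = self.content_enc(content_img)                       # keeps pose / structure
        s = self.style_enc(style_img)                            # compact style code
        scale, shift = self.to_scale_shift(s).chunk(2, dim=1)
        c = c * (1 + scale[..., None, None]) + shift[..., None, None]  # AdaIN-style modulation
        return torch.sigmoid(self.decoder(c))

model = ToyTranslator()
out = model(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 3, 64, 64])
```

The interpolation shown in the video can be imitated in this setup by blending two style codes, for instance s = (1 - t) * s1 + t * s2, before decoding, which is why many of the intermediate images can stand on their own.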
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Let's talk about video super-resolution. The problem statement is simple: in goes a coarse video, the technique analyzes it, guesses what's missing, and out comes a detailed video. However, of course, reliably solving this problem is anything but simple. When learning-based algorithms were not nearly as good as they are today, this problem was mainly handled by handcrafted techniques, but they had their limits. After all, if we don't see something too well, how could we tell what's there? And this is where new learning-based methods, especially this one, come into play. This is a hard enough problem for even a still image, yet this technique is able to do it really well, even for videos. Let's have a look. The eye color for this character is blurry, but we see that it likely has a greenish, blueish color, and if we gave this problem to a human, this human would know that we are talking about the eye of another human and we know roughly what this should look like in reality. A human would also know that this must be a bridge and finish the picture. But, what about computers? The key is that if we have a learning algorithm that looks at the coarse and fine version of the same video, it will hopefully learn what it takes to create a detailed video when given a poor one, which is exactly what happened here. As you see, we can give it very little information and it was able to add a stunning amount of detail to it. Now, of course, super-resolution is a highly studied field these days, therefore it is a requirement for a good paper to compare to quite a few previous works. Let's see how it stacks up against those. Here, we are given a blocky image of this garment, and this is the reference image that was coarsened to create this input. The reference was carefully hidden from the algorithms and only we have it. Previous works could add some details, but the results were nowhere near as good as the reference. So what about the new method? My goodness, it is very close to the real deal. Previous methods also had trouble resolving the details of this region, where the new method is, again, very close to reality. It is truly amazing how much this technique understands the world around us from just this training set of low- and high-resolution videos. Now, if you have a closer look at the author list, you see that Nils Thuerey is also there. He is a fluid and smoke person, so I thought there had to be an angle here for smoke simulations. And yep, there we go. To even have a fighting chance of understanding the importance of this sequence, let's go back to one of Nils' earlier works, which is one of the best papers ever written, Wavelet Turbulence. That's a paper from 12 years ago. Now, some of the more seasoned Fellow Scholars among you know that I bring this paper up every chance I get, but especially now that it connects to this work we are looking at. You see, Wavelet Turbulence was an algorithm that could take a coarse smoke simulation after it has been created and add fine details to it. In fact, so many fine details that creating the equivalently high-resolution simulation would have been near impossible at the time. However, it did not work with images; it required knowledge about the inner workings of the simulator. For instance, it would need to know about the velocities and pressures at different points in the simulation.
Now, this new method can do something very similar, and all it does is just look at the image itself and improve it without even looking into the simulation data. Even though the flaws in the output are quite clear, the fact that it can add fine details to a rapidly moving smoke plume is an incredible feat. If you look at the comparison against CycleGAN, a technique from just three years ago, this is just a few more papers down the line, and you see that this has improved significantly. And the new one is also more careful with temporal coherence or, in other words, there is no flickering arising from solving the adjacent frames in the video differently. Very good. And if we look a few more papers down the line, we may just get a learning-based algorithm that does so well at this task that we would be able to rewatch any old footage in super high quality. What a time to be alive! What you see here is an instrumentation for a previous paper that we covered in this series, which was made by Weights and Biases. I think organizing these experiments really showcases the usability of their system. Weights and Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you have an open source, academic or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or click the link in the video description to start tracking your experiments in 5 minutes. Our thanks to Weights and Biases for their long standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
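Here is a hypothetical sketch of how the low- and high-resolution training pairs mentioned above can be produced and used. The 4x factor, the tiny upscaler, and the plain L1 loss are assumptions for illustration; the paper's method uses a much larger network with adversarial and temporal terms on top of this.

```python
import torch
import torch.nn.functional as F

def make_pair(high_res_frame, factor=4):
    """Coarsen a detailed frame so the network can learn to reverse the process."""
    low_res = F.interpolate(high_res_frame, scale_factor=1 / factor,
                            mode="bilinear", align_corners=False)
    return low_res, high_res_frame

# A deliberately tiny upscaler, standing in for the real super-resolution network.
upscaler = torch.nn.Sequential(
    torch.nn.Upsample(scale_factor=4, mode="bilinear"),
    torch.nn.Conv2d(3, 3, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(upscaler.parameters(), lr=1e-4)

high = torch.rand(1, 3, 256, 256)            # stands in for one detailed video frame
low, target = make_pair(high)

optimizer.zero_grad()
loss = F.l1_loss(upscaler(low), target)      # reconstruction loss on the detailed frame
loss.backward()
optimizer.step()
print(low.shape, loss.item())
```

The key point is that the supervision comes for free: every detailed video can be coarsened to manufacture its own training pair.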
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. When we look at the cover page of a magazine, we often see lots of well-made, but also idealized photos of people. Idealized here means that the photographer made them in a studio where they can add or remove light sources and move them around to bring out the best from their models. But most photos are not made in the studio, they are made out there in the wild where the lighting is what it is and we can't control it too much. So with that, today our question is: what if we could change the lighting after the photo has been made? This work proposes a cool technique to perform exactly that by enabling us to edit the shadows on a portrait photo that we would normally think of deleting. Many of these have to do with the presence of shadows, and you can see here that we can really edit these after the photo has been taken. However, before we start taking a closer look at the editing process, we have to note that there are different kinds of shadows. One, there are shadows cast on us by external objects, let's call them foreign shadows, and there is self-shadowing, which comes from the model's own facial features. Let's call those facial shadows. So why divide them into two classes? For example, because we typically seek to remove foreign shadows and edit facial shadows. The removal part can be done with a learning algorithm, provided that we can teach it with a lot of training data. Let's think about ways to synthesize such a large dataset. Let's start with the foreign shadows. We need image pairs of test subjects with and without shadows to have a neural network learn about their relations. Since removing shadows is difficult without further interfering with the image, the authors opted to do it the other way around. In other words, they take a clean photo of the subject, that's the one without the shadows, and then add shadows to it algorithmically. Very cool. And the results are not bad at all, and get this, they even accounted for subsurface scattering, which is the scattering of light under our skin. That makes a great deal of a difference. This is a reference from a paper we wrote with scientists at the University of Zaragoza and the Activision Blizzard company to add this beautiful effect to their games. Here is a shadow edge without subsurface scattering, quite dark. And with subsurface scattering, you see this beautiful glowing effect. Subsurface scattering indeed makes a great deal of difference around hard shadow edges, so a huge thumbs up for the authors for including an approximation of that. However, the synthesized photos are still a little suspect. We can still tell that they are synthesized. And that is kind of the point. Our question is: can a neural network still learn the difference between a clean and a shadowy photo despite all this? And as you see, the problem is not easy. These methods did not do too well on these examples when you compare them to the reference solution. And let's see this new method. Wow, I can hardly believe my eyes, nearly perfect, and it learned all this on not real but synthetic images. And believe it or not, this was only the simpler part. Now comes the hard part. Let's look at how well it performs at editing the facial shadows. We can pretend to edit both the size and the intensity of these light sources. The goal is to have a little more control over the shadows in these photos, but whatever we do with them, the outputs still have to remain realistic.
Here are the before and after results. The facial shadows have been weakened, and depending on our artistic choices, we can also soften the image a great deal. Absolutely amazing. As a result, we now have a two-step algorithm that first removes foreign shadows and is able to soften the remainder of the facial shadows, creating much more usable portrait photos of our friends, and all this after the photo has been made. What a time to be alive. Now of course, even though this technique convincingly beats previous works, it is still not perfect. The algorithm may fail to remove some highly detailed shadows. You can see how the shadow of the hair remains in the output. In this other output, the hair shadows are handled a little better. There is some dampening, but the symmetric nature of the facial shadows here puts the output results in an interesting no-man's land where the opacity of the shadow has been decreased, but the result looks unnatural. I can't wait to see how this method will be improved two more papers down the line. I will be here to report on it to you, so make sure to subscribe and hit the bell icon to not miss out on that. This episode has been supported by Weights and Biases. In this post, they show you how to use their system to train distributed models with Keras. You get all the required source code to make this happen, so make sure to click the link in the video description. Weights and Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you have an open source, academic, or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers, or click the link in the video description to start tracking your experiments in 5 minutes. Our thanks to Weights and Biases for their long standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
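As a toy illustration of the data synthesis idea described above, here is a hypothetical sketch that composites a soft-edged "foreign" shadow onto a clean photo to form a (shadowed, clean) training pair. The half-plane occluder, the softness, and the darkness values are made up; the real pipeline also models the color of the shadow and the subsurface scattering effect discussed in the video.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_foreign_shadow(clean, darkness=0.55, softness=12.0):
    """Darken one side of the image behind a soft, randomly oriented edge,
    standing in for the silhouette of an external occluder."""
    h, w, _ = clean.shape
    yy, xx = np.mgrid[0:h, 0:w]
    angle = rng.uniform(0, 2 * np.pi)
    offset = rng.uniform(-0.25, 0.25) * min(h, w)
    signed_dist = (xx - w / 2) * np.cos(angle) + (yy - h / 2) * np.sin(angle) + offset
    mask = 1.0 / (1.0 + np.exp(-signed_dist / softness))    # soft shadow edge
    shade = 1.0 - darkness * mask[..., None]
    return np.clip(clean * shade, 0.0, 1.0)

clean_photo = rng.uniform(0.2, 1.0, size=(128, 128, 3))      # stands in for a clean portrait
shadowed = add_foreign_shadow(clean_photo)
training_pair = (shadowed, clean_photo)                       # network input and target
print(shadowed.shape)
```

Feeding many such pairs to a network teaches it the mapping from the shadowed photo back to the clean one, which is the foreign-shadow-removal half of the two-step algorithm.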
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. A computer graphics paper from approximately three years ago was able to simulate the motion of these bubbles and even these beautiful collision events between them in a matter of milliseconds. This was approximately 300 episodes ago, and in this series we always say that two more papers down the line and this will be improved significantly. So now, once again, here goes another one of those Two Minute Papers moments of truth. Now, three years later, let's see how this field evolved. Let's fire up this new technique that will now simulate the evolution of two cube-shaped droplets for us. In reality, mother nature would make sure that the surface area of these droplets is minimized. Let's see. Yes, droplets form immediately, so the simulation program understands surface tension, and the collision event is also simulated beautifully. A+. However, this was possible with previous methods, for instance a paper by the name Surface-Only Liquids could also pull it off, so what's new here? Well, let's look under the hood and find out. Oh yes, this is different. You see, normally if we do this breakdown, we get triangle meshes. This is typically how these surfaces are represented, but I don't see any meshes here, I see particles. Great, but what does this enable us to do? Look here. If we break down the simulation of this beautiful fluid polygon, we see that there is not only one kind of particle here, there are three kinds. With light blue, we see sheet particles, the yellow ones are filament particles, and if we look inside, with dark blue here, you see volume particles. With these building blocks and the proposed new simulation method, we can create much more sophisticated surface tension related phenomena. So let's do exactly that. For instance, here you see soap membranes stretching due to wind flows. They get separated, lots of topological changes take place, and the algorithm handles it correctly. In another example, this soap bubble has been initialized with a hole, and you can see it cascading through the entire surface. Beautiful work. And after we finish the simulation of these fluid chains, we can look under the hood and see how the algorithm thinks about this piece of fluid. Once again, with dark blue, we have the particles that represent the inner volume of the water chains, and there is a thin layer of sheet particles holding them together. What a clean and beautiful visualization. So how long do we have to wait to get these results? A bit. Simulating this fluid chain example took roughly 60 seconds per frame. This droplet on a plane example runs approximately 10 times faster than that; it only needs 6.5 seconds for each frame. This was one of the cheaper scenes in the paper, and you may be wondering which one was the most expensive. This water bell took almost 2 minutes for each frame. And here, when you see this breakdown from the particle color coding, you know exactly what we are looking at. Since part of this algorithm runs on your processor, and a different part on your graphics card, there is plenty of room for improvements in terms of the computation time for a follow-up paper. And I cannot wait to see these beautiful simulations in real time two more papers down the line. What a time to be alive. I also couldn't resist creating a slow motion version of some of these videos; if this is something that you wish to see, make sure to click our Instagram page link in the video description for more.
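For intuition about the sheet, filament, and volume particles mentioned above, here is a hypothetical toy classifier that looks at how many directions a particle's neighborhood extends in. This PCA-style rule is only an illustration and is not the criterion used in the paper.

```python
import numpy as np

def classify(particle, neighbors, thin=0.05):
    """Label a particle by the shape of its local neighborhood:
    one dominant direction -> filament, two -> sheet, three -> volume."""
    offsets = neighbors - particle
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(offsets.T)))[::-1]
    eigvals = eigvals / eigvals[0]                 # normalize by the largest spread
    if eigvals[1] < thin:                          # extends along one direction only
        return "filament"
    if eigvals[2] < thin:                          # extends along two directions
        return "sheet"
    return "volume"

rng = np.random.default_rng(0)
p = np.zeros(3)
flat_patch  = rng.normal(size=(40, 3)) * [1.0, 1.0, 0.01]   # almost planar neighborhood
thin_strand = rng.normal(size=(40, 3)) * [1.0, 0.01, 0.01]  # almost one-dimensional
blob        = rng.normal(size=(40, 3))                      # fully three-dimensional
print(classify(p, flat_patch), classify(p, thin_strand), classify(p, blob))
```

Whatever the exact rule in the paper, the payoff is the same: once thin films, strands, and bulk liquid are told apart, each can be given the surface tension treatment it needs.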
This episode has been supported by Linode. Linode is the world's largest independent cloud computing provider. Linode gives you full back-end access to your server, which is your step up to powerful, fast, fully configurable cloud computing. Linode also has one-click apps that streamline your ability to deploy websites, personal VPNs, game servers, and more. If you need something as small as a personal online portfolio, Linode has your back, and if you need to manage tons of clients' websites and reliably serve them to millions of visitors, Linode can do that too. What's more, they offer affordable GPU instances featuring the Quadro RTX 6000, which is tailor-made for AI, scientific computing, and computer graphics projects. If only I had access to a tool like this while I was working on my last few papers. To receive $20 in credit on your new Linode account, visit linode.com slash papers, or click the link in the video description and give it a try today. Our thanks to Linode for supporting the series and helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Approximately three years ago, a magical learning-based algorithm appeared that was capable of translating a photorealistic image of a zebra into a horse, or the other way around, could transform apples into oranges, and more. Later, it became possible to do this even without the presence of a photorealistic image, where all we needed was a segmentation map. This segmentation map provides labels on what should go where; for instance, this should be the road, here will be trees, traffic signs, other vehicles, and so on. And the output was a hopefully photorealistic video, and you can see here that the results were absolutely jaw-dropping. However, look, as time goes by, the backside of the car morphs and warps over time, creating unrealistic results that are inconsistent even on the short term. In other words, things change around from second to second, and the AI does not appear to remember what it did just a moment ago. This kind of consistency was solved surprisingly well in a follow-up paper from NVIDIA in which an AI would look at the footage of a video game, for instance, Pac-Man, for approximately 120 hours; then we could shut down the video game, and the AI would understand the rules so well that it could recreate the game, one that we could even play with. It had memory and used it well, and therefore it could enforce a notion of world consistency, or, in other words, if we return to a state of the game that we visited before, it will remember to present us with very similar information. So the question naturally arises: would it be possible to create a photorealistic video from the segmentation maps that is also consistent? And in today's paper, researchers at NVIDIA propose a new technique that requests some additional information, for instance, a depth map that provides a little more information on how far different parts of the image are from the camera. Much like the Pac-Man paper, this also has memory, and I wonder if it is able to use it as well as that one did. Let's test it out. This previous work is currently looking at a man with a red shirt. We slowly look away, disregard the warping, and when we go back, hey, do you see what I see here? The shirt became white. This is not because the person is one of those artists who can change their clothes in less than a second, but because this older technique did not have a consistent internal model of the world. Now, let's see the new one. Once again, we start with the red shirt, look away, and then, yes, the same red-to-blue gradient. Excellent. So it appears that this new technique also reuses information from previous frames efficiently. It is finally able to create a consistent video with much less morphing and warping, and even better, we have this advantageous consistency property, where if we look at something that we looked at before, we will see very similar information there. But there is more. Additionally, it can also generate scenes from new viewpoints, which we also refer to as neural rendering. And as you see, the two viewpoints show similar objects, so the consistency property holds here too. And now, hold onto your papers, because we do not necessarily have to produce these semantic maps ourselves. We can let the machines do all the work by firing up a video game that we like, request that the different object classes are colored differently, and get this input for free. And then, the technique generated a photorealistic video from the game graphics.
Absolutely amazing. Now note that it is not perfect. For instance, it has a different notion of time as the clouds are changing in the background rapidly. And look, at the end of the sequence, we get back to our starting point and the first frame that we saw is very similar to the last one. The consistency works here too. Very good. I have no doubt that two more papers down the line and this will be even better. And for now, we can create consistent photorealistic videos even if all we have is freely obtained video game data. What a time to be alive. If you are a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you that they are offering GPU cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser. And finally, hold onto your papers because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to lambdalabs.com slash papers and sign up for one of their amazing GPU instances today. Our thanks to Lambda for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. Perhaps the best part of being a computer graphics researcher is creating virtual worlds on a daily basis and computing beautiful simulations in these worlds. And what you see here is not one, but two kinds of simulations. One is a physics simulation that computes how these objects move, and the other is a light transport simulation that computes how these objects look. In this video, we will strictly talk about the physics simulation part of what you see on the screen. To simulate these beautiful phenomena, many recent methods build on top of a technique called the material point method. This is a hybrid simulation technique that uses both particles and grids to create these beautiful animations; however, when used by itself, we can come up with a bunch of cases that it cannot simulate properly. One such example is cracking and tearing phenomena, which have been addressed in this great paper that we discussed earlier this year. With this method, we could smash Oreos, candy crabs, pumpkins, and much, much more. It even supported tearing this bread apart. This already looks quite convincing, and in this series, I always say: two more papers down the line and it will be improved significantly. Today, we are going to have a two-minute papers moment of truth, because this is a follow-up work from Joshuah Wolper, the first author of the previous bread paper, and you can immediately start holding onto your papers because this work is one of the finest I have seen as of late. With this, we can enrich our simulations with anisotropic damage and elasticity. So what does that mean exactly? This means that it supports more extreme topological changes in these virtual objects. This leads to better material separation when the damage happens. For instance, if you look here on the right, this was done by a previous method. At first sight, it looks good, there is some bouncy behavior here, but the separation line is a little too clean. Now, let's have a look at the new method. Woohoo! Now, that's what I'm talking about. Let's have another look. I hope you now see what I meant by the previous separation line being a little too clean. Remarkably, it also supports changing a few intuitive parameters like eta, the crack propagation speed, which we can use to further tailor the simulation to our liking. Artists are going to love this. We can also play with the Young's modulus, which describes the material's resistance against fractures. On the left, it is quite low and makes the material tear apart easily, much like a sponge. As we increase it a bit, we get a stiffer material, which gives us this glorious, floppy behavior. Let's increase it even more and see what happens. Yes, it is more resistant against damage; however, in return, it gives us some more vibrations after the break. It is not only realistic, but it also gives us the choice with these parameters to tailor our simulation results to our liking. Absolutely incredible. Now then, if you have been holding onto your paper so far, now squeeze that paper, because previous methods were only capable of tearing off a small piece or only a strip of this virtual pork. Let's see what this new work will do. Yes, it can also simulate peeling off an entire layer. Glorious. But it's not the only thing we can peel. It can also deal with small pieces of this mozzarella cheese.
I must admit that I have never done this myself, so this will be the official piece of homework for me and for the more curious minds out there after watching this video. Let me know in the comments if it went the same way in your kitchen as it did in the simulation here. You get extra credit if you post the picture too. And finally, if we tear this piece of meat apart, you see that it takes into consideration the location of the fibers, and the tearing takes place not in an arbitrary way, but much like in reality, it tears along the muscle fibers. So, how fast is it? We still have to wait a few seconds for each frame in these simulations. None of them took too long. There is a fish-tearing experiment in the paper that went very quickly. Half a second for each frame is a great deal. The pork experiment took nearly 40 seconds for each frame, and the most demanding experiments involved a lance and bones. Frankly, they were a little too horrific to be included here, even for virtual bodies, but if you wish to have a look, make sure to click the paper in the video description. But wait, are you seeing what I am seeing? These examples took more than a thousand times longer to compute. Goodness, how can that be? Look, as you see here, in these cases, the delta t time step is extremely tiny, which means that we have to advance the simulation with tiny, tiny time steps, and that takes much longer to compute. How tiny? Quite. In this case, we have to advance the simulation one millionth of a second at a time. The reason for this is that bones have an extremely high stiffness, which makes this method much less efficient. And of course, you know the drill: two more papers down the line, and this may run interactively on a consumer machine at home. So, what's the verdict? Algorithm design: A plus. Exposition: A plus. Quality of presentation: A double plus. And this is still just Mr. Wolper's third paper in computer graphics. Unreal. And we researchers even get paid to create beautiful works like this. I also couldn't resist creating a slow-motion version of some of these videos, so if this is something that you wish to see, make sure to visit our Instagram page in the video description for more. This episode has been supported by Snap Inc. What you see here is Snap ML, a framework that helps you bring your own machine learning models to Snapchat's AR Lenses. You can build augmented reality experiences for Snapchat's hundreds of millions of users and help them see the world through a different lens. You can also apply to Snap's AR creator residency program with a proposal of how you would use Lens Studio for a creative project. If selected, you could receive a grant of between 1 and 5,000 dollars and work with Snap's technical and creative teams to bring your ideas to life. It doesn't get any better than that. Make sure to go to the link in the video description and apply for their residency program and try Snap ML today. Our thanks to Snap Inc for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
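As a small addendum to this episode: to get a feel for why very stiff materials like bone force such tiny time steps, here is a back-of-the-envelope sketch. This is not the paper's time stepping rule, just a generic CFL-style estimate with illustrative material parameters: elastic waves travel at roughly the square root of the Young's modulus divided by the density, and an explicit solver cannot let information travel more than about one grid cell per step.

```python
import math

def stable_dt(youngs_modulus_pa, density_kg_m3, grid_dx_m, cfl=0.5):
    """Back-of-the-envelope explicit time step limit.

    Elastic waves travel at roughly c = sqrt(E / rho); an explicit
    integrator should not let information cross more than about one
    grid cell per step, so dt is capped by cfl * dx / c.
    """
    wave_speed = math.sqrt(youngs_modulus_pa / density_kg_m3)
    return cfl * grid_dx_m / wave_speed

dx = 1e-3  # assume a 1 mm grid spacing (purely illustrative)

# Soft, sponge-like material (illustrative parameters).
print(f"soft material: dt ~ {stable_dt(1e5, 1000, dx):.1e} s")

# Stiff, bone-like material (illustrative parameters).
print(f"bone-like:     dt ~ {stable_dt(1.5e10, 1900, dx):.1e} s")
```

With these made-up numbers, the soft material tolerates steps of tens of microseconds, while the bone-like one drops to a fraction of a microsecond, which is the kind of gap that makes those last experiments so much more expensive.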
|
Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir. In early 2019, a learning-based technique appeared that could perform common natural language processing operations, for instance, answering questions, completing text, reading comprehension, summarization, and more. This method was developed by scientists at OpenAI and they called it GPT2. The goal was to be able to perform this task with as little supervision as possible. This means that they unleashed this algorithm to read the internet and the question was, what would the AI learn during this process? That is a tricky question. And to be able to answer it, have a look at this paper from 2017, where an AI was given a bunch of Amazon product reviews and the goal was to teach it to be able to generate new ones or continue a review when given one. Then, something unexpected happened. The finished neural network used surprisingly few neurons to be able to continue these reviews. They noticed that the neural network has built up a knowledge of not only language, but also built a sentiment detector as well. This means that the AI recognized that in order to be able to continue a review, it not only needs to learn English, but also needs to be able to detect whether the review seems positive or not. If we know that we have to complete a review that seems positive from a small snippet, we have a much easier time doing it well. And now, back to GPT2, as it was asked to predict the next character in sentences of not reviews, but of any kind, we asked what this neural network would learn. Well, now we know that of course, it learns whatever it needs to learn to perform the sentence completion properly. And to do this, it needs to learn English by itself, and that's exactly what it did. It also learned about a lot of topics to be able to discuss them well. What topics? Let's see. We gave it a try and I was somewhat surprised when I saw that it was able to continue a two-minute paper script even though it seems to have turned into a history lesson. What was even more surprising is that it could shoulder the two-minute papers test, or, in other words, I asked it to talk about the nature of fluid simulations and it was caught cheating red-handed. But then, it continued in a way that was not only coherent, but had quite a bit of truth to it. Note that there was no explicit instruction for the AI apart from it being unleashed on the internet and reading it. And now, the next version appeared by the name GPT3. This version is now more than 100 times bigger, so our first question is how much better can an AI get if we increase the size of a neural network? Let's have a look together. These are the results on a challenging reading comprehension test as a function of the number of parameters. As you see, around 1.5 billion parameters, which is roughly equivalent to GPT2, it has learned a great deal, but its understanding is nowhere near the level of human comprehension. However, as we grow the network, something incredible happens. Non-trivial capabilities start to appear as we approach the 100 billion parameters. Look, it nearly matched the level of humans. My goodness! This was possible before, but only with neural networks that are specifically designed for a narrow test. In comparison, GPT3 is much more general. Let's test that generality and have a look at 5 practical applications together. 
One, OpenAI made this AI accessible to a lucky few people, and it turns out it has read a lot of things on the internet, which contains a lot of code, so it can generate website layouts from a written description. Two, it also learned how to generate properly formatted plots from a tiny prompt written in plain English, not just one kind, many kinds. Perhaps to the joy of technical PhD students around the world, three, it can properly typeset mathematical equations from a plain English description as well. Four, it understands the kind of data we have in a spreadsheet, in this case, population data, and fills in the missing parts correctly. And five, it can also translate a complex legal text into plain language, or the other way around, in other words, it can also generate legal text from our simple descriptions. And as you see here, it can do much, much more. I left a link to all of these materials in the video description. However, of course, this iteration of GPT also has its limitations. For instance, we haven't seen the extent to which these examples are cherry-picked, or, in other words, for every good output that we marvel at, there might have been one, or a dozen, tries that did not come out well. We don't exactly know. But the main point is that working with GPT-3 is a really peculiar process where we know that a vast body of knowledge lies within, but it only emerges if we can bring it out with properly written prompts. It almost feels like a new kind of programming that is open to everyone, even people without any programming or technical knowledge. If a computer is a bicycle for the mind, then GPT-3 is a fighter jet. Absolutely incredible. And to say that the paper is vast would be an understatement; we only scratch the surface of what it can do here, so make sure to have a look if you wish to know more about it. The link is available in the video description. I can only imagine what we will be able to do with GPT-4 and GPT-5 in the near future. What a time to be alive. What you see here is an instrumentation for a previous paper that we covered in this series, which was made by weights and biases. I think organizing these experiments really showcases the usability of their system. Weights and Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you have an open source, academic, or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wnbe.com slash papers, or click the link in the video description to start tracking your experiments in 5 minutes. Our thanks to weights and biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
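As a small addendum to this episode: to make the "prompting as a new kind of programming" idea a bit more concrete, here is a minimal sketch of how a few-shot prompt is typically assembled. The `complete` call is a hypothetical placeholder for whatever text-completion backend one has access to, and the example pair is made up; the point is only that nothing is trained, we simply show the model the pattern we want it to continue.

```python
def build_prompt(examples, query):
    """Assemble a few-shot prompt from (input, output) example pairs."""
    lines = ["Translate legal text into plain English.", ""]
    for legal, plain in examples:
        lines.append(f"Legal: {legal}")
        lines.append(f"Plain: {plain}")
        lines.append("")
    lines.append(f"Legal: {query}")
    lines.append("Plain:")  # the model is asked to fill in what comes next
    return "\n".join(lines)

examples = [
    ("The lessee shall remit payment no later than the fifth day of each month.",
     "The renter must pay by the 5th of every month."),
]
prompt = build_prompt(examples, "The party of the first part waives all claims.")

# answer = complete(prompt)  # hypothetical call to a text-completion model
print(prompt)
```

Everything the "program" does is encoded in that plain-text prompt, which is exactly why people without a technical background can experiment with it too.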
|
Dear Fellow Scholars, this is two minute papers with Dr. Károly Zsolnai-Fehér. I recommend that you immediately start to hold onto your papers, because this work is about creating physics simulations in higher than three spatial dimensions. The research in this paper is used to create this game that takes place in a 4D world, and it is just as beautiful as it is confusing. And, I'll be honest, the more we look, the less this world seems to make sense to us. So let's try to understand what is happening here together. You see an example of a 4D simulation here. The footage seems very cool, but I can't help but notice that sometimes things just seem to disappear into the ether. In other cases they suddenly appear from nowhere. Hmm, how can that happen? Let's decrease the dimensionality of the problem, and suddenly we will easily understand how this is possible. Here we take a 2D slice of our 3D world, start the simulation, and imagine that we only see what happens on this slice. We see nothing else, just the slice. That sounds fine, things move around freely, nothing crazy going on, and suddenly, hey, things slowly disappear. The key in this example is that luckily we can also look at the corresponding 3D simulation on your display. You can see not only the 2D slice, but everything around it. So, the playbook is as follows. If something disappears, it means that it went back or forward in the 3rd dimension. So what we see on the slice is not the object itself, but its intersection with the 3D object. The smaller the intersection, the more we are seeing just the side of the sphere, and the smaller the object appears to be. Or, even better, look, the ball is even floating in the air. When we look at the 3D world, we see that this only appears like that, but in reality, what happens is that we see the side of the sphere. Really cool. We now understand that slicing can create a mirage that seems as if objects were suddenly receding, disappearing, and even floating. Just imagine how confusing this would be if we were this 2D character. Actually, you don't even need to imagine that, because this piece of research work was made to be able to create computer games where we are a 3D character in a 4D world, so we can experience all of these confusing phenomena. However, we get some help in decoding this situation, because even though this example runs in 3D, when something disappears into the ether, we can move along the 4th dimension and find it. So good. Floating behavior can also happen, and now we know exactly why. But wait, in this 3D domain, even more confusing new things happen. As you see here, in 3D, cubes seem to be changing shape, and the reason for this is the same, we just see a higher-dimensional object's intersection with our lower-dimensional space. And the paper does not only extend 3D rigid body dynamics simulations to 4D, but to any higher dimension. You see, this kind of footage can only be simulated because it also proposes a way to compute collisions, static and kinetic friction, and similar physical forces in arbitrary dimensions. And now, hold onto your papers, because this framework can also be used to make amazing mind-bending game puzzles where we can miraculously unbind seemingly impossible configurations by reaching into the 4th dimension. This is one of those crazy research works that truly cannot be bothered by recent trends and hot topics, and I mean it in the best possible way. It creates its own little world and invites us to experience it, and it truly is a sight to behold.
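To get a feel for the slicing idea described above, here is a tiny sketch of the lower-dimensional analogy: what we see in our slice is the intersection of a higher-dimensional ball with that slice, so as the ball's centre drifts along the extra axis, its apparent size shrinks and it eventually vanishes. This is just the geometric intuition, not the paper's simulation code.

```python
import math

def slice_radius(radius_4d, w_offset):
    """Radius of the 3D ball we see when a 4D ball crosses our 3D slice.

    The slice is the hyperplane w = 0, and the 4D ball has its centre at
    distance `w_offset` along the 4th axis. If the ball does not reach
    the slice at all, we see nothing.
    """
    if abs(w_offset) >= radius_4d:
        return 0.0  # the object has "disappeared into the ether"
    return math.sqrt(radius_4d**2 - w_offset**2)

R = 1.0
for w in [0.0, 0.5, 0.9, 0.99, 1.1]:
    print(f"centre at w = {w:4.2f} -> apparent radius {slice_radius(R, w):.3f}")
```

The same formula with one dimension knocked off is exactly the 2D-slice-of-a-3D-ball example from the narration.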
I can only imagine how one of these soft body or fluid simulations would look in higher dimensions, and boy, would I love to see some follow-up papers perform something like this. If you wish to see more, make sure to have a look at the paper in the video description, and I also put a link to one 4D game that is under development and one that you can buy and play today. Huge congratulations to Marc ten Bosch, who got this single-author paper accepted to SIGGRAPH, perhaps the most prestigious computer graphics conference. Not only that, but I am pretty sure that I have never seen a work from an indie game developer presented as a SIGGRAPH technical paper. Bravo! This episode has been supported by weights and biases. In this post they show you how to use their system with TensorBoard and visualize your results with it. You can even try an example in an interactive notebook through the link in the video description. Weights and Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you have an open source, academic or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wnb.com slash papers or click the link in the video description to start tracking your experiments in 5 minutes. Our thanks to weights and biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is 2 Minute Papers with Dr. Károly Zsolnai-Fehér. In computer games and all kinds of applications where we are yearning for realistic animation, we somehow need to tell our computers how the different joints and body parts of these virtual characters are meant to move around over time. Since the human eye is very sensitive to even the tiniest inaccuracies, we typically don't program these motions by hand, but instead we often record a lot of real-time motion capture data in a studio and try to reuse that in our applications. Previous techniques have tackled quadruped control, and we can even teach bipeds to interact with their environment in a realistic manner. Today we will have a look at an absolutely magnificent piece of work where the authors carved out a smaller subproblem and made a solution for it that is truly second to none. And this subproblem is simulating virtual characters playing basketball. Like with previous works, we are looking for realism in the movement, and for games it is also a requirement that the character responds to our controls well. However, the key challenge is that all we are given is 3 hours of unstructured motion capture data. That is next to nothing, and from this next to nothing, a learning algorithm has to learn to understand these motions so well that it can weave them together even when a specific movement combination is not present in this data. That is quite a challenge. Compared to many other works, this data is really not a lot, so I am excited to see what value we are getting out of these 3 hours. At first I thought we would get only very rudimentary motions, and boy, was I dead wrong on that one. We have control over this character and can perform these elaborate maneuvers, and it remains very responsive even if we mash the controller like a madman, producing these sharp turns. As you see, it can handle these cases really well. And not only that, but it is so well done we can even dribble through a set of obstacles, leading to a responsive and enjoyable gameplay. Now, about these dribbling behaviors, do we get only one boring motion or not? Not at all, it was able to mine out not just one but many kinds of dribbling motions and is able to weave them into other moves as soon as we interact with the controller. This is already very convincing, especially from just 3 hours of unstructured motion capture data. But this paper is just getting started. Now hold on to your papers, because we can also shoot and catch the ball and move it around, which is very surprising because it has looked at so little shooting data. Let's see. Yes, less than 7 minutes. My goodness, and it keeps going. What I have found even more surprising is that it can handle unexpected movements, which is even more remarkable given the limited training data. These crazy corner cases are typically learnable when they are available in abundance in the training data, which is not the case here. Amazing. When we compare these motions to a previous method, we see that both the character and the ball's movement is much more lively. For instance, here you can see that the phase-functioned neural network, PFNN in short, almost makes it seem like the ball has to stick to the hand of the player for an unhealthy amount of time to be able to create these motions. This doesn't happen at all with the new technique, and remember, this new method is also much more responsive to the player's controls and thus more enjoyable not only to look at, but to play with.
This is an aspect that is hard to measure but it is not to be underestimated in the general playing experience. Just imagine what this research area will be capable of not in a decade but just two more papers down the line. Loving it. Now at the start of the video I noted that the authors carved out a small use case which is training an AI to weave together basketball motion capture data in a manner that is both realistic and controllable. However, many times in research we look at a specialized problem and during that journey we learn general concepts that can be applied to other problems as well. That is exactly what happened here as you see parts of this technique can be generalized for quadruped control as well. Because good boy is pacing and running around beautifully. And you guessed right our favorite biped from the previous paper is also making an introduction. I am absolutely spellbound by this work and I hope that now you are too. Can't wait to see this implemented in newer games and other real time applications. What a time to be alive. This episode has been supported by weights and biases. In this post they show you how to use their system with PyTorch lightning and decouple your science code from your engineering code and visualize your models. You can even try an example in an interactive notebook through the link in the video description. Weight and biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you have an open source academic or personal project you can use their tools for free. It really is as good as it gets. Make sure to visit them through wnb.com slash papers or click the link in the video description to start tracking your experiments in five minutes. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is two minute papers with Dr. Kato Jona-Ifahir. Approximately two years ago, we covered the work where a learning-based algorithm was able to read the Wi-Fi signals in a room to not only locate a person in a building, but even estimate their pose. An additional property of this method was that, as you see here, it does not look at images, but radio signals which also traverse in the dark and therefore, this pose estimation also works well in poor lighting conditions. Today's paper offers a learning-based method for a different, more practical data completion problem and it mainly works on image sequences. You see, we can give this one a short image sequence with obstructions, for instance, the fence here. And it is able to find and remove this obstruction, and not only that, but it can also show us what is exactly behind the fence. How is that even possible? Well, note that we mentioned that the input is not an image, but an image sequence, a short video, if you will. This contains the scene from different viewpoints and is one of the typical cases where if we would give all this data to a human, this human would take a long, long time, but would be able to reconstruct what is exactly behind the fence because this data was visible from other viewpoints. But of course, clearly, this approach would be prohibitively slow and expensive. The cool thing here is that this learning-based method is capable of doing this automatically. But it does not stop there. I was really surprised to find out that it even works for video outputs as well, so if you did not have a clear sight of the tiger in the zoo, do not despair. Just use this method and there you go. When looking at the results of techniques like this, I always try to look only at the output and try to guess where the fence was, obstructing it. With many simpler image-impainting techniques, this is easy to tell if you look for it, but here I can't see a trace. Can you? Let me know in the comments. Admittedly, the resolution of this video is not very high, but the results look very reassuring. This can also perform reflection removal and some of the input images are highly contaminated by these reflected objects. Let's have a look at some results. You can see here how the technique decomposes the input into two images, one with the reflection and one without. The results are clearly not perfect, but they are easily good enough to make my brain focus on the real background without being distracted by the reflections. This was not the case with the input at all. Bravo! This use case can also be extended for videos and I wonder how much temporal coherence I can expect in the output. In other words, if the technique solves the adjacent frames too differently, flickering is introduced in the video and this effect is the bane of many techniques that are otherwise really good on still images. Let's have a look. There is a tiny bit of flickering, but the results are surprisingly consistent. It also does quite well when compared to previous methods, especially when we are able to provide multiple images as an input. Now note that it says hours without online optimization. What could that mean? This online optimization step is a computationally expensive way to further improve separation in the outputs and with that the authors propose a quicker and a slower version of the technique. This one, without the online optimization step, runs in just a few seconds and if we add this step we will have to wait approximately 15 minutes. 
I had to read the table several times, because researchers typically bring the best version of their technique to the comparisons, and that is not the case here. Even the quicker version smokes the competition. Loving it. Note that if you have a look at the paper in the video description, there are, of course, more detailed comparisons against other methods as well. If these AR glasses that we hear so much about come to fruition in the next few years, having an algorithm for real-time glare, reflection, and obstruction removal would be beyond amazing. We really live in a science fiction world. What a time to be alive. This episode has been supported by Snap Inc. What you see here is Snap ML, a framework that helps you bring your own machine learning models to Snapchat's AR Lenses. You can build augmented reality experiences for Snapchat's hundreds of millions of users and help them see the world through a different lens. You can also apply to Snap's AR Creator Residency program with a proposal of how you would use Lens Studio for a creative project. If selected, you could receive a grant of between 1 and 5,000 dollars and work with Snap's technical and creative teams to bring your ideas to life. It doesn't get any better than that. Make sure to go to the link in the video description and apply for their Residency program and try Snap ML today. Our thanks to Snap Inc for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we have a selection of learning-based techniques that can generate images of photorealistic human faces for people that don't exist. These techniques have come a long way over the last few years, so much so that we can now even edit these images to our liking by, for instance, putting a smile on their faces, making them older or younger, adding or removing a beard, and more. However, most of these techniques are still lacking in two things. One is diversity of outputs, and two, generalization to multiple domains. Typically, the ones that work on multiple domains don't perform too well on most of them. This new technique is called StarGAN v2 and addresses both of these issues. Let's start with the humans. In the footage here, you see a lot of interpolation between test subjects, which means that we start out from a source person and generate images that morph them into the target subjects, not in just any way, but in a way where all of the intermediate images are believable. In these results, many attributes from the input subject, such as pose, nose type, mouth shape, and position, are also reflected on the output. I like how the motion of the images on the left reflects the state of the interpolation. As this slowly takes place, we can witness how the reference person grows out a beard. But we are not nearly done yet. We noted that another great advantage of this technique is that it works for multiple domains, and this means, of course, none other than us looking at cats morphing into dogs and other animals. In these cases, I see that the algorithm picks up the gaze direction, so this generalizes even to animals. That's great. What is even more great is that the face shape of the tiger appears to have been translated to the photo of this cat, and if we have a bigger cat as an input, the output will also give us this lovely and a little plump creature. And look, here the cat in the input is occluded in this target image, but that is not translated to the output image. The AI knows that this is not part of the cat, but an occlusion. Imagine what it would take to prepare a handcrafted algorithm to distinguish these features. My goodness. And now, onto dogs. What is really cool is that in this case, bent ears have their own meaning, and we get several versions of the same dog breed with or without them. And it can handle a variety of other animals too. I could look at these all day. And now, to understand why this works so well, we first have to understand what a latent space is. Here you see an example of a latent space that was created to be able to browse through fonts and even generate new ones. This method essentially tries to look at a bunch of already existing fonts and tries to boil them down into the essence of what makes them different. It is a simpler, often incomplete, but more manageable representation for a given domain. This domain can be almost anything; for instance, you see another technique that does something similar with material models. Now, the key difference in this new work compared to previous techniques is that it creates not one latent space, but several of these latent spaces for different domains. As a result, it can not only generate images in all of these domains, but can also translate different features, for instance ears, eyes, and noses, from a cat to a dog or a cheetah in a way that makes sense. And the results look like absolute witchcraft.
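As a rough illustration of what interpolating in a latent space means in practice, here is a minimal sketch. The latent codes, their dimensionality, and the `generator` call are all illustrative placeholders rather than StarGAN v2's actual interface; the idea is simply that walking between two codes in small, even steps and decoding each intermediate code is what produces these smooth morphs.

```python
import numpy as np

def interpolate_codes(z_source, z_target, steps=8):
    """Walk from one latent code to another in small, even steps.

    Each intermediate code can be fed to a generator network to get one
    frame of the morphing sequence. The generator itself is only hinted
    at below; this sketch is about the latent-space walk.
    """
    alphas = np.linspace(0.0, 1.0, steps)
    return [(1.0 - a) * z_source + a * z_target for a in alphas]

rng = np.random.default_rng(0)
z_a = rng.standard_normal(64)  # assumed 64-dimensional codes, purely illustrative
z_b = rng.standard_normal(64)

codes = interpolate_codes(z_a, z_b)
# frames = [generator(z) for z in codes]  # hypothetical generator call
print(f"{len(codes)} intermediate codes, each of shape {codes[0].shape}")
```

Having a separate latent space per domain, as this paper does, then lets the same kind of walk carry features across cats, dogs, and cheetahs instead of staying within one domain.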
Now since the look on this cheetah's face indicates that it has had enough of this video, just one more example before we go. As a possible failure case, have a look at the ears of this cat. It seems to be in a peculiar, midway land between a pointy and a bent ear, but it doesn't quite look like any of them. What do you think? Maybe some of you cat people can weigh in on this. Let me know in the comments. What you see here is an instrumentation of this exact paper we have talked about, which was made by weights and biases. I think organizing these experiments really showcases the usability of their system. Weight and biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you have an open source, academic, or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wnbe.com slash papers, or click the link in the video description to start tracking your experiments in five minutes. Our thanks to weights and biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is 2 Minute Papers with Dr. Károly Zsolnai-Fehér. We hear more and more about RGBD images these days. These are photographs that are endowed with depth information, which enables us to do many wondrous things. For instance, this method was used to endow self-driving cars with depth information and worked reasonably well. And this other one provides depth maps that are so consistent we can even add some AR effects to them, and today's paper is going to show us what 3D photography is. However, first we need not only color but depth information in our images to perform this. You see, phones with depth scanners already exist, and even more are coming as soon as this year. But even if you have a device that only gives you 2D color images, don't despair, there is plenty of research on how we can estimate these depth maps even if we have very limited information. And with proper depth information, we can now create these 3D photographs where we get even more information out of one still image. We can look behind objects and see things that we wouldn't see otherwise. Beautiful parallax effects appear as objects at different distances move different amounts as we move the camera around. You see that the foreground changes a great deal, the buildings in the background less so, and the hills behind them even less so. These photos truly come alive with this new method. An earlier algorithm, the legendary PatchMatch method from more than a decade ago, could perform something that we call image inpainting. Image inpainting means looking at what we see in these images and trying to fill in missing information with data that makes sense. The key difference here is that this new technique uses a learning method and does this image inpainting in 3D, and it not only fills in color, but depth information as well. What a crazy, amazing idea. However, this is not the first method to perform this, so how does it compare to other research works? Let's have a look together. Previous methods have a great deal of warping and distortions on the bathtub here. And if you look at the new method, you see that it is much cleaner. There is still a tiny bit of warping, but it is significantly better. The dog head here with this previous method seems to be bobbing around a great deal. The other methods also have some problems with it, look at these two. And if you look at how the new method handles it, it is significantly more stable, and you see that these previous techniques are from just one or two years ago. It is unbelievable how far we have come since. Bravo. So this was a qualitative comparison, or in other words, we looked at the results. What about the quantitative differences? What do the numbers say? Look at the PSNR column here; this means the peak signal-to-noise ratio. This is subject to maximization, as the up arrow denotes here. The higher, the better. The difference is between one half and two and a half points when compared to previous methods, which does not sound like a lot at all. So what happened here? Note that PSNR is not a linear but a logarithmic scale. So this means that a small numeric difference typically translates to a great deal of difference in the images, even if the numeric difference is just 0.5 points on the PSNR scale. However, if you look at SSIM, the structural similarity metric, all of them are quite similar, and the previous technique appears to be even winning here.
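To make the PSNR point above concrete, here is a minimal sketch, assuming images with values in the 0 to 1 range. Because of the logarithm, what looks like a tiny difference in decibels corresponds to a real reduction in the mean squared error between the two images.

```python
import numpy as np

def psnr(reference, test, max_value=1.0):
    """Peak signal-to-noise ratio in decibels (higher is better)."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_value**2 / mse)

rng = np.random.default_rng(0)
clean = rng.random((64, 64))  # stand-in for a ground-truth image

for noise in [0.05, 0.045, 0.025]:
    noisy = clean + rng.normal(0.0, noise, clean.shape)
    print(f"noise std {noise:.3f} -> PSNR {psnr(clean, noisy):.2f} dB")

# Every extra 0.5 dB corresponds to roughly 12% lower mean squared error,
# so a small-looking number can hide a visible quality gap.
print(10 ** (0.5 / 10))
```

With that in mind, back to the comparison.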
But this was the method that warped the dog head, and in the individual comparisons, the new method came out significantly better than this. So what is going on here? Well, have a look at this metric, LPIPS, which was developed at UC Berkeley, OpenAI, and Adobe Research. At the risk of simplifying the situation, this uses a neural network to look at an image and uses its inner representation to decide how close the two images are to each other. And loosely speaking, it kind of thinks about the differences as we humans do and is an excellent tool to compare images. And sure enough, this also concludes that the new method performs best. However, this method is still not perfect. There is some flickering going on behind these fences. The transparency of the glass here isn't perfect, but witnessing this huge leap in the quality of results in such little time is truly a sight to behold. What a time to be alive! I started this series to make people feel how I feel when I read these papers, and I really hope that it goes through with this paper. Absolutely amazing! What is even more amazing is that with a tiny bit of technical knowledge, you can run the source code in your browser, so make sure to have a look at the link in the video description. Let me know in the comments how it went. What you see here is an instrumentation of this exact paper we have talked about, which was made by weights and biases. I think organizing these experiments really showcases the usability of their system. Weights and Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you have an open source, academic or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wnb.com slash papers or click the link in the video description to start tracking your experiments in five minutes. Our thanks to weights and biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support. And I'll see you next time.
|
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. When watching science fiction movies, we often encounter crazy devices and technologies that don't really exist or sometimes, ones that are not even possible to make. For instance, reconstructing sound from vibrations would be an excellent example of that and could make a great novel with the Secret Service trying to catch dangerous criminals. Except that it has already been done in real-life research. I think you can imagine how surprised I was when I first saw this paper in 2014 that showcased a result where a camera looks at this bag of chips, and from these tiny, tiny vibrations, it could reconstruct the sounds in the room. Let's listen. Mary had a little lamb whose fleece was white as snow and everywhere that Mary went, that lamb was stored to go. I was wrong in looking at these beautiful photos of her and everywhere that Mary was, I was stored to go. Yes, this indeed sounds like science fiction. But 2014 was a long, long time ago, and since then we have a selection of powerful learning algorithms, and the question is, what is the next idea that sounded completely impossible a few years ago, but which is now possible? Well, what about looking at silent footage of a speaker and trying to guess what they were saying? Checkmark. That sounds absolutely impossible to me, yet this new technique is able to produce this entire speech after looking at video footage of the lip movements. Let's listen. Between the wavelength, the frequency and the speed of electromagnetic radiation. In fact, the product of the wavelength and the frequency is its speed. Wow. So, the first question is, of course, what was used as the training data? It used a dataset with lecture videos and chess commentary from five speakers, and make no mistake, it takes a ton of data from these speakers, about 20 hours from each, but it uses video that was shot in a natural setting, which is something that we have in abundance on YouTube and other places on the internet. Note that the neural network works on the same speakers it was trained on and was able to learn their gestures and lip movements remarkably well. However, this is not the first work attempting to do this, so let's see how it compares to the competition. Set white with i7 soon. Set white with i7 soon. The new one is very close to the true spoken sentence. Let's look at another one. Erfendowance of action-famed decision. Eight field guns were captured in position. Note that there are gestures, a reasonable amount of head movement, and other factors at play, and the algorithm still does amazingly well. Potential applications of this could be video conferencing in zones where we have to be silent, giving a voice to people with the inability to speak due to aphonia or other conditions, or potentially fixing a piece of video footage where parts of the speech signal are corrupted. In these cases, the gaps could be filled with such a technique. Look. Let's look at a cell potential of 0.5 volts for an oxidation of bromide by permanganate. The question I have is what pH would cause this voltage? Would it be a pH? Now, let's have a look under the hood. If we visualize the activations within this neural network, we see that it found out that it mainly looks at the mouth of the speaker. That is, of course, not surprising. However, what is surprising is that the other regions, for instance, around the forehead and eyebrows, are also important to the attention mechanism.
Perhaps this could mean that it also looks at the gestures of the speaker and uses that information for the speech synthesis. I find this aspect of the work very intriguing and would love to see some additional analysis on that. There is so much more in the paper, for instance, I mentioned giving a voice to people with aphonia, which should not be possible because we are training these neural networks for a specific speaker, but with an additional speaker embedding step, it is possible to pair up any speaker with any voice. This is another amazing work that makes me feel like we are living in a science fiction world. I can only imagine what we will be able to do with this technique two more papers down the line. If you have any ideas, feel free to speculate in the comments section below. What a time to be alive! This episode has been supported by Snap Inc. What you see here is Snap ML, a framework that helps you bring your own machine learning models to Snapchat's AR Lenses. You can build augmented reality experiences for Snapchat's hundreds of millions of users and help them see the world through a different lens. You can also apply to Snap's AR Creator Residency program with a proposal of how you would use Lens Studio for a creative project. If selected, you could receive a grant of between 1 and 5,000 dollars and work with Snap's technical and creative teams to bring your ideas to life. It doesn't get any better than that. Make sure to go to the link in the video description and apply for their residency program and try Snap ML today. Our thanks to Snap Inc for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is 2 Minute Papers with Dr. Karajol Naifahir. I think this might be it. This paper is called Monte Carlo Geometry Processing and in my opinion this may be one of the best if not the best computer graphics paper of the year. There will be so many amazing results I first thought this one paper could be the topic of 10 2 Minute Papers videos but I will attempt to cram everything into this one video. It is quite challenging to explain so please bear with me I'll try my best. To even have a fighting chance at understanding what is going on here first we start out with one of the most beautiful topics in computer graphics which is none other than light transport. To create a beautiful light simulation we have to solve something that we call the rendering equation. For practical cases it is currently impossible to solve it like we usually solve any other equation. However, what we can do is that we can choose a random ray, simulate its path as it bounces around in the scene and compute how it interacts with the objects and materials within this scene. As we do it with more and more rays we get more information about what we see in the scene that's good but look it is noisy. As we add even more rays this noise slowly evens out and in the end we get a perfectly clean image. This takes place with the help of a technique that we call Monte Carlo integration which involves randomness. At the risk of oversimplifying the situation in essence this technique says that we cannot solve the problem but we can take samples from the problem and if we do it in a smart way eventually we will be able to solve it. In light transport the problem is the rendering equation which we cannot solve but we can take samples one sample is simulating one ray. If we have enough rays we have a solution. However, it hasn't always been like this. Before Monte Carlo integration light transport was done through a technique called radiosity. The key issue with radiosity was that the geometry of the scene had to be sliced up into many small pieces and the light scattering events had to be evaluated between these small pieces. It could not handle all light transport phenomena and the geometry processing part was a major headache and Monte Carlo integration was a revelation that breathed new life into this field. However there are still many geometry processing problems that include these headaches and hold on to your papers because this paper shows us that we can apply Monte Carlo integration to many of these problems too. For instance one it can resolve the rich external and internal structure of this end. With traditional techniques this would normally take more than 14 hours and 30 gigabytes of memory but if we apply Monte Carlo integration to this problem we can get a somewhat noisy preview of the result in less than one minute. Of course over time as we compute more samples the noise clears up and we get this beautiful final result. And the concept can be used for so much more it truly makes my head spin. Let's discuss six amazing applications while noting that there are so many more in the paper which you can and should check out in the video description. For instance two it can also compute a CT scan of the infamous shovel nose frog that you see here and instead of creating the full 3D solution we only have to compute a 2D slice of it which is much much cheaper. 3. 
It can also edit these curves, and note that the key part is that we can do that without the major headache of creating an intermediate triangle mesh geometry for it. 4. It also supports denoising techniques, so we don't have to compute too many samples to get a clear image or piece of geometry. 5. Performing Helmholtz-Hodge decomposition with this method is also possible. This is a technique that is used widely in many domains; for instance, it is responsible for ensuring the stability of many fluid simulation programs, and this technique can also produce these decompositions. And interestingly, here it is used to represent 3D objects without the usual triangle meshes that we use in computer graphics. 6. It supports multiple importance sampling as well. This means that if we have multiple sampling strategies, multiple ways to solve a problem that have different advantages and disadvantages, it combines them in a way that we get the best of all of them. We had a mega episode on multiple importance sampling, it has lots of amazing uses in light transport simulations, so if you would like to hear more, make sure to check that out in the video description. But wait, these are all difficult problems. One surely needs a PhD and years of experience in computer graphics to implement this, right? When seeing a work like this, we often ask, okay, it does something great, but how complex is it? How many days do I have to work to re-implement it? Please take a guess and let me know what the guess was in the comments section. And now, what you see here is none other than the source code for the core of the method, and what's even more, a bunch of implementations of it already exist. And if you see that the paper has been re-implemented around day one, you know it's good. So no wonder this paper has been accepted to SIGGRAPH, perhaps the most prestigious computer graphics conference. It is immensely difficult to get a paper accepted there, and I would say this one more than earned it. Huge congratulations to the first author of the paper, Rohan Sawhney; he is currently a PhD student, and note that this was his second published paper. Unreal. Such a great leap for the field in just one paper. Also congratulations to Professor Keenan Crane, who advised this project and many other wonderful works in the last few years. I cannot wait to see what they will be up to next, and I hope that now you are just as excited. This episode has been supported by Linode. Linode is the world's largest independent cloud computing provider. Linode gives you full backend access to your server, which is your step up to powerful, fast, fully configurable cloud computing. Linode also has one-click apps that streamline your ability to deploy websites, personal VPNs, game servers and more. If you need something as small as a personal online portfolio, Linode has your back, and if you need to manage tons of clients' websites and reliably serve them to millions of visitors, Linode can do that too. What's more, they offer affordable GPU instances featuring the Quadro RTX 6000, which is tailor-made for AI, scientific computing and computer graphics projects. If only I had access to a tool like this while I was working on my last few papers. To receive $20 in credit on your new Linode account, visit linode.com slash papers or click the link in the video description and give it a try today. Thanks to Linode for supporting the series and helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
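As a small addendum to this episode, here is a toy illustration of the kind of Monte Carlo estimator this family of methods is built around, often described as a walk on spheres. This is emphatically not the paper's code, just a minimal 2D sketch, assuming a unit-disk domain and a simple boundary condition, so you can see the noisy-then-clean behaviour for yourself.

```python
import math
import random

def walk_on_spheres(x, y, boundary_value, eps=1e-3, walks=5000):
    """Monte Carlo estimate of a harmonic function inside the unit disk.

    Each walk repeatedly jumps to a random point on the largest circle
    around the current point that still fits inside the domain; once we
    are within `eps` of the boundary, we read off the boundary value.
    Averaging many walks estimates the solution of the Laplace equation
    at (x, y): noisy with few samples, cleaner as we add more, just like
    the images in the video.
    """
    total = 0.0
    for _ in range(walks):
        px, py = x, y
        while True:
            d = 1.0 - math.hypot(px, py)  # distance to the disk boundary
            if d < eps:
                break
            theta = random.uniform(0.0, 2.0 * math.pi)
            px += d * math.cos(theta)
            py += d * math.sin(theta)
        r = math.hypot(px, py)
        total += boundary_value(px / r, py / r)  # nearest boundary point
    return total / walks

# Boundary data g(x, y) = x; its harmonic extension into the disk is simply
# u(x, y) = x, so the estimate at (0.3, 0.2) should hover around 0.3.
print(walk_on_spheres(0.3, 0.2, lambda bx, by: bx))
```

No meshing of the interior is needed anywhere in this estimator, which is the property the episode keeps coming back to.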
|
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. When we, humans, look at an image or a piece of video footage, we understand the geometry of the objects in there so well that if we had the time and patience, we could draw a depth map that describes the distance of each object from the camera. This goes without saying. However, what does not go without saying is that if we could teach computers to do the same, we could do incredible things. For instance, this learning-based technique creates real-time defocus effects for virtual reality and computer games, and this one performs the Ken Burns effect in 3D, or in other words, zooms and pans around in a photograph, but with a beautiful twist, because in the meantime, it also reveals the depth of the image. With this data, we can even try to teach self-driving cars about depth perception to enhance their ability to navigate around safely. However, if you look here, you see two key problems. One, it is a little blurry, and there are lots of fine details that it couldn't resolve, and two, it is flickering. In other words, there are abrupt changes from one image to the next one, which shouldn't be there, as the objects in the video feed are moving smoothly. Smooth motion should mean smooth depth maps, and it is getting there, but it is still not the case here. So, I wonder if we could teach a machine to perform this task better. And more importantly, what new wondrous things can we do if we pull this off? This new technique is called consistent video depth estimation, and it promises smooth and detailed depth maps that are of much higher quality than what previous works offer. And now, hold onto your papers, because finally, these maps contain enough detail to open up the possibility of adding new objects to the scene, or even flooding the room with water, or adding many other really cool video effects. All of these will take the geometry of the existing real-world objects, for instance cats, into consideration. Very cool. The reason why we need such a consistent technique to pull this off is because if we have this flickering in time that we've seen here, then the depth of different objects suddenly bounces around over time, even for a stationary object. This means that in one frame the ball would be in front of the person, when in the next one it would suddenly think that it has to put the ball behind them, and then in the next one, in front again, creating a not only jarring but quite unconvincing animation. What is really remarkable is that due to the consistency of the technique, none of that happens here. Love it. Here are some more results where you can see that the outlines of the objects in the depth map are really crisp and follow the changes really well over time. The snowing example here is one of my favorites, and it is really convincing. However, there are still a few spots where we can find some visual artifacts. For instance, as the subject is waving, there is lots of fine, high-frequency data around the fingers there, and if you look at the region behind the head closely, you find some more issues, or you can find that some balls are flickering on the table as we move the camera around. Compare that to previous methods that could not do nearly as well as this, and now we have something that is quite satisfactory. I can only imagine how good this will get two more papers down the line, and in the meantime, we'll be able to run these amazing effects even without having a real depth camera. What a time to be alive.
This episode has been supported by weights and biases. In this post, latent space shows you how reports from weights and biases are central to managing their machine learning workflow and how to use their reports to quickly identify and debug issues with their deep learning models. Wates and biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you have an open source, academic or personal project you can use their tools for free. It really is as good as it gets. Make sure to visit them through wnbe.com slash papers or click the link in the video description to start tracking your experiments in five minutes. Our thanks to weights and biases for their long standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. We have recently explored a few neural network-based learning algorithms that could perform material editing, physics simulations, and more. As some of these networks have hundreds of layers and often thousands of neurons within these layers, they are almost unfathomably complex. At this point, it makes sense to ask, can we understand what is going on inside these networks? Do we even have a fighting chance? Luckily, today, visualizing the inner workings of neural networks is a research subfield of its own, and the answer is, yes, we learn more and more every year. But there is also plenty more to learn. Earlier, we talked about a technique that we called activation maximization, which was about trying to find an input that makes a given neuron as excited as possible. This gives us some cues as to what the neural network is looking for in an image. A later work that proposes visualizing spatial activations gives us more information about the interactions between two or even more neurons. You see here, with the dots, that it provides us with a dense sampling of the most likely activations, and this leads to a more complete, bigger-picture view of the inner workings of the neural network. This is what it looks like if we run it on one image. It also provides us with way more extra value, because so far, we have only seen how the neural network reacts to one image, but this method can be extended to see its reaction to not one, but one million images. You can see an example of that here. Later, it was also revealed that some of these image detector networks can assemble something that we call a pose-invariant dog head detector. What this means is that it can detect a dog head in many different orientations, and look. You see that it gets very excited by all of these good boys, plus this squirrel. Today's technique offers us an excellent tool to look into the inner workings of a convolutional neural network that is very capable of image-related operations, for instance, image classification. The task here is that we have an input image of a mug or a red panda, and the output should be a decision from the network that yes, what we are seeing is indeed a mug or a panda, or not. They apply something that we call a convolutional filter over an image, which tries to find interesting patterns that differentiate objects from each other. You can see how the outputs are related to the input image here. As you see, the neurons in the next layer will be assembled as a combination of the neurons from the previous layer. When we use the term deep learning, we typically refer to neural networks that have two or more of these inner layers. Each subsequent layer is built by taking all the neurons in the previous layer, selecting for the features relevant to what the next neuron represents, for instance, the handle of the mug, and inhibiting everything else. To make this a little clearer, this previous work tried to detect whether we have a car in an image by using these neurons. Here, the upper part looks like a car window, the next one resembles a car body, and the bottom of the third neuron clearly contains a wheel detector. This is the information that the neurons in the next layer are looking for. In the end, we make a final decision as to whether this is a panda or a mug by adding up all the intermediate results. The bluer this part is, the more relevant this neuron is in the final decision.
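To make the convolutional filter idea above a little more tangible, here is a minimal sketch that applies one hand-made 3x3 filter to an image, the same basic operation a convolutional layer performs with its learned filters. The image is a random stand-in and the filter is a classic edge detector, not anything taken from the paper.

```python
import numpy as np
from scipy.signal import convolve2d

image = np.random.rand(64, 64)             # stand-in for a grayscale input image
edge_filter = np.array([[-1., -1., -1.],
                        [-1.,  8., -1.],
                        [-1., -1., -1.]])  # one hand-made 3x3 filter

feature_map = convolve2d(image, edge_filter, mode="same")
activation = np.maximum(feature_map, 0.0)  # ReLU: keep only the positive responses
print(activation.shape)                    # (64, 64): one "neuron map" of the next layer
```

A real network stacks many such filters per layer and learns their values from data, but each individual step is exactly this kind of filtering followed by a nonlinearity.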
Here the neural network concludes that this doesn't look like a lifeboat or a ladybug at all, but it does look like pizza. If we look at the other sums, we see that the school bus and the orange are not hopeless candidates, but still, the neural network does not have much doubt as to whether this is a pizza or not. And the best part is that you can even try it yourself in your browser if you click the link in the video description, run these simulations, and even upload your own image. Make sure that you upload or link something that belongs to one of these classes on the right to make this visualization work. So clearly, there is plenty more work for us to do to properly understand what is going on under the hood of neural networks, but I hope this quick rundown showcased how many facets there are to this neural network visualization subfield and how exciting it is. Make sure to post your experience in the comments section, whether the classification worked well for you or not. And if you wish to see more videos like this, make sure to subscribe and hit the bell icon to not miss future videos. What you see here is an instrumentation for a previous paper that we covered in this series, which was made by weights and biases. I think organizing these experiments really showcases the usability of their system. Weights and biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you have an open source, academic or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description to start tracking your experiments in five minutes. Our thanks to weights and biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, I am going to try to tell you the glorious tale of AI-based human face generation and showcase an absolutely unbelievable new paper in this area. Early in this series, we covered a stunning paper that showcased a system that could not only classify an image, but write a proper sentence on what is going on, and could cover even highly non-trivial cases. You may be surprised, but this thing is not recent at all. This is four-year-old news. Insanity. Later, researchers turned this whole problem around and performed something that was previously thought to be impossible. They started using these networks to generate photorealistic images from a written text description. We could create new bird species by specifying that it should have orange legs and a short yellow bill. Then, researchers at Nvidia recognized and addressed two shortcomings. One was that the images were not that detailed, and two, even though we could input text, we couldn't exert too much artistic control over the results. In came StyleGAN to the rescue, which was then able to perform both of these difficult tasks really well. Furthermore, there are some features that are highly localized as we exert control over these images. You can see how this part of the teeth and eyes were pinned to a particular location, and the algorithm just refuses to let it go, sometimes to the detriment of its surroundings. A follow-up work titled StyleGAN 2 addresses all of these problems in one go. So, StyleGAN 2 was able to perform near perfect synthesis of human faces, and remember, none of these people that you see here really exist. Quite remarkable. So, how can we improve this magnificent technique? Well, this new work can do so many things, I don't even know where to start. First and most important, we now have much more intuitive artistic control over the output images. We can add or remove a beard, make the subject younger or older, change their hairstyle, make their hairline recede, put a smile on their face, or even make their nose pointier. Absolute witchcraft. So, why can we do all this with this new method? The key idea is that it is not using a generative adversarial network, a GAN in short. A GAN means two competing neural networks, where one is trained to generate new images, and the other one is used to tell whether the generated images are real or fake. GANs dominated this field for a long while because of their powerful generation capabilities, but on the other hand, they are quite difficult to train, and we only have limited control over their output. Among other changes, this work disassembles the generator network into F and G, and the discriminator network into E and D, or in other words, adds an encoder and a decoder network here. Why? The key idea is that the encoder compresses the image data down into a representation that we can edit more easily. This is the land of beards and smiles, or in other words, all of these intuitive features that we can edit exist here, and when we are done, we can decompress the output with the decoder network and produce these beautiful images. This is already incredible. But what else can we do with this new architecture? A lot more. For instance, two, if we have source and destination subjects, their coarse, middle, or fine styles can also be mixed. What does that mean exactly? The coarse part means that high-level attributes like pose, hairstyle, and face shape will resemble the source subject.
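The encode, edit in latent space, decode workflow described above can be sketched in a few lines. Everything here is a placeholder: encoder, decoder, and the attribute direction stand in for the trained networks and learned latent directions, and this is not the paper's actual API.

```python
def edit_attribute(image, encoder, decoder, direction, strength=1.0):
    # Compress the image into the editable latent representation.
    w = encoder(image)
    # Move along a learned attribute direction, e.g. "add a smile".
    w_edited = w + strength * direction
    # Decompress back into an image.
    return decoder(w_edited)

# Interpolation between two faces lives in the same space:
# w_mid = (1 - alpha) * encoder(img_a) + alpha * encoder(img_b)
# intermediate_face = decoder(w_mid)
```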
In other words, the child will remain a child and inherit some of the properties of the destination people. However, as we transition to the fine-from-source part, the effect of the destination subject will be stronger, and the source will only be used to change the color scheme and microstructure of the image. Interestingly, it also changes the background of the subject. Three, it can also perform image interpolation. This means that we have these four images as starting points, and it can compute intermediate images between them. You see here that as the subject slowly becomes Bill Gates, somewhere along the way, glasses appear. Now note that interpolating between images is not difficult in the slightest and has been possible for a long, long time. All we need to do is just compute average results between these images. So, what makes a good interpolation process? Well, we are talking about good interpolation when each of the intermediate images makes sense and can stand on its own. I think this technique does amazingly well at that. I'll stop the process at different places so you can see for yourself, and let me know in the comments if you agree or not. I would also like to kindly thank the authors for creating more footage just for us to showcase in this series. That is a huge honor. Thank you so much. Note that StyleGAN 2 appeared around December of 2019, and this paper, by the name Adversarial Latent Autoencoders, appeared only four months later. Four months later. My goodness, this is so much progress in so little time, it truly makes my head spin. What a time to be alive. What you see here is an instrumentation of this exact paper we have talked about, which was made by weights and biases. I think organizing these experiments really showcases the usability of their system. Weights and biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you have an open source, academic or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description to start tracking your experiments in five minutes. Our thanks to weights and biases for their long standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
|
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. Neural network-based learning methods are capable of wondrous things these days. They can do classification, which means that they can look at an image, and the output is a decision whether we see a dog or a cat, or a sentence that describes an image. In the case of DeepMind's AI playing Atari games, the input is the video footage of the game, and the output is an action that decides what we do with our character next. In OpenAI's amazing Jukebox paper, the input was the style of someone, a music genre, and lyrics, and the output was a waveform, or in other words, a song we can listen to. But a few hundred episodes ago, we covered a paper from 2015 where scientists at DeepMind asked the question, what if we would get these neural networks to output not sentences, decisions, waveforms, or any of that sort, what if the output would be a computer program? Can we teach a neural network programming? I was convinced that the answer is no, until I saw these results. So what is happening here? The input is a scratchpad where we are performing multi-digit addition in front of the curious eyes of the neural network. And if it looked on for long enough, it was indeed able to produce a computer program that could eventually perform addition. It could also perform sorting and would even be able to rotate the images of these cars into a target position. It was called a neural programmer-interpreter, and of course, it was slow and a bit inefficient, but no matter, because it could finally make something previously impossible, possible. That is an amazing leap. So why are we talking about this work from 2015? Well, apart from the fact that there are many amazing works that are timeless, and this is one of them, in this series I always say two more papers down the line and it will be improved significantly. So here is the two minute papers moment of truth. How has this area improved with this follow-up work? Let's have a look at this paper from scientists at Nvidia that implements a similar concept for computer games. So how is that even possible? Normally, if we wish to write a computer game, we first envision the game in our mind, then we sit down and do the programming. But this new paper does this completely differently. Now, hold on to your papers, because this is a neural network-based method that first looks at someone playing the game, and then it is able to implement the game so that it not only looks like it, but it also behaves the same way to our key presses. You see it at work here. Yes, this means that we can even play with it, and it learns the internal rules of the game and the graphics just by looking at some gameplay. Note that the key part here is that we are not doing any programming by hand; the entirety of the program is written by the AI. We don't need access to the source code or the internal workings of the game, as long as we can just look at it, it can learn the rules. Everything truly behaves as expected, we can even pick up the capsule and eat the ghosts as well. This already sounds like science fiction, and we are not nearly done yet. There are additional goodies. It has memory and uses it consistently. In other words, things don't just happen arbitrarily. If we return to a state of the game that we visited before, it will remember to present us with very similar information.
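To give the "neural network as a game engine" idea a concrete shape, here is a hedged, toy-sized sketch, assuming flattened frames and one-hot actions; it is emphatically not GameGAN's architecture, only the input-output contract described above: current frame plus key press plus memory in, next frame plus updated memory out.

```python
import torch
import torch.nn as nn

class TinyWorldModel(nn.Module):
    """Toy stand-in: given the current frame, the player's action and a
    memory state, predict the next frame and the updated memory."""
    def __init__(self, frame_dim=1024, action_dim=8, hidden=256):
        super().__init__()
        self.memory_cell = nn.GRUCell(frame_dim + action_dim, hidden)  # keeps game state
        self.renderer = nn.Linear(hidden, frame_dim)                   # "draws" the next frame

    def forward(self, frame, action, memory):
        memory = self.memory_cell(torch.cat([frame, action], dim=-1), memory)
        next_frame = torch.sigmoid(self.renderer(memory))
        return next_frame, memory
```

Training such a model would amount to feeding it recorded gameplay and penalizing the difference between the predicted and the actual next frame.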
It also has an understanding of foreground and background, dynamic and static objects as well, so we can experiment with replacing these parts, thereby re-skinning our games. It still needs quite a bit of data to perform all this, as it has looked at approximately 120 hours of footage of the game being played. However, now something is possible that was previously impossible. And of course, two more papers down the line, this will be improved significantly, I am sure. I think this work is going to be one of those important milestones that remind us that many of the things that we had handcrafted methods for will, over time, be replaced with these learning algorithms. They already know the physics of fluids, or in other words, they are already capable of looking at videos of these simulations and learning the underlying physical laws, and they can demonstrate having learned general knowledge of the rules by being able to continue these simulations even if we change the scene around quite a bit. In light transport research, we also have decades of progress in simulating how rays of light interact with scenes, and we can create these beautiful images. Parts of these algorithms, for instance, noise filtering, are already taken over by AI-based techniques, and I can't help but feel that a bigger tidal wave is coming. This tidal wave will be an entirely AI-driven technique that will write the code for the entirety of the system. Sure, the first ones will be limited, for instance, this is a neural renderer from one of our papers that is limited to this scene and lighting setup, but you know the saying, two more papers down the line and it will be an order of magnitude better. I can't wait to tell you all about it with a video when this happens. Make sure to subscribe and hit the bell icon to not miss any follow-up works. Goodness, I love my job. What a time to be alive! This episode has been supported by weights and biases. In this post, they show you how to visualize molecular structures using their system. Weights and biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you have an open source, academic or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description to start tracking your experiments in five minutes. Our thanks to weights and biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
|
Dear Fellow Scholars, this is not two-minute papers with Dr. Károly Zsolnai-Fehér; due to popular demand, this is a surprise video with the talk of our new paper that we just published. This was the third and last paper in my PhD thesis, and hence this is going to be a one-off video that is longer and a tiny bit more technical, I am keenly aware of it, but I hope you'll enjoy it. Let me know in the comments when you have finished the video, and worry not, all the upcoming videos are going to be in the usual two-minute papers format. The paper and the source code are all available in the video description. And now, let's dive in. In a previous paper, our goal was to populate this scene with over a hundred materials with a learning-based technique and create a beautiful planet with rich vegetation. The results looked like this. One of the key elements to accomplish this was to use a neural renderer, or in other words, the decoder network that you see here, which took a material shader description as an input and predicted its appearance, thereby replacing the renderer we used in the project. It had its own limitations, for instance, it was limited to this scene with a fixed lighting setup, and only the material properties were subject to change. But in return, it mimicked the global illumination renderer rapidly and faithfully. And in this new work, our goal was to take a different vantage point and help artists with general image processing knowledge to perform material synthesis. Now, this sounds a little nebulous, so let me explain. One of the key ideas is to achieve this with a system that is meant to take images from its own renderer, like the ones you see here. But of course, we produce these ourselves, so obviously we know how to do it, so this is not very useful yet. However, the twist is that we only start out with an image of this source material, then load it into a raster image editing program like Photoshop, edit it to our liking, and just pretend that this is achievable with our renderer. As you see, many of these target images in the middle are results of poorly executed edits. For instance, the stitched specular highlight in the first example isn't very well done, and neither is the background of the gold target image in the middle. However, in the next step, our method proceeds to find a photorealistic material description that, when rendered, resembles this target image, and it works well even in the presence of these poorly executed edits. The whole process executes in 20 seconds. To produce a mathematical formulation for this problem, we started with this. We have an input image t and edit it to our liking to get the target image, t with a tilde. Now we are looking for a shader parameter set x that, when rendered with the phi operator, approximates the edited image. The constraint below stipulates that we remain within the physical boundaries for each parameter, for instance, albedos between 0 and 1, proper indices of refraction, and so on. So how do we deal with phi? We used the previously mentioned neural renderer to implement it, otherwise this optimization process would take 25 hours. Later, we made an equivalent, unconstrained reformulation of this problem to be able to accommodate a greater set of optimizers. This all sounds great on paper and works reasonably well for materials that can be exactly matched with this shader, like this one. This optimizer-based solution can achieve it reasonably well.
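As a rough written form of the problem described above (the exact norm and symbols may differ from the paper's notation), the optimization looks for shader parameters whose rendering matches the edited target while staying within physically meaningful bounds:

```latex
\min_{\mathbf{x}} \; \big\lVert \phi(\mathbf{x}) - \tilde{t} \big\rVert
\quad \text{subject to} \quad
\mathbf{x}_{\min} \le \mathbf{x} \le \mathbf{x}_{\max}
```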
But unfortunately, for more challenging cases, as you see the target image on the lower right, the optimizer's output leaves much to be desired. Note again that the result on the upper right is achievable with the shader, while the lower right is a challenging, imaginary material that we are trying to achieve. The fact that this is quite difficult is not a surprise, because we have a nonlinear and non-convex optimization problem, which is also high-dimensional. So this optimization solution is also quite slow, but it can start inching towards the target image. As an alternative solution, we also developed something that we call an inversion network; this addresses the adjoint problem of neural rendering, or in other words, we show it the edited input image, and out comes the shader that would produce this image. We have trained 9 different neural network architectures for this problem, which sounds great, so how well did it work? Well, we found out that none of them are really satisfactory for more difficult edits, because all of the target images are far, far outside of the training domain. We just cannot prepare the networks to be able to handle the rich variety of edits that come from the artist. However, some of them are, one could say, almost usable, for instance, number one and five are not complete failures, and note that these solutions are provided instantly. So we have two techniques, neither of which is perfect for our task: a fast and approximate solution with the inversion networks, and a slower optimizer that can slowly inch towards the target image. Our key insight here is that we can produce a hybrid method that fuses the two solutions together. The workflow goes as follows. We take an image of the initial source material and edit it to our liking to get this target image. Then we create a coarse prediction with a selection of inversion networks to initialize the optimizer with the prediction of one of these neural networks, preferably a good one, so the optimizer can start out from a reasonable initial guess. So how well does this hybrid method work? I'll show you in a moment; here we start out with an achievable target image and then try two challenging image editing operations. This image can be reproduced perfectly as long as the inversion process works reliably. Unfortunately, as you see here, this is not the case. In the first row, using the optimizer and the inversion networks separately, we get results that fail to capture the specular highlight properly. In the second row, we have deleted the specular highlight on the target image on the right and replaced it with a completely different one. I like to call this the Franken-BRDF, and it would be amazing if we could do this, but unfortunately, both the optimizer and the inversion networks flounder. Another thing that would be really nice to do is deleting the specular highlight and filling the image via image inpainting. This kind of works with the optimizer, but you'll see in a moment that it's not nearly as good as it could be. And now, if you look carefully, you see that our hybrid method outperforms both of these techniques in each of the three cases. In the paper, we report results on a dozen more cases as well. We make an even stronger claim in the paper, where we say that these results are close to the global optimum. You can see the results of this hybrid method in the table here, highlighted with the red ellipses. The records in the table show the RMS errors, which are subject to minimization.
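The hybrid workflow above can be summarized in a short, hedged sketch: let the inversion networks propose coarse parameter sets, keep the proposal the neural renderer likes best, then refine it with a gradient-based optimizer. All names are placeholders, the choice of Adam and the uniform [0, 1] clamp are simplifications, and this is not the paper's actual implementation.

```python
import torch

def hybrid_material_fit(target, inversion_nets, neural_renderer, steps=200):
    # 1) Coarse predictions from each inversion network; keep the best one.
    candidates = [net(target) for net in inversion_nets]
    best = min(candidates,
               key=lambda x: torch.norm(neural_renderer(x) - target).item())

    # 2) Refine with an optimizer, starting from that reasonable initial guess.
    x = best.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.norm(neural_renderer(x) - target)
        loss.backward()
        opt.step()
        x.data.clamp_(0.0, 1.0)  # crude stand-in for the per-parameter physical bounds
    return x.detach()
```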
With this, you see that this goes neck and neck with the global optimizer, which is highlighted with green. In summary, our technique runs in approximately 20 seconds and works for specular highlight editing, image blending, stitching, inpainting, and more. The proposed method is robust, works even in the presence of poorly edited images, can be easily deployed in already existing rendering systems, and allows for rapid material prototyping for artists working in the industry. It is also independent of the underlying principled shader, so you can also add your own and expect it to work well, as long as the neural renderer works reliably. A key limitation of the work is that it only takes images in this canonical scene with a carved sphere material sample, but we conjecture that it can be extended to be more general and propose a way to do it in the paper. Make sure to have a closer look if you are interested. The teaser image of this paper is showcased on the 2020 Computer Graphics Forum cover page. The whole thing is also quite simple to implement, and we provide the source code and pre-trained networks on our website, all of them under a permissive license. Thank you so much for watching this, and a big thanks to Peter Wonka and Michael Wimmer for advising this work.
|
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. If you have been watching this series for a while, you know very well that I love learning algorithms and fluid simulations. But do you know what I like even better? Learning algorithms applied to fluid simulations, so I couldn't be happier with today's paper. We can create wondrous fluid simulations like the ones you see here by studying the laws of fluid motion from physics and writing a computer program that contains these laws. As you see, the amount of detail we can simulate with these programs is nothing short of amazing. However, I just mentioned neural networks. If we can write a simulator that runs the laws of physics to create these simulations, why would we need learning-based algorithms? The answer is in this paper that we discussed about 300 episodes ago. The goal was to show a neural network video footage of lots and lots of fluid and smoke simulations and have it learn how the dynamics work, to the point that it can continue and guess how the behavior of a smoke plume would change over time. We stop the video and it would learn how to continue it, if you will. This definitely is an interesting take, as normally we use neural networks to solve problems that are otherwise close to impossible to tackle. For instance, it is very hard, if not impossible, to create a handcrafted algorithm that detects cats reliably, because we cannot really write down the mathematical description of a cat. However, these days we can easily teach a neural network to do that. But this task here is fundamentally different. Here, the neural networks are applied to solve something that we already know how to solve, especially given that if we use a neural network to perform this task, we have to train it, which is a long and arduous process. I hope to have convinced you that this is a bad, bad idea. Why would anyone bother to do that? Does this make any sense? Well, it does make a lot of sense. And the reason for that is that this training step only has to be done once, and afterwards, querying the neural network that predicts what happens next in the simulation runs almost immediately. This takes way less time than calculating all the forces and pressures in the simulation, while retaining high-quality results. So we suddenly went from thinking that an idea is useless to finding it amazing. What are the weaknesses of the approach? Generalization. You see, these techniques, including a newer variant that you see here, can give us detailed simulations in real time or close to real time, but if we present them with something that is far outside of the cases that they had seen in the training domain, they will fail. This does not happen with our handcrafted techniques, only with AI-based methods. So, onwards to this new technique, and you will see in just a moment that the key differentiator here is that its generalization capabilities are just astounding. Look here. The predicted results match the true simulation quite well. Let's look at it in slow motion too, so we can evaluate it a little better. Looking great. But we have talked about superior generalization, so what about that? Well, it can also handle sand and goop simulations, so that's a great step beyond just water and smoke. And now, have a look at this one. This is a scene with the boxes it has been trained on. And now, let's ask it to try to simulate the evolution of significantly different shapes. Wow.
It not only does well with these previously unseen shapes, but it also handles their interactions really well. But there is more. We can also train it on a tiny domain with only a few particles, and then it is able to learn general concepts that we can reuse to simulate a much bigger domain, and also with more particles. Fantastic. But there is even more. We can train it by showing how water behaves on these water ramps, and then let's remove the ramps and see if it understands what it has to do with all these particles. Yes, it does. Now let's give it something more difficult. I want more ramps. Yes. And now, even more ramps. Yes, I love it. Let's see if it can do it with sand too. Here's the ramp for the training. And let's try an hourglass now. Absolute witchcraft. And we are even being paid to do this. I can hardly believe this. The reason why you see so many particles in many of these views is because if we look under the hood, we see that the paper proposes a really cool graph-based method that represents the particles, and they can pass messages to each other over these connections between them. This leads to a simple, general, and accurate model that truly is a force to be reckoned with. Now, this is a great leap in neural network-based physics simulations, but of course, not everything is perfect here. Its generalization capabilities have their limits. For instance, over longer timeframes, solids may get incorrectly deformed. However, I will quietly note that during my college years, I was also studying the beautiful Navier-Stokes equations, and even as a highly motivated student, it took several months to understand the theory and write my first fluid simulator. You can check out the thesis and the source code in the video description if you are interested. And to see that these neural networks could learn something very similar in a matter of days, every time I think about this, shivers run down my spine. Absolutely amazing. What a time to be alive! This episode has been supported by Lambda. If you are a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos, and I'm happy to tell you that they are offering GPU cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser. And finally, hold onto your papers, because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to lambdalabs.com slash papers and sign up for one of their amazing GPU instances today. Thanks for watching and for your generous support, and I'll see you next time.
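The graph-based message passing mentioned above can be sketched as follows; this is a minimal, generic message-passing step over a particle graph in the spirit of the learned simulator, not its actual architecture, and all dimensions are made up for illustration.

```python
import torch
import torch.nn as nn

class ParticleMessagePassing(nn.Module):
    """One generic message-passing step: each particle gathers messages from
    its neighbors and updates its own feature vector."""
    def __init__(self, dim=16):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.node_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, node_feats, edges):
        # edges: (E, 2) integer tensor of (sender, receiver) particle indices
        senders, receivers = edges[:, 0], edges[:, 1]
        messages = self.edge_mlp(
            torch.cat([node_feats[senders], node_feats[receivers]], dim=-1))
        aggregated = torch.zeros_like(node_feats).index_add_(0, receivers, messages)
        return self.node_mlp(torch.cat([node_feats, aggregated], dim=-1))
```

In a learned simulator of this kind, several such steps are stacked, the output is decoded into per-particle accelerations, and a simple integration scheme then steps the particles forward in time.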
|
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, I will attempt to tell you a glorious tale about AI-based music generation. You see, there is no shortage of neural network-based methods that can perform physics simulations, style transfer, deepfakes, and a lot more applications where the training data is typically images or video. If the training data for a neural network is in pure text, it can learn about that. If the training data is waveforms and music, it can learn that too. Wait, really? Yes. In fact, let's look at two examples and then dive into today's amazing paper. In this earlier work, by the name Look, Listen, and Learn, two scientists at DeepMind set out to look at a large number of videos with sound. You see here that there is a neural network for processing the vision and one for the audio information. That sounds great, but what are these heatmaps? These were created by this learning algorithm, and they show us which part of the image is responsible for the sounds that we hear in the video. The hotter the color, the more sound is expected from a given region. What is truly amazing is that it didn't just automatically look for humans and color them red in the heatmap. There are cases where the humans are expected to be the source of the noise, for instance, in concerts, while in other cases, they don't emit any noise at all. It could successfully identify these cases. This still feels like science fiction to me, and we covered this paper in 2017, approximately 250 episodes ago. You will see that we have come a long, long way since. We often say that these neural networks should try to embody general learning concepts. That's an excellent, and in this case, testable statement, so let's go ahead and have a look under the hood of the vision and audio processing neural networks, and, yes, they are almost identical. Some parameters are not the same because they have been adapted to the length and dimensionality of the incoming data, but the key algorithm that we run for the learning is the same. Later, in 2018, DeepMind published a follow-up work that looks at performances on the piano from the masters of the past and learns to play in their style. A key differentiating factor here was that it did not do what most previous techniques do, which was looking at the score of the performance. These older techniques knew what to play, but not how to play these notes, and these are the nuances that truly make music come alive. This method learned from raw audio waveforms and thus could capture much, much more of the artistic style. Let's listen to it, and in the meantime, you can look at the composers it has learned from to produce these works. However, in 2019, OpenAI recognized that score-based music synthesizers can not only look at a piece of score, but can also continue it, thereby composing a new piece of music, and what's more, they could even create really cool blends between genres. Listen, as their AI starts out from the first six notes of the Chopin piece and transitions into a pop style with a bunch of different instruments entering a few seconds in. Very cool. The score-based techniques are a little lacking in nuance, but can do magical genre mixing and more, whereas the waveform-based techniques are more limited, but can create much more sophisticated music. Are you thinking what I am thinking? Yes, you have guessed right.
Hold on to your papers, because in OpenAI's new work, they tried to fuse the two concepts together, or, in other words, take a genre, an artist, and even lyrics as an input, and it would create a song for us. Let's marvel at a few curated samples together. The genre, artist, and lyrics information will always be on the screen. Wow, I am speechless. Love the AI-based lyrics, too. This has the nuance of waveform-based techniques with the versatility of score-based methods. Glorious. If you look in the video description, you will find a selection of uncurated music samples as well. It does what it does by compressing the raw audio waveform into a compact representation. In this space, it is much easier to synthesize new patterns, after which we can decompress it to get the output waveforms. It has also learned to group up and cluster a selection of artists, which reflects how the AI thinks about them. There is so much cool stuff in here that it would be worthy of a video of its own. Note that it currently takes 9 hours to generate 1 minute of music, and the network was mainly trained on western music and only speaks English, but you know, as we always say around here, two more papers down the line, and it will be improved significantly. I cannot wait to report on them, should any follow-up works appear, so make sure to subscribe and hit the bell icon to not miss it. What a time to be alive. What you see here is an instrumentation of this exact paper we have talked about, which was made by weights and biases. I think organizing these experiments really showcases the usability of their system. Weights and biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you are an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to weights and biases for their long standing support, and for helping us make better videos for you. Thanks for watching, and for your generous support, and I'll see you next time.
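The compress-then-generate pipeline described above can be outlined with a hedged pseudocode-style sketch; every name is a placeholder and the budget of 50 codes per second is invented for illustration, so this is only the shape of the idea, not OpenAI's implementation.

```python
import torch

def generate_song(prior, decoder, conditioning, seconds=60, codes_per_second=50):
    # A prior model writes new discrete codes in the compressed space,
    # conditioned on genre, artist and lyrics ...
    codes = []
    for _ in range(seconds * codes_per_second):
        codes.append(prior.sample_next(conditioning, codes))  # autoregressive step
    # ... and a decoder turns those codes back into an audio waveform.
    return decoder(torch.tensor(codes))
```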
|