ManimML_helblazer811/LICENSE.md
|
MIT License
Copyright (c) 2022 Alec Helbling
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
ManimML_helblazer811/setup.py
|
from setuptools import setup, find_packages
setup(
name="manim_ml",
version="0.0.17",
description=("Machine Learning Animations in python using Manim."),
packages=find_packages(),
)
|
ManimML_helblazer811/Readme.md
|
# ManimML
<a href="https://github.com/helblazer811/ManimMachineLearning">
<img src="assets/readme/ManimMLLogo.gif">
</a>
[License](https://github.com/helblazer811/ManimMachineLearning/blob/main/LICENSE.md)
![GitHub release](https://img.shields.io/github/v/release/helblazer811/ManimMachineLearning)
[Downloads](https://pepy.tech/project/manim-ml)
ManimML is a project focused on providing animations and visualizations of common machine learning concepts with the [Manim Community Library](https://www.manim.community/). Please check out [our paper](https://arxiv.org/abs/2306.17108). We want this project to be a compilation of primitive visualizations that can be easily combined to create videos about complex machine learning concepts. Additionally, we want to provide a set of abstractions which allow users to focus on explanations instead of software engineering.
*A sneak peek ...*
<img src="assets/readme/convolutional_neural_network.gif">
## Table of Contents
- [ManimML](#manimml)
- [Table of Contents](#table-of-contents)
- [Getting Started](#getting-started)
- [Installation](#installation)
- [First Neural Network](#first-neural-network)
- [Guide](#guide)
- [Setting Up a Scene](#setting-up-a-scene)
- [A Simple Feed Forward Network](#a-simple-feed-forward-network)
- [Animating the Forward Pass](#animating-the-forward-pass)
- [Convolutional Neural Networks](#convolutional-neural-networks)
- [Convolutional Neural Network with an Image](#convolutional-neural-network-with-an-image)
- [Max Pooling](#max-pooling)
- [Activation Functions](#activation-functions)
- [More Complex Animations: Neural Network Dropout](#more-complex-animations-neural-network-dropout)
- [Citation](#citation)
## Getting Started
### Installation
First you will want to [install manim](https://docs.manim.community/en/stable/installation.html). Make sure it is the Manim Community edition, not the original 3Blue1Brown version.
Then install the package from source or with
`pip install manim_ml`. Note: some recent features may only be available if you install from source.
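If you go the source route, a typical editable install might look like the following (a sketch; the repository URL is taken from the badges above):
```bash
$ git clone https://github.com/helblazer811/ManimMachineLearning.git
$ cd ManimMachineLearning
$ pip install -e .
```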
### First Neural Network
This is a visualization of a Convolutional Neural Network. The code needed to generate this visualization is shown below.
```python
from manim import *
from manim_ml.neural_network import Convolutional2DLayer, FeedForwardLayer, NeuralNetwork
# This changes the resolution of our rendered videos
config.pixel_height = 700
config.pixel_width = 1900
config.frame_height = 7.0
config.frame_width = 7.0
# Here we define our basic scene
class BasicScene(ThreeDScene):
# The code for generating our scene goes here
def construct(self):
# Make the neural network
nn = NeuralNetwork([
Convolutional2DLayer(1, 7, 3, filter_spacing=0.32),
Convolutional2DLayer(3, 5, 3, filter_spacing=0.32),
Convolutional2DLayer(5, 3, 3, filter_spacing=0.18),
FeedForwardLayer(3),
FeedForwardLayer(3),
],
layer_spacing=0.25,
)
# Center the neural network
nn.move_to(ORIGIN)
self.add(nn)
# Make a forward pass animation
forward_pass = nn.make_forward_pass_animation()
# Play animation
self.play(forward_pass)
```
You can generate the video above by copying the code into a file called `example.py` and running the following in your command line (assuming everything is installed properly):
```bash
$ manim -pql example.py
```
The above generates a low-resolution rendering; you can improve the resolution (at the cost of rendering speed) by running:
```bash
$ manim -pqh example.py
```
<img src="assets/readme/convolutional_neural_network.gif">
## Guide
This is a more in-depth guide showing how to use various features of ManimML. (Note: ManimML is still under development, so some features may change and documentation is lacking.)
### Setting Up a Scene
In Manim all of your visualizations and animations belong inside of a `Scene`. You can make a scene by extending the `Scene` class, or the `ThreeDScene` class if your animation has 3D content (as our example does). Add the following code to a Python module called `example.py`.
```python
from manim import *
# Import modules here
class BasicScene(ThreeDScene):
def construct(self):
# Your code goes here
text = Text("Your first scene!")
self.add(text)
```
In order to render the scene we will run the following in the command line:
```bash
$ manim -pql example.py
```
<img src="assets/readme/setting_up_a_scene.png">
This will generate an image file in low quality (use `-qh` for high quality).
For the rest of the tutorial the code snippets will need to be copied into the body of the `construct` function.
### A Simple Feed Forward Network
With ManimML we can easily visualize a simple feed forward neural network.
```python
from manim_ml.neural_network import NeuralNetwork, FeedForwardLayer
nn = NeuralNetwork([
FeedForwardLayer(num_nodes=3),
FeedForwardLayer(num_nodes=5),
FeedForwardLayer(num_nodes=3)
])
self.add(nn)
```
In the above code we create a `NeuralNetwork` object and pass a list of layers to it. For each feed forward layer we specify the number of nodes. ManimML will automatically piece together the individual layers into a single neural network. We call `self.add(nn)` in the body of the scene's `construct` method in order to add the neural network to the scene.
The majority of ManimML neural network objects and functions can be imported directly from `manim_ml.neural_network`.
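For example, every layer type used in this guide can be pulled in with a single import:
```python
from manim_ml.neural_network import (
    NeuralNetwork,
    FeedForwardLayer,
    Convolutional2DLayer,
    ImageLayer,
    MaxPooling2DLayer,
)
```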
We can now render a still frame image of the scene by running:
```bash
$ manim -pql example.py
```
<img src="assets/readme/a_simple_feed_forward_neural_network.png">
### Animating the Forward Pass
We can automatically render the forward pass of a neural network by creating the animation with the `neural_network.make_forward_pass_animation` method and playing the animation in our scene with `self.play(animation)`.
```python
from manim_ml.neural_network import NeuralNetwork, FeedForwardLayer
# Make the neural network
nn = NeuralNetwork([
FeedForwardLayer(num_nodes=3),
FeedForwardLayer(num_nodes=5),
FeedForwardLayer(num_nodes=3)
])
self.add(nn)
# Make the animation
forward_pass_animation = nn.make_forward_pass_animation()
# Play the animation
self.play(forward_pass_animation)
```
We can now render with:
```bash
$ manim -pql example.py
```
<img src="assets/readme/animating_the_forward_pass.gif">
### Convolutional Neural Networks
ManimML supports visualizations of Convolutional Neural Networks. You can specify the number of feature maps, feature map size, and filter size as follows: `Convolutional2DLayer(num_feature_maps, feature_map_size, filter_size)`. There are a number of other style parameters that we can change as well (documentation coming soon).
Here is a multi-layer convolutional neural network. If you are unfamiliar with convolutional networks [this overview](https://cs231n.github.io/convolutional-networks/) is a great resource. Additionally, [CNN Explainer](https://poloclub.github.io/cnn-explainer/) is a great interactive tool for understanding CNNs, all in the browser.
When specifying CNNs, it is important that the feature map sizes and filter dimensions of adjacent layers match up.
```python
from manim_ml.neural_network import NeuralNetwork, FeedForwardLayer, Convolutional2DLayer
nn = NeuralNetwork([
Convolutional2DLayer(1, 7, 3, filter_spacing=0.32), # Note the default stride is 1.
Convolutional2DLayer(3, 5, 3, filter_spacing=0.32),
Convolutional2DLayer(5, 3, 3, filter_spacing=0.18),
FeedForwardLayer(3),
FeedForwardLayer(3),
],
layer_spacing=0.25,
)
# Center the neural network
nn.move_to(ORIGIN)
self.add(nn)
# Make a forward pass animation
forward_pass = nn.make_forward_pass_animation()
```
We can now render with:
```bash
$ manim -pql example.py
```
<img src="assets/readme/convolutional_neural_network.gif">
And there we have it, a convolutional neural network.
### Convolutional Neural Network with an Image
We can also animate an image being fed into a convolutional neural network by specifying an `ImageLayer` before the first convolutional layer.
```python
import numpy as np
from PIL import Image
from manim_ml.neural_network import NeuralNetwork, FeedForwardLayer, Convolutional2DLayer, ImageLayer
image = Image.open("digit.jpeg") # You will need to download an image of a digit.
numpy_image = np.asarray(image)
nn = NeuralNetwork([
ImageLayer(numpy_image, height=1.5),
Convolutional2DLayer(1, 7, 3, filter_spacing=0.32), # Note the default stride is 1.
Convolutional2DLayer(3, 5, 3, filter_spacing=0.32),
Convolutional2DLayer(5, 3, 3, filter_spacing=0.18),
FeedForwardLayer(3),
FeedForwardLayer(3),
],
layer_spacing=0.25,
)
# Center the neural network
nn.move_to(ORIGIN)
self.add(nn)
# Make a forward pass animation
forward_pass = nn.make_forward_pass_animation()
```
We can now render with:
```bash
$ manim -pql example.py
```
<img src="assets/readme/convolutional_neural_network_with_an_image.gif">
### Max Pooling
A common operation in deep learning is the 2D Max Pooling operation, which reduces the size of convolutional feature maps. We can visualize max pooling with the `MaxPooling2DLayer`.
```python
from manim_ml.neural_network import NeuralNetwork, Convolutional2DLayer, MaxPooling2DLayer
# Make neural network
nn = NeuralNetwork([
Convolutional2DLayer(1, 8),
Convolutional2DLayer(3, 6, 3),
MaxPooling2DLayer(kernel_size=2),
Convolutional2DLayer(5, 2, 2),
],
layer_spacing=0.25,
)
# Center the nn
nn.move_to(ORIGIN)
self.add(nn)
# Play animation
forward_pass = nn.make_forward_pass_animation()
self.wait(1)
self.play(forward_pass)
```
We can now render with:
```bash
$ manim -pql example.py
```
<img src="assets/readme/max_pooling.gif">
### Activation Functions
Activation functions apply non-linearities to the outputs of neural networks. They have different shapes, and it is useful to be able to visualize the functions. I added the ability to visualize activation functions over `FeedForwardLayer` and `Convolutional2DLayer` by passing an argument as follows:
```python
layer = FeedForwardLayer(num_nodes=3, activation_function="ReLU")
```
We can add these to a larger neural network as follows:
```python
from manim_ml.neural_network import NeuralNetwork, Convolutional2DLayer, FeedForwardLayer
# Make nn
nn = NeuralNetwork([
Convolutional2DLayer(1, 7, filter_spacing=0.32),
Convolutional2DLayer(3, 5, 3, filter_spacing=0.32, activation_function="ReLU"),
FeedForwardLayer(3, activation_function="Sigmoid"),
],
layer_spacing=0.25,
)
self.add(nn)
# Play animation
forward_pass = nn.make_forward_pass_animation()
self.play(forward_pass)
```
We can now render with:
```bash
$ manim -pql example.py
```
<img src="assets/readme/activation_functions.gif">
### More Complex Animations: Neural Network Dropout
```python
from manim_ml.neural_network import NeuralNetwork, FeedForwardLayer
from manim_ml.neural_network.animations.dropout import make_neural_network_dropout_animation
# Make nn
nn = NeuralNetwork([
FeedForwardLayer(3),
FeedForwardLayer(5),
FeedForwardLayer(3),
FeedForwardLayer(5),
FeedForwardLayer(4),
],
layer_spacing=0.4,
)
# Center the nn
nn.move_to(ORIGIN)
self.add(nn)
# Play animation
self.play(
make_neural_network_dropout_animation(
nn, dropout_rate=0.25, do_forward_pass=True
)
)
self.wait(1)
```
We can now render with:
```bash
$ manim -pql example.py
```
<img src="assets/readme/dropout.gif">
## Citation
If you found ManimML useful, please cite it below!
```
@misc{helbling2023manimml,
title={ManimML: Communicating Machine Learning Architectures with Animation},
author={Alec Helbling and Duen Horng Chau},
year={2023},
eprint={2306.17108},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
ManimML_helblazer811/.github/FUNDING.yml
|
# These are supported funding model platforms
github: [helblazer811] # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: # Replace with a single Ko-fi username
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']
|
ManimML_helblazer811/.github/workflows/black.yml
|
name: Lint
on: [push, pull_request]
jobs:
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: psf/black@stable
|
ManimML_helblazer811/manim_ml/__init__.py
|
from argparse import Namespace
from manim import *
import manim
from manim_ml.utils.colorschemes.colorschemes import light_mode, dark_mode, ColorScheme
class ManimMLConfig:
def __init__(self, default_color_scheme=dark_mode):
self._color_scheme = default_color_scheme
self.three_d_config = Namespace(
three_d_x_rotation = 90 * DEGREES,
three_d_y_rotation = 0 * DEGREES,
rotation_angle = 75 * DEGREES,
rotation_axis = [0.02, 1.0, 0.0]
# rotation_axis = [0.0, 0.9, 0.0]
)
@property
def color_scheme(self):
return self._color_scheme
@color_scheme.setter
def color_scheme(self, value):
if isinstance(value, str):
if value == "dark_mode":
self._color_scheme = dark_mode
elif value == "light_mode":
self._color_scheme = light_mode
else:
raise ValueError(
"Color scheme must be either 'dark_mode' or 'light_mode'"
)
elif isinstance(value, ColorScheme):
self._color_scheme = value
manim.config.background_color = self.color_scheme.background_color
# These are accessible from the manim_ml namespace
config = ManimMLConfig()
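# Example (a sketch, assuming user code imports the package as below):
#   import manim_ml
#   manim_ml.config.color_scheme = "light_mode"  # or "dark_mode", or a ColorScheme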
|
ManimML_helblazer811/manim_ml/scene.py
|
from manim import *
class ManimML3DScene(ThreeDScene):
"""
This is a wrapper class for the Manim ThreeDScene
Note: the primary purpose of this is to make it so
that everything inside of a layer
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def play(self):
""" """
pass
|
ManimML_helblazer811/manim_ml/diffusion/mcmc.py
|
"""
Tool for animating Markov Chain Monte Carlo simulations in 2D.
"""
from manim import *
import matplotlib
import matplotlib.pyplot as plt
from manim_ml.utils.mobjects.plotting import convert_matplotlib_figure_to_image_mobject
import numpy as np
import scipy
import scipy.stats
from tqdm import tqdm
import seaborn as sns
from manim_ml.utils.mobjects.probability import GaussianDistribution
######################## MCMC Algorithms #########################
def gaussian_proposal(x, sigma=0.3):
"""
Gaussian proposal distribution.
Draw new parameters from Gaussian distribution with
mean at current position and standard deviation sigma.
Since the mean is the current position and the standard
deviation is fixed, this proposal is symmetric, so the ratio
of proposal densities is 1.
Parameters
----------
x : np.ndarray or list
point to center proposal around
sigma : float, optional
standard deviation of gaussian for proposal, by default 0.3
Returns
-------
np.ndarray
proposed point
"""
# Draw x_star
x_star = x + np.random.randn(len(x)) * sigma
# proposal ratio factor is 1 since jump is symmetric
qxx = 1
return (x_star, qxx)
class MultidimensionalGaussianPosterior:
"""
N-Dimensional Gaussian distribution with
mu ~ Normal(0, 10)
var ~ LogNormal(0, 1.5)
Prior on mean is U(-500, 500)
"""
def __init__(self, ndim=2, seed=12345, scale=3, mu=None, var=None):
"""_summary_
Parameters
----------
ndim : int, optional
_description_, by default 2
seed : int, optional
_description_, by default 12345
scale : int, optional
_description_, by default 10
"""
np.random.seed(seed)
self.scale = scale
if var is None:
self.var = 10 ** (np.random.randn(ndim) * 1.5)
else:
self.var = var
if mu is None:
self.mu = scipy.stats.norm(loc=0, scale=self.scale).rvs(ndim)
else:
self.mu = mu
def __call__(self, x):
"""
Call multivariate normal posterior.
"""
if np.all(x < 500) and np.all(x > -500):
return scipy.stats.multivariate_normal(mean=self.mu, cov=self.var).logpdf(x)
else:
return -1e6
def metropolis_hastings_sampler(
log_prob_fn=MultidimensionalGaussianPosterior(),
prop_fn=gaussian_proposal,
initial_location: np.ndarray = np.array([0, 0]),
iterations=25,
warm_up=0,
ndim=2,
sampling_seed=1
):
"""Samples using a Metropolis-Hastings sampler.
Parameters
----------
log_prob_fn : function, optional
Function to compute log-posterior, by default MultidimensionalGaussianPosterior
prop_fn : function, optional
Function to compute proposal location, by default gaussian_proposal
initial_location : np.ndarray, optional
initial location for the chain
iterations : int, optional
number of iterations of the Markov chain, by default 25
warm_up : int, optional,
number of warm up iterations
Returns
-------
samples : np.ndarray
numpy array of 2D samples of length `iterations`
warm_up_samples : np.ndarray
numpy array of 2D warm up samples of length `warm_up`
candidate_samples: np.ndarray
numpy array of the candidate samples for each time step
"""
np.random.seed(sampling_seed)
# initialize chain, acceptance rate and lnprob
chain = np.zeros((iterations, ndim))
proposals = np.zeros((iterations, ndim))
lnprob = np.zeros(iterations)
accept_rate = np.zeros(iterations)
# first samples
chain[0] = initial_location
proposals[0] = initial_location
lnprob0 = log_prob_fn(initial_location)
lnprob[0] = lnprob0
# start loop
x0 = initial_location
naccept = 0
for ii in range(1, iterations):
# propose
x_star, factor = prop_fn(x0)
# draw random uniform number
u = np.random.uniform(0, 1)
# compute hastings ratio
lnprob_star = log_prob_fn(x_star)
H = np.exp(lnprob_star - lnprob0) * factor
# accept/reject step (update acceptance counter)
if u < H:
x0 = x_star
lnprob0 = lnprob_star
naccept += 1
# update chain
chain[ii] = x0
proposals[ii] = x_star
lnprob[ii] = lnprob0
accept_rate[ii] = naccept / ii
return chain, np.array([]), proposals
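# Example usage (a sketch, not part of the library API):
#   chain, _, proposals = metropolis_hastings_sampler(
#       log_prob_fn=MultidimensionalGaussianPosterior(ndim=2),
#       prop_fn=gaussian_proposal,
#       iterations=1000,
#   )
#   # `chain` holds the accepted 2D samples; `proposals` holds the candidate
#   # sample proposed at each step.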
#################### MCMC Visualization Tools ######################
def make_dist_image_mobject_from_samples(samples, ylim, xlim):
# Make the plot
matplotlib.use('Agg')
plt.figure(figsize=(10,10), dpi=100)
print(np.shape(samples[:, 0]))
displot = sns.displot(
x=samples[:, 0],
y=samples[:, 1],
cmap="Reds",
kind="kde",
norm=matplotlib.colors.LogNorm()
)
plt.ylim(ylim[0], ylim[1])
plt.xlim(xlim[0], xlim[1])
plt.axis('off')
fig = displot.fig
image_mobject = convert_matplotlib_figure_to_image_mobject(fig)
return image_mobject
class Uncreate(Create):
def __init__(
self,
mobject,
reverse_rate_function: bool = True,
introducer: bool = True,
remover: bool = True,
**kwargs,
) -> None:
super().__init__(
mobject,
reverse_rate_function=reverse_rate_function,
introducer=introducer,
remover=remover,
**kwargs,
)
class MCMCAxes(Group):
"""Container object for visualizing MCMC on a 2D axis"""
def __init__(
self,
dot_color=BLUE,
dot_radius=0.02,
accept_line_color=GREEN,
reject_line_color=RED,
line_color=BLUE,
line_stroke_width=2,
x_range=[-3, 3],
y_range=[-3, 3],
x_length=5,
y_length=5
):
super().__init__()
self.dot_color = dot_color
self.dot_radius = dot_radius
self.accept_line_color = accept_line_color
self.reject_line_color = reject_line_color
self.line_color = line_color
self.line_stroke_width = line_stroke_width
# Make the axes
self.x_length = x_length
self.y_length = y_length
self.x_range = x_range
self.y_range = y_range
self.axes = Axes(
x_range=x_range,
y_range=y_range,
x_length=x_length,
y_length=y_length,
x_axis_config={"stroke_opacity": 0.0},
y_axis_config={"stroke_opacity": 0.0},
tips=False,
)
self.add(self.axes)
@override_animation(Create)
def _create_override(self, **kwargs):
"""Overrides Create animation"""
return AnimationGroup(Create(self.axes))
def visualize_gaussian_proposal_about_point(self, mean, cov=None) -> AnimationGroup:
"""Creates a Gaussian distribution about a certain point
Parameters
----------
mean : np.ndarray
mean of proposal distribution
cov : np.ndarray
covariance matrix of proposal distribution
Returns
-------
AnimationGroup
animation of creating the proposal Gaussian distribution
"""
gaussian = GaussianDistribution(
axes=self.axes, mean=mean, cov=cov, dist_theme="gaussian"
)
create_gaussian = Create(gaussian)
return create_gaussian
def make_transition_animation(
self,
start_point,
end_point,
candidate_point,
show_dots=True,
run_time=0.1
) -> AnimationGroup:
"""Makes an transition animation for a single point on a Markov Chain
Parameters
----------
start_point: Dot
Start point of the transition
end_point : Dot
End point of the transition
show_dots: boolean, optional
Whether or not to show the dots
Returns
-------
AnimationGroup
Animation of the transition from start to end
"""
start_location = self.axes.point_to_coords(start_point.get_center())
end_location = self.axes.point_to_coords(end_point.get_center())
candidate_location = self.axes.point_to_coords(candidate_point.get_center())
# Figure out if a point is accepted or rejected
# point_is_rejected = not candidate_location == end_location
point_is_rejected = False
if point_is_rejected:
return AnimationGroup(), Dot().set_opacity(0.0)
else:
create_end_point = Create(end_point)
line = Line(
start_point,
end_point,
color=self.line_color,
stroke_width=self.line_stroke_width,
buff=-0.1
)
create_line = Create(line)
if show_dots:
return AnimationGroup(
create_end_point,
create_line,
lag_ratio=1.0,
run_time=run_time
), line
else:
return AnimationGroup(
create_line,
lag_ratio=1.0,
run_time=run_time
), line
def show_ground_truth_gaussian(self, distribution):
""" """
mean = distribution.mu
var = np.eye(2) * distribution.var
distribution_drawing = GaussianDistribution(
self.axes, mean, var, dist_theme="gaussian"
).set_opacity(0.2)
return AnimationGroup(Create(distribution_drawing))
def visualize_metropolis_hastings_chain_sampling(
self,
log_prob_fn=MultidimensionalGaussianPosterior(),
prop_fn=gaussian_proposal,
show_dots=False,
true_samples=None,
sampling_kwargs={},
):
"""
Makes an animation for visualizing a 2D Markov chain using
Metropolis-Hastings sampling
Parameters
----------
axes : manim.mobject.graphing.coordinate_systems.Axes
Manim 2D axes to plot the chain on
log_prob_fn : function, optional
Function to compute log-posterior, by default MultidimensionalGaussianPosterior
prop_fn : function, optional
Function to compute proposal location, by default gaussian_proposal
initial_location : list, optional
initial location for the markov chain, by default None
show_dots : bool, optional
whether or not to show the dots on the screen, by default False
iterations : int, optional
number of iterations of the Markov chain, by default 25
Returns
-------
animation : AnimationGroup
animation for creating the markov chain
"""
# Compute the chain samples using a Metropolis Hastings Sampler
mcmc_samples, warm_up_samples, candidate_samples = metropolis_hastings_sampler(
log_prob_fn=log_prob_fn,
prop_fn=prop_fn,
**sampling_kwargs
)
# print(f"MCMC samples: {mcmc_samples}")
# print(f"Candidate samples: {candidate_samples}")
# Make the animation for visualizing the chain
transition_animations = []
# Place the initial point
current_point = mcmc_samples[0]
current_point = Dot(
self.axes.coords_to_point(current_point[0], current_point[1]),
color=self.dot_color,
radius=self.dot_radius,
)
create_initial_point = Create(current_point)
transition_animations.append(create_initial_point)
# Show the initial point's proposal distribution
# NOTE: visualize the warm up and the iterations
lines = []
warmup_points = []
num_iterations = len(mcmc_samples) + len(warm_up_samples)
for iteration in tqdm(range(1, num_iterations)):
next_sample = mcmc_samples[iteration]
# print(f"Next sample: {next_sample}")
candidate_sample = candidate_samples[iteration - 1]
# Make the next point
next_point = Dot(
self.axes.coords_to_point(
next_sample[0],
next_sample[1]
),
color=self.dot_color,
radius=self.dot_radius,
)
candidate_point = Dot(
self.axes.coords_to_point(
candidate_sample[0],
candidate_sample[1]
),
color=self.dot_color,
radius=self.dot_radius,
)
# Make a transition animation
transition_animation, line = self.make_transition_animation(
current_point, next_point, candidate_point
)
# Save assets
lines.append(line)
if iteration < len(warm_up_samples):
warmup_points.append(candidate_point)
# Add the transition animation
transition_animations.append(transition_animation)
# Setup for next iteration
current_point = next_point
# Overall MCMC animation
# 1. Fade in the distribution
image_mobject = make_dist_image_mobject_from_samples(
true_samples,
xlim=(self.x_range[0], self.x_range[1]),
ylim=(self.y_range[0], self.y_range[1])
)
image_mobject.scale_to_fit_height(
self.y_length
)
image_mobject.move_to(self.axes)
fade_in_distribution = FadeIn(
image_mobject,
run_time=0.5
)
# 2. Start sampling the chain
chain_sampling_animation = AnimationGroup(
*transition_animations,
lag_ratio=1.0,
run_time=5.0
)
# 3. Convert the chain to points, excluding the warmup
lines = VGroup(*lines)
warm_up_points = VGroup(*warmup_points)
fade_out_lines_and_warmup = AnimationGroup(
Uncreate(lines),
Uncreate(warm_up_points),
lag_ratio=0.0
)
# Make the final animation
animation_group = Succession(
fade_in_distribution,
chain_sampling_animation,
fade_out_lines_and_warmup,
lag_ratio=1.0
)
return animation_group
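# Example (a sketch): visualizing a chain inside a Scene's construct method.
#   axes = MCMCAxes()
#   self.play(Create(axes))
#   posterior = MultidimensionalGaussianPosterior(ndim=2)
#   true_samples = scipy.stats.multivariate_normal(
#       mean=posterior.mu, cov=posterior.var
#   ).rvs(1000)
#   self.play(
#       axes.visualize_metropolis_hastings_chain_sampling(
#           log_prob_fn=posterior, true_samples=true_samples
#       )
#   )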
|
ManimML_helblazer811/manim_ml/utils/__init__.py
| |
ManimML_helblazer811/manim_ml/utils/colorschemes/__init__.py
|
from manim_ml.utils.colorschemes.colorschemes import light_mode, dark_mode
|
ManimML_helblazer811/manim_ml/utils/colorschemes/colorschemes.py
|
from manim import *
from dataclasses import dataclass
@dataclass
class ColorScheme:
primary_color: str
secondary_color: str
active_color: str
text_color: str
background_color: str
dark_mode = ColorScheme(
primary_color=BLUE,
secondary_color=WHITE,
active_color=ORANGE,
text_color=WHITE,
background_color=BLACK
)
light_mode = ColorScheme(
primary_color=BLUE,
secondary_color=BLACK,
active_color=ORANGE,
text_color=BLACK,
background_color=WHITE
)
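# Example (a sketch): a custom scheme uses the same dataclass fields.
#   custom_scheme = ColorScheme(
#       primary_color=GREEN,
#       secondary_color=WHITE,
#       active_color=YELLOW,
#       text_color=WHITE,
#       background_color=BLACK,
#   )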
|
ManimML_helblazer811/manim_ml/utils/mobjects/__init__.py
| |
ManimML_helblazer811/manim_ml/utils/mobjects/connections.py
|
import numpy as np
from manim import *
class NetworkConnection(VGroup):
"""
This class allows for creating connections
between locations in a network
"""
direction_vector_map = {"up": UP, "down": DOWN, "left": LEFT, "right": RIGHT}
def __init__(
self,
start_mobject,
end_mobject,
arc_direction="straight",
buffer=0.0,
arc_distance=0.2,
stroke_width=2.0,
color=WHITE,
active_color=ORANGE,
):
"""Creates an arrow with right angles in it connecting
two mobjects.
Parameters
----------
start_mobject : Mobject
Mobject where the start of the connection is from
end_mobject : Mobject
Mobject where the end of the connection goes to
arc_direction : str, optional
direction that the connection arcs, by default "straight"
buffer : float, optional
amount of space between the connection and mobjects at the end
arc_distance : float, optional
Distance from start and end mobject that the arc bends
stroke_width : float, optional
Stroke width of the connection
color : [float], optional
Color of the connection
active_color : [float], optional
Color of active animations for this mobject
"""
super().__init__()
assert arc_direction in ["straight", "up", "down", "left", "right"]
self.start_mobject = start_mobject
self.end_mobject = end_mobject
self.arc_direction = arc_direction
self.buffer = buffer
self.arc_distance = arc_distance
self.stroke_width = stroke_width
self.color = color
self.active_color = active_color
self.make_mobjects()
def make_mobjects(self):
"""Makes the submobjects"""
if self.start_mobject.get_center()[0] < self.end_mobject.get_center()[0]:
left_mobject = self.start_mobject
right_mobject = self.end_mobject
else:
right_mobject = self.start_mobject
left_mobject = self.end_mobject
if self.arc_direction == "straight":
# Make an arrow
self.straight_arrow = Arrow(
start=left_mobject.get_right() + np.array([self.buffer, 0.0, 0.0]),
end=right_mobject.get_left() + np.array([-1 * self.buffer, 0.0, 0.0]),
color=WHITE,
fill_color=WHITE,
stroke_opacity=1.0,
buff=0.0,
)
self.add(self.straight_arrow)
else:
# Figure out the direction of the arc
direction_vector = NetworkConnection.direction_vector_map[
self.arc_direction
]
# Based on the position of the start and end layer, and direction
# figure out how large to make each line
# Whichever mobject has a critical point the farthest
# distance in the direction_vector direction we will use that end
left_mobject_critical_point = left_mobject.get_critical_point(direction_vector)
right_mobject_critical_point = right_mobject.get_critical_point(direction_vector)
# Take the dot product of each
# These dot products correspond to the orthogonal projection
# onto the direction vectors
left_dot_product = np.dot(
left_mobject_critical_point,
direction_vector
)
right_dot_product = np.dot(
right_mobject_critical_point,
direction_vector
)
extra_distance = abs(left_dot_product - right_dot_product)
# The difference between the dot products
if left_dot_product < right_dot_product:
right_is_farthest = False
else:
right_is_farthest = True
# Make the start arc piece
start_line_start = left_mobject.get_critical_point(direction_vector)
start_line_start += direction_vector * self.buffer
start_line_end = start_line_start + direction_vector * self.arc_distance
if not right_is_farthest:
start_line_end = start_line_end + direction_vector * extra_distance
self.start_line = Line(
start_line_start,
start_line_end,
color=self.color,
stroke_width=self.stroke_width,
)
# Make the end arc piece with an arrow
end_line_end = right_mobject.get_critical_point(direction_vector)
end_line_end += direction_vector * self.buffer
end_line_start = end_line_end + direction_vector * self.arc_distance
if right_is_farthest:
end_line_start = end_line_start + direction_vector * extra_distance
self.end_arrow = Arrow(
start=end_line_start,
end=end_line_end,
color=WHITE,
fill_color=WHITE,
stroke_opacity=1.0,
buff=0.0,
)
# Make the middle arc piece
self.middle_line = Line(
start_line_end,
end_line_start,
color=self.color,
stroke_width=self.stroke_width,
)
# Add the mobjects
self.add(
self.start_line,
self.middle_line,
self.end_arrow,
)
@override_animation(ShowPassingFlash)
def _override_passing_flash(self, run_time=1.0, time_width=0.2):
"""Passing flash animation"""
if self.arc_direction == "straight":
return ShowPassingFlash(
self.straight_arrow.copy().set_color(self.active_color),
time_width=time_width,
)
else:
# Animate the start line
start_line_animation = ShowPassingFlash(
self.start_line.copy().set_color(self.active_color),
time_width=time_width,
)
# Animate the middle line
middle_line_animation = ShowPassingFlash(
self.middle_line.copy().set_color(self.active_color),
time_width=time_width,
)
# Animate the end line
end_line_animation = ShowPassingFlash(
self.end_arrow.copy().set_color(self.active_color),
time_width=time_width,
)
return AnimationGroup(
start_line_animation,
middle_line_animation,
end_line_animation,
lag_ratio=1.0,
run_time=run_time,
)
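# Example (a sketch; `layer_a` and `layer_b` are hypothetical mobjects):
#   connection = NetworkConnection(layer_a, layer_b, arc_direction="up", buffer=0.1)
#   self.add(connection)
#   self.play(ShowPassingFlash(connection))  # uses the override defined above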
|
ManimML_helblazer811/manim_ml/utils/mobjects/image.py
|
from manim import *
import numpy as np
from PIL import Image
class GrayscaleImageMobject(Group):
"""Mobject for creating images in Manim from numpy arrays"""
def __init__(self, numpy_image, height=2.3):
super().__init__()
self.numpy_image = numpy_image
assert len(np.shape(self.numpy_image)) == 2
input_image = self.numpy_image[None, :, :]
# Convert grayscale to rgb version of grayscale
input_image = np.repeat(input_image, 3, axis=0)
input_image = np.rollaxis(input_image, 0, start=3)
self.image_mobject = ImageMobject(
input_image,
image_mode="RBG",
)
self.add(self.image_mobject)
self.image_mobject.set_resampling_algorithm(
RESAMPLING_ALGORITHMS["nearest"]
)
self.image_mobject.scale_to_fit_height(height)
@classmethod
def from_path(cls, path, height=2.3):
"""Loads image from path"""
image = Image.open(path)
numpy_image = np.asarray(image)
return cls(numpy_image, height=height)
@override_animation(Create)
def create(self, run_time=2):
return FadeIn(self)
def scale(self, scale_factor, **kwargs):
"""Scales the image mobject"""
# super().scale(scale_factor)
# height = self.height
self.image_mobject.scale(scale_factor)
# self.scale_to_fit_height(2)
# self.apply_points_function_about_point(
# lambda points: scale_factor * points, **kwargs
# )
def set_opacity(self, opacity):
"""Set the opacity"""
self.image_mobject.set_opacity(opacity)
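# Example (a sketch; "digit.png" is a hypothetical path to a 2D grayscale image):
#   image_mobject = GrayscaleImageMobject.from_path("digit.png", height=1.5)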
class LabeledColorImage(Group):
"""Labeled Color Image"""
def __init__(
self, image, color=RED, label="Positive", stroke_width=5, font_size=24, buff=0.2
):
super().__init__()
self.image = image
self.color = color
self.label = label
self.stroke_width = stroke_width
self.font_size = font_size
text = Text(label, font_size=self.font_size)
text.next_to(self.image, UP, buff=buff)
rectangle = SurroundingRectangle(
self.image, color=color, buff=0.0, stroke_width=self.stroke_width
)
self.add(text)
self.add(rectangle)
self.add(self.image)
|
ManimML_helblazer811/manim_ml/utils/mobjects/list_group.py
|
from manim import *
class ListGroup(Mobject):
"""Indexable Group with traditional list operations"""
def __init__(self, *layers):
super().__init__()
self.items = [*layers]
def __getitem__(self, indices):
"""Traditional list indexing"""
return self.items[indices]
def insert(self, index, item):
"""Inserts item at index"""
self.items.insert(index, item)
self.submobjects = self.items
def remove_at_index(self, index):
"""Removes item at index"""
if index > len(self.items):
raise Exception(f"ListGroup index out of range: {index}")
item = self.items[index]
del self.items[index]
self.submobjects = self.items
return item
def remove_at_indices(self, indices):
"""Removes items at indices"""
items = []
for index in indices:
item = self.remove_at_index(index)
items.append(item)
return items
def remove(self, item):
"""Removes first instance of item"""
self.items.remove(item)
self.submobjects = self.items
return item
def get(self, index):
"""Gets item at index"""
return self.items[index]
def add(self, item):
"""Adds to end"""
self.items.append(item)
self.submobjects = self.items
def replace(self, index, item):
"""Replaces item at index"""
self.items[index] = item
self.submobjects = self.items
def index_of(self, item):
"""Returns index of item if it exists"""
for index, obj in enumerate(self.items):
if item is obj:
return index
return -1
def __len__(self):
"""Length of items"""
return len(self.items)
def set_z_index(self, z_index_value, family=True):
"""Sets z index of all values in ListGroup"""
for item in self.items:
item.set_z_index(z_index_value, family=True)
def __iter__(self):
self.current_index = -1
return self
def __next__(self): # Python 2: def next(self)
self.current_index += 1
if self.current_index < len(self.items):
return self.items[self.current_index]
raise StopIteration
def __repr__(self):
return f"ListGroup({self.items})"
|
ManimML_helblazer811/manim_ml/utils/mobjects/probability.py
|
from manim import *
import numpy as np
import math
class GaussianDistribution(VGroup):
"""Object for drawing a Gaussian distribution"""
def __init__(
self, axes, mean=None, cov=None, dist_theme="gaussian", color=ORANGE, **kwargs
):
super(VGroup, self).__init__(**kwargs)
self.axes = axes
self.mean = mean
self.cov = cov
self.dist_theme = dist_theme
self.color = color
if mean is None:
self.mean = np.array([0.0, 0.0])
if cov is None:
self.cov = np.array([[1, 0], [0, 1]])
# Make the Gaussian
if self.dist_theme == "gaussian":
self.ellipses = self.construct_gaussian_distribution(
self.mean, self.cov, color=self.color
)
self.add(self.ellipses)
elif self.dist_theme == "ellipse":
self.ellipses = self.construct_simple_gaussian_ellipse(
self.mean, self.cov, color=self.color
)
self.add(self.ellipses)
else:
raise Exception(f"Uncrecognized distribution theme: {self.dist_theme}")
"""
@override_animation(Create)
def _create_gaussian_distribution(self):
return Create(self)
"""
def compute_covariance_rotation_and_scale(self, covariance):
def eigsorted(cov):
"""
Eigenvalues and eigenvectors of the covariance matrix.
"""
vals, vecs = np.linalg.eigh(cov)
order = vals.argsort()[::-1]
return vals[order], vecs[:, order]
def cov_ellipse(cov, nstd):
"""
Source: http://stackoverflow.com/a/12321306/1391441
"""
vals, vecs = eigsorted(cov)
theta = np.degrees(np.arctan2(*vecs[:, 0][::-1]))
# Width and height are "full" widths, not radius
width, height = 2 * nstd * np.sqrt(vals)
return width, height, theta
width, height, angle = cov_ellipse(covariance, 1)
scale_factor = (
np.abs(self.axes.x_range[0] - self.axes.x_range[1]) / self.axes.x_length
)
width /= scale_factor
height /= scale_factor
return angle, width, height
def construct_gaussian_distribution(
self, mean, covariance, color=ORANGE, num_ellipses=4
):
"""Returns a 2d Gaussian distribution object with given mean and covariance"""
# map mean and covariance to frame coordinates
mean = self.axes.coords_to_point(*mean)
# Figure out the scale and angle of rotation
rotation, width, height = self.compute_covariance_rotation_and_scale(covariance)
# Make covariance ellipses
opacity = 0.0
ellipses = VGroup()
for ellipse_number in range(num_ellipses):
opacity += 1.0 / num_ellipses
ellipse_width = width * (1 - opacity)
ellipse_height = height * (1 - opacity)
ellipse = Ellipse(
width=ellipse_width,
height=ellipse_height,
color=color,
fill_opacity=opacity,
stroke_width=2.0,
)
ellipse.move_to(mean)
ellipse.rotate(rotation)
ellipses.add(ellipse)
return ellipses
def construct_simple_gaussian_ellipse(self, mean, covariance, color=ORANGE):
"""Returns a 2d Gaussian distribution object with given mean and covariance"""
# Map mean and covariance to frame coordinates
mean = self.axes.coords_to_point(*mean)
angle, width, height = self.compute_covariance_rotation_and_scale(covariance)
# Make covariance ellipses
ellipses = VGroup()
opacity = 0.4
ellipse = Ellipse(
width=width,
height=height,
color=color,
fill_opacity=opacity,
stroke_width=1.0,
)
ellipse.move_to(mean)
ellipse.rotate(angle)
ellipses.add(ellipse)
ellipses.set_z_index(3)
return ellipses
|
ManimML_helblazer811/manim_ml/utils/mobjects/plotting.py
|
from manim import *
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from PIL import Image
import io
def convert_matplotlib_figure_to_image_mobject(fig, dpi=200):
"""Takes a matplotlib figure and makes an image mobject from it
Parameters
----------
fig : matplotlib figure
matplotlib figure
"""
fig.tight_layout(pad=0)
# plt.axis('off')
fig.canvas.draw()
# Save data into a buffer
image_buffer = io.BytesIO()
plt.savefig(image_buffer, format='png', dpi=dpi)
# Reopen in PIL and convert to numpy
image = Image.open(image_buffer)
image = np.array(image)
# Convert it to an image mobject
image_mobject = ImageMobject(image, image_mode="RGB")
return image_mobject
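# Example (a sketch): converting a simple matplotlib figure.
#   fig = plt.figure()
#   plt.plot([0, 1], [1, 0])
#   image_mobject = convert_matplotlib_figure_to_image_mobject(fig, dpi=100)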
|
ManimML_helblazer811/manim_ml/utils/mobjects/gridded_rectangle.py
|
from manim import *
import numpy as np
class GriddedRectangle(VGroup):
"""Rectangle object with grid lines"""
def __init__(
self,
color=ORANGE,
height=2.0,
width=4.0,
mark_paths_closed=True,
close_new_points=True,
grid_xstep=None,
grid_ystep=None,
grid_stroke_width=0.0, # DEFAULT_STROKE_WIDTH/2,
grid_stroke_color=ORANGE,
grid_stroke_opacity=1.0,
stroke_width=2.0,
fill_opacity=0.2,
show_grid_lines=False,
dotted_lines=False,
**kwargs
):
super().__init__()
# Fields
self.color = color
self.mark_paths_closed = mark_paths_closed
self.close_new_points = close_new_points
self.grid_xstep = grid_xstep
self.grid_ystep = grid_ystep
self.grid_stroke_width = grid_stroke_width
self.grid_stroke_color = grid_stroke_color
self.grid_stroke_opacity = grid_stroke_opacity if show_grid_lines else 0.0
self.stroke_width = stroke_width
self.rotation_angles = [0, 0, 0]
self.show_grid_lines = show_grid_lines
self.untransformed_width = width
self.untransformed_height = height
self.dotted_lines = dotted_lines
# Make rectangle
if self.dotted_lines:
no_border_rectangle = Rectangle(
width=width,
height=height,
color=color,
fill_color=color,
stroke_opacity=0.0,
fill_opacity=fill_opacity,
shade_in_3d=True,
)
self.rectangle = no_border_rectangle
border_rectangle = Rectangle(
width=width,
height=height,
color=color,
fill_color=color,
fill_opacity=fill_opacity,
shade_in_3d=True,
stroke_width=stroke_width,
)
self.dotted_lines = DashedVMobject(
border_rectangle,
num_dashes=int((width + height) / 2) * 20,
)
self.add(self.dotted_lines)
else:
self.rectangle = Rectangle(
width=width,
height=height,
color=color,
stroke_width=stroke_width,
fill_color=color,
fill_opacity=fill_opacity,
shade_in_3d=True,
)
self.add(self.rectangle)
# Make grid lines
grid_lines = self.make_grid_lines()
self.add(grid_lines)
# Make corner rectangles
self.corners_dict = self.make_corners_dict()
self.add(*self.corners_dict.values())
def make_corners_dict(self):
"""Make corners dictionary"""
corners_dict = {
"top_right": Dot(
self.rectangle.get_corner([1, 1, 0]), fill_opacity=0.0, radius=0.0
),
"top_left": Dot(
self.rectangle.get_corner([-1, 1, 0]), fill_opacity=0.0, radius=0.0
),
"bottom_left": Dot(
self.rectangle.get_corner([-1, -1, 0]), fill_opacity=0.0, radius=0.0
),
"bottom_right": Dot(
self.rectangle.get_corner([1, -1, 0]), fill_opacity=0.0, radius=0.0
),
}
return corners_dict
def get_corners_dict(self):
"""Returns a dictionary of the corners"""
# Sort points through clockwise rotation of a vector in the xy plane
return self.corners_dict
def make_grid_lines(self):
"""Make grid lines in rectangle"""
grid_lines = VGroup()
v = self.rectangle.get_vertices()
if self.grid_xstep is not None:
grid_xstep = abs(self.grid_xstep)
count = int(self.width / grid_xstep)
grid = VGroup(
*(
Line(
v[1] + i * grid_xstep * RIGHT,
v[1] + i * grid_xstep * RIGHT + self.height * DOWN,
stroke_color=self.grid_stroke_color,
stroke_width=self.grid_stroke_width,
stroke_opacity=self.grid_stroke_opacity,
shade_in_3d=True,
)
for i in range(1, count)
)
)
grid_lines.add(grid)
if self.grid_ystep is not None:
grid_ystep = abs(self.grid_ystep)
count = int(self.height / grid_ystep)
grid = VGroup(
*(
Line(
v[1] + i * grid_ystep * DOWN,
v[1] + i * grid_ystep * DOWN + self.width * RIGHT,
stroke_color=self.grid_stroke_color,
stroke_width=self.grid_stroke_width,
stroke_opacity=self.grid_stroke_opacity,
)
for i in range(1, count)
)
)
grid_lines.add(grid)
return grid_lines
def get_center(self):
return self.rectangle.get_center()
def get_normal_vector(self):
vertex_1 = self.rectangle.get_vertices()[0]
vertex_2 = self.rectangle.get_vertices()[1]
vertex_3 = self.rectangle.get_vertices()[2]
# First vector
normal_vector = np.cross((vertex_1 - vertex_2), (vertex_1 - vertex_3))
return normal_vector
def set_color(self, color):
"""Sets the color of the gridded rectangle"""
self.color = color
self.rectangle.set_color(color)
self.rectangle.set_stroke_color(color)
|
ManimML_helblazer811/manim_ml/utils/testing/frames_comparison.py
|
from __future__ import annotations
import functools
import inspect
from pathlib import Path
from typing import Callable
from _pytest.fixtures import FixtureRequest
from manim import Scene
from manim._config import tempconfig
from manim._config.utils import ManimConfig
from manim.camera.three_d_camera import ThreeDCamera
from manim.renderer.cairo_renderer import CairoRenderer
from manim.scene.three_d_scene import ThreeDScene
from manim.utils.testing._frames_testers import _ControlDataWriter, _FramesTester
from manim.utils.testing._test_class_makers import (
DummySceneFileWriter,
_make_scene_file_writer_class,
_make_test_renderer_class,
_make_test_scene_class,
)
SCENE_PARAMETER_NAME = "scene"
_tests_root_dir_path = Path(__file__).absolute().parents[2]
print(f"Tests root path: {_tests_root_dir_path}")
PATH_CONTROL_DATA = _tests_root_dir_path / Path("control_data", "graphical_units_data")
def frames_comparison(
func=None,
*,
last_frame: bool = True,
renderer_class=CairoRenderer,
base_scene=Scene,
**custom_config,
):
"""Compares the frames generated by the test with control frames previously registered.
If there are no control frames for this test, the test will fail. To generate
control frames for a given test, pass ``--set_test`` flag to pytest
while running the test.
Note that this decorator can be used with or without parentheses.
Parameters
----------
last_frame
whether the test should test the last frame, by default True.
renderer_class
The base renderer to use (OpenGLRenderer/CairoRenderer), by default CairoRenderer
base_scene
The base class for the scene (ThreeDScene, etc.), by default Scene
.. warning::
By default, last_frame is True, which means that only the last frame is tested.
If the scene has a moving animation, then the test must set last_frame to False.
"""
def decorator_maker(tested_scene_construct):
if (
SCENE_PARAMETER_NAME
not in inspect.getfullargspec(tested_scene_construct).args
):
raise Exception(
f"Invalid graphical test function test function : must have '{SCENE_PARAMETER_NAME}'as one of the parameters.",
)
# Exclude "scene" from the argument list of the signature.
old_sig = inspect.signature(
functools.partial(tested_scene_construct, scene=None),
)
if "__module_test__" not in tested_scene_construct.__globals__:
raise Exception(
"There is no module test name indicated for the graphical unit test. You have to declare __module_test__ in the test file.",
)
module_name = tested_scene_construct.__globals__.get("__module_test__")
test_name = tested_scene_construct.__name__[len("test_") :]
@functools.wraps(tested_scene_construct)
# The "request" parameter is meant to be used as a fixture by pytest. See below.
def wrapper(*args, request: FixtureRequest, tmp_path, **kwargs):
# Wraps the test_function to a construct method, to "freeze" the eventual additional arguments (parametrizations fixtures).
construct = functools.partial(tested_scene_construct, *args, **kwargs)
# Kwargs contains the eventual parametrization arguments.
# This modifies the test_name so that it is defined by the parametrization
# arguments too.
# Example: if "length" is parametrized from 0 to 20, the kwargs
# will be once with {"length" : 1}, etc.
test_name_with_param = test_name + "_".join(
f"_{str(tup[0])}[{str(tup[1])}]" for tup in kwargs.items()
)
config_tests = _config_test(last_frame)
config_tests["text_dir"] = tmp_path
config_tests["tex_dir"] = tmp_path
if last_frame:
config_tests["frame_rate"] = 1
config_tests["dry_run"] = True
setting_test = request.config.getoption("--set_test")
try:
test_file_path = tested_scene_construct.__globals__["__file__"]
except Exception:
test_file_path = None
real_test = _make_test_comparing_frames(
file_path=_control_data_path(
test_file_path,
module_name,
test_name_with_param,
setting_test,
),
base_scene=base_scene,
construct=construct,
renderer_class=renderer_class,
is_set_test_data_test=setting_test,
last_frame=last_frame,
show_diff=request.config.getoption("--show_diff"),
size_frame=(config_tests["pixel_height"], config_tests["pixel_width"]),
)
# Isolate the config used for the test, to avoid modifying the global config during the test run.
with tempconfig({**config_tests, **custom_config}):
real_test()
parameters = list(old_sig.parameters.values())
# Adds "request" param into the signature of the wrapper, to use the associated pytest fixture.
# This fixture is needed to have access to flags value and pytest's config. See above.
if "request" not in old_sig.parameters:
parameters += [inspect.Parameter("request", inspect.Parameter.KEYWORD_ONLY)]
if "tmp_path" not in old_sig.parameters:
parameters += [
inspect.Parameter("tmp_path", inspect.Parameter.KEYWORD_ONLY),
]
new_sig = old_sig.replace(parameters=parameters)
wrapper.__signature__ = new_sig
# Reach a bit into pytest internals to hoist the marks from our wrapped
# function.
setattr(wrapper, "pytestmark", [])
new_marks = getattr(tested_scene_construct, "pytestmark", [])
wrapper.pytestmark = new_marks
return wrapper
# Case where the decorator is called with and without parentheses.
# If func is None, callable(None) returns False
if callable(func):
return decorator_maker(func)
return decorator_maker
def _make_test_comparing_frames(
file_path: Path,
base_scene: type[Scene],
construct: Callable[[Scene], None],
renderer_class: type, # Renderer type, there is no superclass renderer yet .....
is_set_test_data_test: bool,
last_frame: bool,
show_diff: bool,
size_frame: tuple,
) -> Callable[[], None]:
"""Create the real pytest test that will fail if the frames mismatch.
Parameters
----------
file_path
The path of the control frames.
base_scene
The base scene class.
construct
The construct method (= the test function)
renderer_class
The renderer base class.
show_diff
whether to visually show_diff (see --show_diff)
Returns
-------
Callable[[], None]
The pytest test.
"""
if is_set_test_data_test:
frames_tester = _ControlDataWriter(file_path, size_frame=size_frame)
else:
frames_tester = _FramesTester(file_path, show_diff=show_diff)
file_writer_class = (
_make_scene_file_writer_class(frames_tester)
if not last_frame
else DummySceneFileWriter
)
testRenderer = _make_test_renderer_class(renderer_class)
def real_test():
with frames_tester.testing():
sceneTested = _make_test_scene_class(
base_scene=base_scene,
construct_test=construct,
# NOTE this is really ugly but it's due to the very bad design of the two renderers.
# If you pass a custom renderer to the Scene, the Camera class given as an argument in the Scene
# is not passed to the renderer. See __init__ of Scene.
# This potentially prevents OpenGL testing.
test_renderer=testRenderer(file_writer_class=file_writer_class)
if base_scene is not ThreeDScene
else testRenderer(
file_writer_class=file_writer_class,
camera_class=ThreeDCamera,
), # testRenderer(file_writer_class=file_writer_class),
)
scene_tested = sceneTested(skip_animations=True)
scene_tested.render()
if last_frame:
frames_tester.check_frame(-1, scene_tested.renderer.get_frame())
return real_test
def _control_data_path(
test_file_path: str | None, module_name: str, test_name: str, setting_test: bool
) -> Path:
if test_file_path is None:
# For some reason, path to test file containing @frames_comparison could not
# be determined. Use local directory instead.
test_file_path = __file__
path = Path(test_file_path).absolute().parent / "control_data" / module_name
if setting_test:
# Create the directory if not existing.
path.mkdir(exist_ok=True)
if not setting_test and not path.exists():
raise Exception(f"The control frames directory can't be found in {path}")
path = (path / test_name).with_suffix(".npz")
if not setting_test and not path.is_file():
raise Exception(
f"The control frame for the test {test_name} cannot be found in {path.parent}. "
"Make sure you generated the control frames first.",
)
return path
def _config_test(last_frame: bool) -> ManimConfig:
return ManimConfig().digest_file(
str(
Path(__file__).parent
/ (
"config_graphical_tests_monoframe.cfg"
if last_frame
else "config_graphical_tests_multiframes.cfg"
),
),
)
|
ManimML_helblazer811/manim_ml/utils/testing/doc_directive.py
|
r"""
A directive for including Manim videos in a Sphinx document
"""
from __future__ import annotations
import csv
import itertools as it
import os
import re
import shutil
import sys
from pathlib import Path
from timeit import timeit
import jinja2
from docutils import nodes
from docutils.parsers.rst import Directive, directives # type: ignore
from docutils.statemachine import StringList
from manim import QUALITIES
classnamedict = {}
class SkipManimNode(nodes.Admonition, nodes.Element):
"""Auxiliary node class that is used when the ``skip-manim`` tag is present
or ``.pot`` files are being built.
Skips rendering the manim directive and outputs a placeholder instead.
"""
pass
def visit(self, node, name=""):
self.visit_admonition(node, name)
if not isinstance(node[0], nodes.title):
node.insert(0, nodes.title("skip-manim", "Example Placeholder"))
def depart(self, node):
self.depart_admonition(node)
def process_name_list(option_input: str, reference_type: str) -> list[str]:
r"""Reformats a string of space separated class names
as a list of strings containing valid Sphinx references.
Tests
-----
::
>>> process_name_list("Tex TexTemplate", "class")
[':class:`~.Tex`', ':class:`~.TexTemplate`']
>>> process_name_list("Scene.play Mobject.rotate", "func")
[':func:`~.Scene.play`', ':func:`~.Mobject.rotate`']
"""
return [f":{reference_type}:`~.{name}`" for name in option_input.split()]
class ManimDirective(Directive):
r"""The manim directive, rendering videos while building
the documentation.
See the module docstring for documentation.
"""
has_content = True
required_arguments = 1
optional_arguments = 0
option_spec = {
"hide_source": bool,
"no_autoplay": bool,
"quality": lambda arg: directives.choice(
arg,
("low", "medium", "high", "fourk"),
),
"save_as_gif": bool,
"save_last_frame": bool,
"ref_modules": lambda arg: process_name_list(arg, "mod"),
"ref_classes": lambda arg: process_name_list(arg, "class"),
"ref_functions": lambda arg: process_name_list(arg, "func"),
"ref_methods": lambda arg: process_name_list(arg, "meth"),
}
final_argument_whitespace = True
def run(self):
# Rendering is skipped if the tag skip-manim is present,
# or if we are making the pot-files
should_skip = (
"skip-manim" in self.state.document.settings.env.app.builder.tags.tags
or self.state.document.settings.env.app.builder.name == "gettext"
)
if should_skip:
node = SkipManimNode()
self.state.nested_parse(
StringList(
[
f"Placeholder block for ``{self.arguments[0]}``.",
"",
".. code-block:: python",
"",
]
+ [" " + line for line in self.content]
),
self.content_offset,
node,
)
return [node]
from manim import config, tempconfig
global classnamedict
clsname = self.arguments[0]
if clsname not in classnamedict:
classnamedict[clsname] = 1
else:
classnamedict[clsname] += 1
hide_source = "hide_source" in self.options
no_autoplay = "no_autoplay" in self.options
save_as_gif = "save_as_gif" in self.options
save_last_frame = "save_last_frame" in self.options
assert not (save_as_gif and save_last_frame)
ref_content = (
self.options.get("ref_modules", [])
+ self.options.get("ref_classes", [])
+ self.options.get("ref_functions", [])
+ self.options.get("ref_methods", [])
)
if ref_content:
ref_block = "References: " + " ".join(ref_content)
else:
ref_block = ""
if "quality" in self.options:
quality = f'{self.options["quality"]}_quality'
else:
quality = "example_quality"
frame_rate = QUALITIES[quality]["frame_rate"]
pixel_height = QUALITIES[quality]["pixel_height"]
pixel_width = QUALITIES[quality]["pixel_width"]
state_machine = self.state_machine
document = state_machine.document
source_file_name = Path(document.attributes["source"])
source_rel_name = source_file_name.relative_to(setup.confdir)
source_rel_dir = source_rel_name.parents[0]
dest_dir = Path(setup.app.builder.outdir, source_rel_dir).absolute()
if not dest_dir.exists():
dest_dir.mkdir(parents=True, exist_ok=True)
source_block = [
".. code-block:: python",
"",
" from manim import *\n",
*(" " + line for line in self.content),
]
source_block = "\n".join(source_block)
config.media_dir = (Path(setup.confdir) / "media").absolute()
config.images_dir = "{media_dir}/images"
config.video_dir = "{media_dir}/videos/{quality}"
output_file = f"{clsname}-{classnamedict[clsname]}"
config.assets_dir = Path("_static")
config.progress_bar = "none"
config.verbosity = "WARNING"
example_config = {
"frame_rate": frame_rate,
"no_autoplay": no_autoplay,
"pixel_height": pixel_height,
"pixel_width": pixel_width,
"save_last_frame": save_last_frame,
"write_to_movie": not save_last_frame,
"output_file": output_file,
}
if save_last_frame:
example_config["format"] = None
if save_as_gif:
example_config["format"] = "gif"
user_code = self.content
if user_code[0].startswith(">>> "): # check whether block comes from doctest
user_code = [
line[4:] for line in user_code if line.startswith((">>> ", "... "))
]
code = [
"from manim import *",
*user_code,
f"{clsname}().render()",
]
with tempconfig(example_config):
run_time = timeit(lambda: exec("\n".join(code), globals()), number=1)
video_dir = config.get_dir("video_dir")
images_dir = config.get_dir("images_dir")
_write_rendering_stats(
clsname,
run_time,
self.state.document.settings.env.docname,
)
# copy video file to output directory
if not (save_as_gif or save_last_frame):
filename = f"{output_file}.mp4"
filesrc = video_dir / filename
destfile = Path(dest_dir, filename)
shutil.copyfile(filesrc, destfile)
elif save_as_gif:
filename = f"{output_file}.gif"
filesrc = video_dir / filename
elif save_last_frame:
filename = f"{output_file}.png"
filesrc = images_dir / filename
else:
raise ValueError("Invalid combination of render flags received.")
rendered_template = jinja2.Template(TEMPLATE).render(
clsname=clsname,
clsname_lowercase=clsname.lower(),
hide_source=hide_source,
filesrc_rel=Path(filesrc).relative_to(setup.confdir).as_posix(),
no_autoplay=no_autoplay,
output_file=output_file,
save_last_frame=save_last_frame,
save_as_gif=save_as_gif,
source_block=source_block,
ref_block=ref_block,
)
state_machine.insert_input(
rendered_template.split("\n"),
source=document.attributes["source"],
)
return []
rendering_times_file_path = Path("../rendering_times.csv")
def _write_rendering_stats(scene_name, run_time, file_name):
with rendering_times_file_path.open("a") as file:
csv.writer(file).writerow(
[
re.sub(r"^(reference\/)|(manim\.)", "", file_name),
scene_name,
"%.3f" % run_time,
],
)
def _log_rendering_times(*args):
if rendering_times_file_path.exists():
with rendering_times_file_path.open() as file:
data = list(csv.reader(file))
if len(data) == 0:
sys.exit()
print("\nRendering Summary\n-----------------\n")
max_file_length = max(len(row[0]) for row in data)
for key, group in it.groupby(data, key=lambda row: row[0]):
key = key.ljust(max_file_length + 1, ".")
group = list(group)
if len(group) == 1:
row = group[0]
print(f"{key}{row[2].rjust(7, '.')}s {row[1]}")
continue
time_sum = sum(float(row[2]) for row in group)
print(
f"{key}{f'{time_sum:.3f}'.rjust(7, '.')}s => {len(group)} EXAMPLES",
)
for row in group:
print(f"{' '*(max_file_length)} {row[2].rjust(7)}s {row[1]}")
print("")
def _delete_rendering_times(*args):
if rendering_times_file_path.exists():
rendering_times_file_path.unlink()
def setup(app):
app.add_node(SkipManimNode, html=(visit, depart))
setup.app = app
setup.config = app.config
setup.confdir = app.confdir
app.add_directive("manim", ManimDirective)
app.connect("builder-inited", _delete_rendering_times)
app.connect("build-finished", _log_rendering_times)
metadata = {"parallel_read_safe": False, "parallel_write_safe": True}
return metadata
TEMPLATE = r"""
{% if not hide_source %}
.. raw:: html
<div id="{{ clsname_lowercase }}" class="admonition admonition-manim-example">
<p class="admonition-title">Example: {{ clsname }} <a class="headerlink" href="#{{ clsname_lowercase }}">¶</a></p>
{% endif %}
{% if not (save_as_gif or save_last_frame) %}
.. raw:: html
<video
class="manim-video"
controls
loop
{{ '' if no_autoplay else 'autoplay' }}
src="./{{ output_file }}.mp4">
</video>
{% elif save_as_gif %}
.. image:: /{{ filesrc_rel }}
:align: center
{% elif save_last_frame %}
.. image:: /{{ filesrc_rel }}
:align: center
{% endif %}
{% if not hide_source %}
{{ source_block }}
{{ ref_block }}
.. raw:: html
</div>
{% endif %}
"""
|
ManimML_helblazer811/manim_ml/decision_tree/decision_tree_surface.py
|
from manim import *
import numpy as np
from collections import deque
from sklearn.tree import _tree as ctree
class AABB:
"""Axis-aligned bounding box"""
def __init__(self, n_features):
self.limits = np.array([[-np.inf, np.inf]] * n_features)
def split(self, f, v):
left = AABB(self.limits.shape[0])
right = AABB(self.limits.shape[0])
left.limits = self.limits.copy()
right.limits = self.limits.copy()
left.limits[f, 1] = v
right.limits[f, 0] = v
return left, right
def tree_bounds(tree, n_features=None):
"""Compute final decision rule for each node in tree"""
if n_features is None:
n_features = np.max(tree.feature) + 1
aabbs = [AABB(n_features) for _ in range(tree.node_count)]
queue = deque([0])
while queue:
i = queue.pop()
l = tree.children_left[i]
r = tree.children_right[i]
if l != ctree.TREE_LEAF:
aabbs[l], aabbs[r] = aabbs[i].split(tree.feature[i], tree.threshold[i])
queue.extend([l, r])
return aabbs
def compute_decision_areas(
tree_classifier,
maxrange,
x=0,
y=1,
n_features=None
):
"""Extract decision areas.
tree_classifier: Instance of a sklearn.tree.DecisionTreeClassifier
    maxrange: values to insert for [left, right, bottom, top] if the interval is open (+/-inf)
x: index of the feature that goes on the x axis
y: index of the feature that goes on the y axis
n_features: override autodetection of number of features
"""
tree = tree_classifier.tree_
aabbs = tree_bounds(tree, n_features)
maxrange = np.array(maxrange)
rectangles = []
for i in range(len(aabbs)):
if tree.children_left[i] != ctree.TREE_LEAF:
continue
l = aabbs[i].limits
r = [l[x, 0], l[x, 1], l[y, 0], l[y, 1], np.argmax(tree.value[i])]
        # Clip out-of-bounds limits to maxrange (done vectorized below)
        rectangles.append(r)
rectangles = np.array(rectangles)
rectangles[:, [0, 2]] = np.maximum(rectangles[:, [0, 2]], maxrange[0::2])
rectangles[:, [1, 3]] = np.minimum(rectangles[:, [1, 3]], maxrange[1::2])
return rectangles
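# A minimal usage sketch for compute_decision_areas. The classifier, data,
# and bounds below are illustrative assumptions, not part of this module:
#
#   from sklearn.datasets import load_iris
#   from sklearn.tree import DecisionTreeClassifier
#
#   iris = load_iris()
#   clf = DecisionTreeClassifier(max_depth=3).fit(iris.data[:, 0:2], iris.target)
#   # maxrange is [left, right, bottom, top] in data coordinates
#   rects = compute_decision_areas(clf, [4.0, 8.0, 1.5, 4.5], x=0, y=1, n_features=2)
#   # each row of rects is [xmin, xmax, ymin, ymax, class_index] for one leaf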
def plot_areas(rectangles):
    """Debug helper: draws the decision rectangles with matplotlib."""
    import matplotlib.pyplot as plt
    from matplotlib.patches import Rectangle as MplRectangle

    for rect in rectangles:
        color = ["b", "r"][int(rect[4])]
        rp = MplRectangle(
            (rect[0], rect[2]),
            rect[1] - rect[0],
            rect[3] - rect[2],
            color=color,
            alpha=0.3,
        )
        plt.gca().add_artist(rp)
def merge_overlapping_polygons(all_polygons, colors=[BLUE, GREEN, ORANGE]):
# get all polygons of each color
polygon_dict = {
str(BLUE).lower(): [],
str(GREEN).lower(): [],
str(ORANGE).lower(): [],
}
    for polygon in all_polygons:
        polygon_dict[str(polygon.color).lower()].append(polygon)
return_polygons = []
for color in colors:
color = str(color).lower()
polygons = polygon_dict[color]
points = set()
for polygon in polygons:
vertices = polygon.get_vertices().tolist()
vertices = [tuple(vert) for vert in vertices]
for pt in vertices:
                if pt in points:  # Shared vertex, remove it.
points.remove(pt)
else:
points.add(pt)
points = list(points)
sort_x = sorted(points)
sort_y = sorted(points, key=lambda x: x[1])
edges_h = {}
edges_v = {}
i = 0
while i < len(points):
curr_y = sort_y[i][1]
while i < len(points) and sort_y[i][1] == curr_y:
edges_h[sort_y[i]] = sort_y[i + 1]
edges_h[sort_y[i + 1]] = sort_y[i]
i += 2
i = 0
while i < len(points):
curr_x = sort_x[i][0]
while i < len(points) and sort_x[i][0] == curr_x:
edges_v[sort_x[i]] = sort_x[i + 1]
edges_v[sort_x[i + 1]] = sort_x[i]
i += 2
# Get all the polygons.
while edges_h:
# We can start with any point.
polygon = [(edges_h.popitem()[0], 0)]
while True:
curr, e = polygon[-1]
if e == 0:
next_vertex = edges_v.pop(curr)
polygon.append((next_vertex, 1))
else:
next_vertex = edges_h.pop(curr)
polygon.append((next_vertex, 0))
if polygon[-1] == polygon[0]:
# Closed polygon
polygon.pop()
break
# Remove implementation-markers from the polygon.
poly = [point for point, _ in polygon]
for vertex in poly:
if vertex in edges_h:
edges_h.pop(vertex)
if vertex in edges_v:
edges_v.pop(vertex)
polygon = Polygon(*poly, color=color, fill_opacity=0.3, stroke_opacity=1.0)
return_polygons.append(polygon)
return return_polygons
class IrisDatasetPlot(VGroup):
    def __init__(self, iris):
        super().__init__()
        points = iris.data[:, 0:2]
labels = iris.feature_names
targets = iris.target
# Make points
self.point_group = self._make_point_group(points, targets)
# Make axes
self.axes_group = self._make_axes_group(points, labels)
# Make legend
self.legend_group = self._make_legend(
[BLUE, ORANGE, GREEN], iris.target_names, self.axes_group
)
# Make title
# title_text = "Iris Dataset Plot"
# self.title = Text(title_text).match_y(self.axes_group).shift([0.5, self.axes_group.height / 2 + 0.5, 0])
# Make all group
self.all_group = Group(self.point_group, self.axes_group, self.legend_group)
# scale the groups
self.point_group.scale(1.6)
self.point_group.match_x(self.axes_group)
self.point_group.match_y(self.axes_group)
self.point_group.shift([0.2, 0, 0])
self.axes_group.scale(0.7)
self.all_group.shift([0, 0.2, 0])
@override_animation(Create)
def create_animation(self):
animation_group = AnimationGroup(
# Perform the animations
Create(self.point_group, run_time=2),
Wait(0.5),
Create(self.axes_group, run_time=2),
# add title
# Create(self.title),
Create(self.legend_group),
)
return animation_group
def _make_point_group(self, points, targets, class_colors=[BLUE, ORANGE, GREEN]):
point_group = VGroup()
for point_index, point in enumerate(points):
# draw the dot
current_target = targets[point_index]
color = class_colors[current_target]
dot = Dot(point=np.array([point[0], point[1], 0])).set_color(color)
dot.scale(0.5)
point_group.add(dot)
return point_group
def _make_legend(self, class_colors, feature_labels, axes):
legend_group = VGroup()
# Make Text
        setosa = Text("Setosa", color=BLUE)
        versicolor = Text("Versicolor", color=ORANGE)
        virginica = Text("Virginica", color=GREEN)
        labels = VGroup(setosa, versicolor, virginica).arrange(
direction=RIGHT, aligned_edge=LEFT, buff=2.0
)
labels.scale(0.5)
legend_group.add(labels)
# surrounding rectangle
surrounding_rectangle = SurroundingRectangle(labels, color=WHITE)
surrounding_rectangle.move_to(labels)
legend_group.add(surrounding_rectangle)
# shift the legend group
legend_group.move_to(axes)
legend_group.shift([0, -3.0, 0])
legend_group.match_x(axes[0][0])
return legend_group
def _make_axes_group(self, points, labels, font="Source Han Sans", font_scale=0.75):
axes_group = VGroup()
# make the axes
x_range = [
np.amin(points, axis=0)[0] - 0.2,
np.amax(points, axis=0)[0] - 0.2,
0.5,
]
y_range = [np.amin(points, axis=0)[1] - 0.2, np.amax(points, axis=0)[1], 0.5]
axes = Axes(
x_range=x_range,
y_range=y_range,
x_length=9,
y_length=6.5,
# axis_config={"number_scale_value":0.75, "include_numbers":True},
tips=False,
).shift([0.5, 0.25, 0])
axes_group.add(axes)
# make axis labels
# x_label
x_label = (
Text(labels[0], font=font)
.match_y(axes.get_axes()[0])
.shift([0.5, -0.75, 0])
.scale(font_scale)
)
axes_group.add(x_label)
# y_label
y_label = (
Text(labels[1], font=font)
.match_x(axes.get_axes()[1])
.shift([-0.75, 0, 0])
.rotate(np.pi / 2)
.scale(font_scale)
)
axes_group.add(y_label)
return axes_group
class DecisionTreeSurface(VGroup):
    def __init__(self, tree_clf, data, axes, class_colors=[BLUE, ORANGE, GREEN]):
        super().__init__()
        # take the tree and construct the surface from it
self.tree_clf = tree_clf
self.data = data
self.axes = axes
self.class_colors = class_colors
self.surface_rectangles = self.generate_surface_rectangles()
def generate_surface_rectangles(self):
# compute data bounds
left = np.amin(self.data[:, 0]) - 0.2
right = np.amax(self.data[:, 0]) - 0.2
top = np.amax(self.data[:, 1])
bottom = np.amin(self.data[:, 1]) - 0.2
maxrange = [left, right, bottom, top]
rectangles = compute_decision_areas(
self.tree_clf, maxrange, x=0, y=1, n_features=2
)
# turn the rectangle objects into manim rectangles
def convert_rectangle_to_polygon(rect):
            # get the corner points of the rectangle in the plot coordinate frame
            # (rect is [xmin, xmax, ymin, ymax, class_index])
            top_left = [rect[0], rect[3]]
            top_right = [rect[1], rect[3]]
            bottom_right = [rect[1], rect[2]]
            bottom_left = [rect[0], rect[2]]
            # convert those points into scene coordinates
            top_left_coord = self.axes.coords_to_point(*top_left)
            top_right_coord = self.axes.coords_to_point(*top_right)
            bottom_right_coord = self.axes.coords_to_point(*bottom_right)
            bottom_left_coord = self.axes.coords_to_point(*bottom_left)
            points = [
                top_left_coord,
                top_right_coord,
                bottom_right_coord,
                bottom_left_coord,
            ]
# construct a polygon object from those manim coordinates
rectangle = Polygon(
*points, color=color, fill_opacity=0.3, stroke_opacity=0.0
)
return rectangle
manim_rectangles = []
for rect in rectangles:
color = self.class_colors[int(rect[4])]
rectangle = convert_rectangle_to_polygon(rect)
manim_rectangles.append(rectangle)
manim_rectangles = merge_overlapping_polygons(
manim_rectangles, colors=[BLUE, GREEN, ORANGE]
)
return manim_rectangles
@override_animation(Create)
def create_override(self):
# play a reveal of all of the surface rectangles
animations = []
for rectangle in self.surface_rectangles:
animations.append(Create(rectangle))
animation_group = AnimationGroup(*animations)
return animation_group
@override_animation(Uncreate)
def uncreate_override(self):
# play a reveal of all of the surface rectangles
animations = []
for rectangle in self.surface_rectangles:
animations.append(Uncreate(rectangle))
animation_group = AnimationGroup(*animations)
return animation_group
def make_split_to_animation_map(self):
"""
Returns a dictionary mapping a given split
node to an animation to be played
"""
# Create an initial decision tree surface
# Go through each split node
# 1. Make a line split animation
# 2. Create the relevant classification areas
# and transform the old ones to them
pass
|
ManimML_helblazer811/manim_ml/decision_tree/helpers.py
|
def compute_node_depths(tree):
"""Computes the depths of nodes for level order traversal"""
def depth(node_index, current_node_index=0):
"""Compute the height of a node"""
if current_node_index == node_index:
return 0
elif (
tree.children_left[current_node_index]
== tree.children_right[current_node_index]
):
return -1
else:
                # Search for the node in each subtree
l_depth = depth(node_index, tree.children_left[current_node_index])
r_depth = depth(node_index, tree.children_right[current_node_index])
# The index is only in one of them
if l_depth != -1:
return l_depth + 1
elif r_depth != -1:
return r_depth + 1
else:
return -1
node_depths = [depth(index) for index in range(tree.node_count)]
return node_depths
def compute_level_order_traversal(tree):
"""Computes level order traversal of a sklearn tree"""
def depth(node_index, current_node_index=0):
"""Compute the height of a node"""
if current_node_index == node_index:
return 0
elif (
tree.children_left[current_node_index]
== tree.children_right[current_node_index]
):
return -1
else:
                # Search for the node in each subtree
l_depth = depth(node_index, tree.children_left[current_node_index])
r_depth = depth(node_index, tree.children_right[current_node_index])
# The index is only in one of them
if l_depth != -1:
return l_depth + 1
elif r_depth != -1:
return r_depth + 1
else:
return -1
node_depths = [(index, depth(index)) for index in range(tree.node_count)]
node_depths = sorted(node_depths, key=lambda x: x[1])
sorted_inds = [node_depth[0] for node_depth in node_depths]
return sorted_inds
def compute_bfs_traversal(tree):
"""Traverses the tree in BFS order and returns the nodes in order"""
traversal_order = []
tree_root_index = 0
queue = [tree_root_index]
while len(queue) > 0:
current_index = queue.pop(0)
traversal_order.append(current_index)
        left_child_index = tree.children_left[current_index]
        right_child_index = tree.children_right[current_index]
is_leaf_node = left_child_index == right_child_index
if not is_leaf_node:
queue.append(left_child_index)
queue.append(right_child_index)
return traversal_order
def compute_best_first_traversal(tree):
"""Traverses the tree according to the best split first order"""
pass
def compute_node_to_parent_mapping(tree):
"""Returns a hashmap mapping node indices to their parent indices"""
node_to_parent = {0: -1} # Root has no parent
num_nodes = tree.node_count
for node_index in range(num_nodes):
# Explore left children
left_child_node_index = tree.children_left[node_index]
if left_child_node_index != -1:
node_to_parent[left_child_node_index] = node_index
# Explore right children
right_child_node_index = tree.children_right[node_index]
if right_child_node_index != -1:
node_to_parent[right_child_node_index] = node_index
return node_to_parent
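# Usage sketch for these helpers. ``clf`` and its training data are
# illustrative assumptions; any fitted sklearn decision tree works:
#
#   from sklearn.tree import DecisionTreeClassifier
#   clf = DecisionTreeClassifier(max_depth=2).fit(X, y)  # X, y: your dataset
#   order = compute_bfs_traversal(clf.tree_)             # node indices, breadth-first
#   parents = compute_node_to_parent_mapping(clf.tree_)  # child index -> parent index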
|
ManimML_helblazer811/manim_ml/decision_tree/decision_tree.py
|
"""
Module for visualizing decision trees in Manim.
It parses a decision tree classifier from sklearn.
TODO return a map from nodes to split animation for BFS tree expansion
TODO reimplement the decision 2D decision tree surface drawing.
"""
from manim import *
from manim_ml.decision_tree.decision_tree_surface import (
compute_decision_areas,
merge_overlapping_polygons,
)
import manim_ml.decision_tree.helpers as helpers
import numpy as np
from PIL import Image
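# Usage sketch for the classes below. The classifier, image paths, and
# feature names are illustrative assumptions, not shipped with this module:
#
#   from sklearn.tree import DecisionTreeClassifier
#   clf = DecisionTreeClassifier(max_depth=2).fit(X, y)  # X, y: your dataset
#   tree_diagram = DecisionTreeDiagram(
#       clf.tree_,
#       feature_names=["feature_0", "feature_1"],
#       class_images_paths=["class_0.png", "class_1.png"],  # hypothetical paths
#       class_colors=[RED, GREEN],
#   )
#   # In a Scene: self.add(tree_diagram)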
class LeafNode(Group):
"""Leaf node in tree"""
def __init__(
self, class_index, display_type="image", class_image_paths=[], class_colors=[]
):
super().__init__()
self.display_type = display_type
self.class_image_paths = class_image_paths
self.class_colors = class_colors
assert self.display_type in ["image", "text"]
if self.display_type == "image":
self._construct_image_node(class_index)
else:
raise NotImplementedError()
def _construct_image_node(self, class_index):
"""Make an image node"""
# Get image
image_path = self.class_image_paths[class_index]
pil_image = Image.open(image_path)
node = ImageMobject(pil_image)
node.scale(1.5)
rectangle = Rectangle(
width=node.width + 0.05,
height=node.height + 0.05,
color=self.class_colors[class_index],
stroke_width=6,
)
rectangle.move_to(node.get_center())
rectangle.shift([-0.02, 0.02, 0])
self.add(rectangle)
self.add(node)
class SplitNode(VGroup):
"""Node for splitting decision in tree"""
def __init__(self, feature, threshold):
super().__init__()
node_text = f"{feature}\n<= {threshold:.2f} cm"
# Draw decision text
decision_text = Text(node_text, color=WHITE)
# Draw the surrounding box
bounding_box = SurroundingRectangle(decision_text, buff=0.3, color=WHITE)
self.add(bounding_box)
self.add(decision_text)
class DecisionTreeDiagram(Group):
"""Decision Tree Diagram Class for Manim"""
def __init__(
self,
sklearn_tree,
feature_names=None,
class_names=None,
class_images_paths=None,
class_colors=[RED, GREEN, BLUE],
):
super().__init__()
self.tree = sklearn_tree
self.feature_names = feature_names
self.class_names = class_names
self.class_image_paths = class_images_paths
self.class_colors = class_colors
# Make graph container for the tree
self.tree_group, self.nodes_map, self.edge_map = self._make_tree()
self.add(self.tree_group)
def _make_node(
self,
node_index,
):
"""Make node"""
is_split_node = (
self.tree.children_left[node_index] != self.tree.children_right[node_index]
)
if is_split_node:
node_feature = self.tree.feature[node_index]
node_threshold = self.tree.threshold[node_index]
node = SplitNode(self.feature_names[node_feature], node_threshold)
else:
# Get the most abundant class for the given leaf node
# Make the leaf node object
tree_class_index = np.argmax(self.tree.value[node_index])
node = LeafNode(
class_index=tree_class_index,
class_colors=self.class_colors,
class_image_paths=self.class_image_paths,
)
return node
def _make_connection(self, top, bottom, is_leaf=False):
"""Make a connection from top to bottom"""
top_node_bottom_location = top.get_center()
top_node_bottom_location[1] -= top.height / 2
bottom_node_top_location = bottom.get_center()
bottom_node_top_location[1] += bottom.height / 2
line = Line(top_node_bottom_location, bottom_node_top_location, color=WHITE)
return line
def _make_tree(self):
"""Construct the tree diagram"""
tree_group = Group()
max_depth = self.tree.max_depth
# Make the root node
nodes_map = {}
root_node = self._make_node(
node_index=0,
)
nodes_map[0] = root_node
tree_group.add(root_node)
# Save some information
node_height = root_node.height
node_width = root_node.width
scale_factor = 1.0
edge_map = {}
# tree height
tree_height = scale_factor * node_height * max_depth
tree_width = scale_factor * 2**max_depth * node_width
# traverse tree
def recurse(node_index, depth, direction, parent_object, parent_node):
# make the node object
is_leaf = (
self.tree.children_left[node_index]
== self.tree.children_right[node_index]
)
node_object = self._make_node(node_index=node_index)
nodes_map[node_index] = node_object
node_height = node_object.height
# set the node position
direction_factor = -1 if direction == "left" else 1
shift_right_amount = (
0.9 * direction_factor * scale_factor * tree_width / (2**depth) / 2
)
if is_leaf:
shift_down_amount = -1.0 * scale_factor * node_height
else:
shift_down_amount = -1.8 * scale_factor * node_height
node_object.match_x(parent_object).match_y(parent_object).shift(
[shift_right_amount, shift_down_amount, 0]
)
tree_group.add(node_object)
# make a connection
connection = self._make_connection(
parent_object, node_object, is_leaf=is_leaf
)
edge_name = str(parent_node) + "," + str(node_index)
edge_map[edge_name] = connection
tree_group.add(connection)
# recurse
if not is_leaf:
recurse(
self.tree.children_left[node_index],
depth + 1,
"left",
node_object,
node_index,
)
recurse(
self.tree.children_right[node_index],
depth + 1,
"right",
node_object,
node_index,
)
recurse(self.tree.children_left[0], 1, "left", root_node, 0)
recurse(self.tree.children_right[0], 1, "right", root_node, 0)
tree_group.scale(0.35)
return tree_group, nodes_map, edge_map
def create_level_order_expansion_decision_tree(self, tree):
"""Expands the decision tree in level order"""
raise NotImplementedError()
def create_bfs_expansion_decision_tree(self, tree):
"""Expands the tree using BFS"""
animations = []
split_node_animations = {} # Dictionary mapping split node to animation
# Compute parent mapping
parent_mapping = helpers.compute_node_to_parent_mapping(self.tree)
# Create the root node as most common class
placeholder_class_nodes = {}
root_node_class_index = np.argmax(
self.tree.value[0]
)
root_placeholder_node = LeafNode(
class_index=root_node_class_index,
class_colors=self.class_colors,
class_image_paths=self.class_image_paths,
)
root_placeholder_node.move_to(self.nodes_map[0])
placeholder_class_nodes[0] = root_placeholder_node
root_create_animation = AnimationGroup(
FadeIn(root_placeholder_node),
lag_ratio=0.0
)
animations.append(root_create_animation)
# Iterate through the nodes
queue = [0]
while len(queue) > 0:
node_index = queue.pop(0)
# Check if a node is a split node or not
left_child_index = self.tree.children_left[node_index]
right_child_index = self.tree.children_right[node_index]
is_leaf_node = left_child_index == right_child_index
if not is_leaf_node:
# Remove the currently placeholder class node
fade_out_animation = FadeOut(
placeholder_class_nodes[node_index]
)
animations.append(fade_out_animation)
# Fade in the split node
fade_in_animation = FadeIn(
self.nodes_map[node_index]
)
animations.append(fade_in_animation)
# Split the node by creating the children and connecting them
# to the parent
# Handle left child
assert left_child_index in self.nodes_map.keys()
left_node = self.nodes_map[left_child_index]
left_parent_edge = self.edge_map[f"{node_index},{left_child_index}"]
# Get the children of the left node
left_node_left_index = self.tree.children_left[left_child_index]
left_node_right_index = self.tree.children_right[left_child_index]
left_is_leaf = left_node_left_index == left_node_right_index
if left_is_leaf:
# If a child is a leaf then just create it
left_animation = FadeIn(left_node)
else:
# If the child is a split node find the dominant class and make a temp
left_node_class_index = np.argmax(
self.tree.value[left_child_index]
)
new_leaf_node = LeafNode(
class_index=left_node_class_index,
class_colors=self.class_colors,
class_image_paths=self.class_image_paths,
)
                    new_leaf_node.move_to(self.nodes_map[left_child_index])
placeholder_class_nodes[left_child_index] = new_leaf_node
left_animation = AnimationGroup(
FadeIn(new_leaf_node),
Create(left_parent_edge),
lag_ratio=0.0
)
# Handle right child
assert right_child_index in self.nodes_map.keys()
right_node = self.nodes_map[right_child_index]
right_parent_edge = self.edge_map[f"{node_index},{right_child_index}"]
# Get the children of the left node
right_node_left_index = self.tree.children_left[right_child_index]
right_node_right_index = self.tree.children_right[right_child_index]
right_is_leaf = right_node_left_index == right_node_right_index
if right_is_leaf:
# If a child is a leaf then just create it
right_animation = FadeIn(right_node)
else:
# If the child is a split node find the dominant class and make a temp
right_node_class_index = np.argmax(
self.tree.value[right_child_index]
)
new_leaf_node = LeafNode(
class_index=right_node_class_index,
class_colors=self.class_colors,
class_image_paths=self.class_image_paths,
                    )
                    new_leaf_node.move_to(self.nodes_map[right_child_index])
                    placeholder_class_nodes[right_child_index] = new_leaf_node
right_animation = AnimationGroup(
FadeIn(new_leaf_node),
Create(right_parent_edge),
lag_ratio=0.0
)
# Combine the animations
split_animation = AnimationGroup(
left_animation,
right_animation,
lag_ratio=0.0,
)
animations.append(split_animation)
# Add the split animation to the split node dict
split_node_animations[node_index] = split_animation
# Add the children to the queue
if left_child_index != -1:
queue.append(left_child_index)
if right_child_index != -1:
queue.append(right_child_index)
return Succession(
*animations,
lag_ratio=1.0
), split_node_animations
def make_expand_tree_animation(self, node_expand_order):
"""
        Makes an animation for expanding the decision tree.
        Each split node is shown as a leaf node initially, and is
        revealed as a split node only when the traversal reaches it,
        so that each split can be animated alongside the corresponding
        split of a decision surface.
"""
# Show the root node as a leaf node
# Iterate through the nodes in the traversal order
for node_index in node_expand_order[1:]:
# Figure out if it is a leaf or not
# If it is not a leaf then remove the placeholder leaf node
# then show the split node
# If it is a leaf then just show the leaf node
pass
pass
@override_animation(Create)
def create_decision_tree(self, traversal_order="bfs"):
"""Makes a create animation for the decision tree"""
        # Compute the node expand order
if traversal_order == "level":
node_expand_order = helpers.compute_level_order_traversal(self.tree)
elif traversal_order == "bfs":
node_expand_order = helpers.compute_bfs_traversal(self.tree)
else:
raise Exception(f"Uncrecognized traversal: {traversal_order}")
# Make the animation
expand_tree_animation = self.make_expand_tree_animation(node_expand_order)
return expand_tree_animation
class DecisionTreeContainer():
"""Connects the DecisionTreeDiagram to the DecisionTreeEmbedding"""
def __init__(self, sklearn_tree, points, classes):
self.sklearn_tree = sklearn_tree
self.points = points
self.classes = classes
def make_unfold_tree_animation(self):
"""Unfolds the tree through an in order traversal
This animations unfolds the tree diagram as well as showing the splitting
of a shaded region in the Decision Tree embedding.
"""
# Draw points in the embedding
# Start the tree splitting animation
pass
|
ManimML_helblazer811/manim_ml/neural_network/neural_network.py
|
"""Neural Network Manim Visualization
This module is responsible for generating a neural network visualization with
manim, specifically a fully connected neural network diagram.
Example:
# Specify how many nodes are in each node layer
layer_node_count = [5, 3, 5]
# Create the object with default style settings
NeuralNetwork(layer_node_count)
"""
import textwrap
from manim_ml.neural_network.layers.embedding import EmbeddingLayer
from manim_ml.utils.mobjects.connections import NetworkConnection
import numpy as np
from manim import *
from manim_ml.neural_network.layers.parent_layers import ConnectiveLayer, ThreeDLayer
from manim_ml.neural_network.layers.util import get_connective_layer
from manim_ml.utils.mobjects.list_group import ListGroup
from manim_ml.neural_network.animations.neural_network_transformations import (
InsertLayer,
RemoveLayer,
)
import manim_ml
class NeuralNetwork(Group):
"""Neural Network Visualization Container Class"""
def __init__(
self,
input_layers,
layer_spacing=0.2,
animation_dot_color=manim_ml.config.color_scheme.active_color,
edge_width=2.5,
dot_radius=0.03,
title=" ",
layout="linear",
layout_direction="left_to_right",
debug_mode=False
):
super(Group, self).__init__()
self.input_layers_dict = self.make_input_layers_dict(input_layers)
self.input_layers = ListGroup(*self.input_layers_dict.values())
self.edge_width = edge_width
self.layer_spacing = layer_spacing
self.animation_dot_color = animation_dot_color
self.dot_radius = dot_radius
self.title_text = title
self.created = False
self.layout = layout
self.layout_direction = layout_direction
self.debug_mode = debug_mode
# TODO take layer_node_count [0, (1, 2), 0]
# and make it have explicit distinct subspaces
# Construct all of the layers
self._construct_input_layers()
# Place the layers
self._place_layers(layout=layout, layout_direction=layout_direction)
# Make the connective layers
self.connective_layers, self.all_layers = self._construct_connective_layers()
# Place the connective layers
self._place_connective_layers()
# Make overhead title
self.title = Text(
self.title_text,
font_size=DEFAULT_FONT_SIZE / 2
)
self.title.next_to(self, UP * self.layer_spacing, buff=0.25)
self.add(self.title)
# Place layers at correct z index
self.connective_layers.set_z_index(2)
self.input_layers.set_z_index(3)
# Center the whole diagram by default
self.all_layers.move_to(ORIGIN)
self.add(self.all_layers)
# Make container for connections
self.connections = []
# Print neural network
print(repr(self))
def make_input_layers_dict(self, input_layers):
"""Make dictionary of input layers"""
if isinstance(input_layers, dict):
# If input layers is dictionary then return it
return input_layers
elif isinstance(input_layers, list):
# If input layers is a list then make a dictionary with default
return_dict = {}
for layer_index, input_layer in enumerate(input_layers):
return_dict[f"layer{layer_index}"] = input_layer
return return_dict
else:
raise Exception(f"Uncrecognized input layers type: {type(input_layers)}")
def add_connection(
self,
start_mobject_or_name,
end_mobject_or_name,
connection_style="default",
connection_position="bottom",
arc_direction="down"
):
"""Add connection from start layer to end layer"""
assert connection_style in ["default"]
if connection_style == "default":
# Make arrow connection from start layer to end layer
# Add the connection
if isinstance(start_mobject_or_name, Mobject):
input_mobject = start_mobject_or_name
else:
input_mobject = self.input_layers_dict[start_mobject_or_name]
if isinstance(end_mobject_or_name, Mobject):
output_mobject = end_mobject_or_name
else:
output_mobject = self.input_layers_dict[end_mobject_or_name]
connection = NetworkConnection(
input_mobject,
output_mobject,
arc_direction=arc_direction,
buffer=0.05
)
self.connections.append(connection)
self.add(connection)
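    # Usage sketch for add_connection. The string keys are the default
    # "layer{i}" names generated by make_input_layers_dict; a skip
    # connection from the first to the third layer might look like:
    #
    #   nn.add_connection("layer0", "layer2", arc_direction="straight")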
def _construct_input_layers(self):
"""Constructs each of the input layers in context
of their adjacent layers"""
prev_layer = None
next_layer = None
# Go through all the input layers and run their construct method
print("Constructing layers")
for layer_index in range(len(self.input_layers)):
current_layer = self.input_layers[layer_index]
print(f"Current layer: {current_layer}")
if layer_index < len(self.input_layers) - 1:
next_layer = self.input_layers[layer_index + 1]
if layer_index > 0:
prev_layer = self.input_layers[layer_index - 1]
# Run the construct layer method for each
current_layer.construct_layer(
prev_layer,
next_layer,
debug_mode=self.debug_mode
)
def _place_layers(
self,
layout="linear",
layout_direction="top_to_bottom"
):
"""Creates the neural network"""
# TODO implement more sophisticated custom layouts
# Default: Linear layout
for layer_index in range(1, len(self.input_layers)):
previous_layer = self.input_layers[layer_index - 1]
current_layer = self.input_layers[layer_index]
current_layer.move_to(previous_layer.get_center())
if layout_direction == "left_to_right":
x_shift = (
previous_layer.get_width() / 2
+ current_layer.get_width() / 2
+ self.layer_spacing
)
shift_vector = np.array([x_shift, 0, 0])
elif layout_direction == "top_to_bottom":
                y_shift = -(
                    (previous_layer.get_height() / 2 + current_layer.get_height() / 2)
                    + self.layer_spacing
                )
shift_vector = np.array([0, y_shift, 0])
else:
raise Exception(f"Unrecognized layout direction: {layout_direction}")
current_layer.shift(shift_vector)
# After all layers have been placed place their activation functions
layer_max_height = max([layer.get_height() for layer in self.input_layers])
for current_layer in self.input_layers:
# Place activation function
if hasattr(current_layer, "activation_function"):
                if current_layer.activation_function is not None:
# Get max height of layer
up_movement = np.array([
0,
layer_max_height / 2
+ current_layer.activation_function.get_height() / 2
+ 0.5 * self.layer_spacing,
0,
])
current_layer.activation_function.move_to(
current_layer,
)
current_layer.activation_function.match_y(
self.input_layers[0]
)
current_layer.activation_function.shift(
up_movement
)
self.add(
current_layer.activation_function
)
def _construct_connective_layers(self):
"""Draws connecting lines between layers"""
connective_layers = ListGroup()
all_layers = ListGroup()
for layer_index in range(len(self.input_layers) - 1):
current_layer = self.input_layers[layer_index]
# Add the layer to the list of layers
all_layers.add(current_layer)
next_layer = self.input_layers[layer_index + 1]
# Check if layer is actually a nested NeuralNetwork
if isinstance(current_layer, NeuralNetwork):
# Last layer of the current layer
current_layer = current_layer.all_layers[-1]
if isinstance(next_layer, NeuralNetwork):
# First layer of the next layer
next_layer = next_layer.all_layers[0]
# Find connective layer with correct layer pair
connective_layer = get_connective_layer(current_layer, next_layer)
connective_layers.add(connective_layer)
# Construct the connective layer
connective_layer.construct_layer(current_layer, next_layer)
# Add the layer to the list of layers
all_layers.add(connective_layer)
# Add final layer
all_layers.add(self.input_layers[-1])
# Handle layering
return connective_layers, all_layers
def _place_connective_layers(self):
"""Places the connective layers
"""
# Place each of the connective layers halfway between the adjacent layers
for connective_layer in self.connective_layers:
layer_midpoint = (
connective_layer.input_layer.get_center() +
connective_layer.output_layer.get_center()
) / 2
connective_layer.move_to(layer_midpoint)
def insert_layer(self, layer, insert_index):
"""Inserts a layer at the given index"""
neural_network = self
insert_animation = InsertLayer(layer, insert_index, neural_network)
return insert_animation
def remove_layer(self, layer):
"""Removes layer object if it exists"""
neural_network = self
return RemoveLayer(layer, neural_network, layer_spacing=self.layer_spacing)
    def replace_layer(self, old_layer, new_layer):
        """Replaces given layer object"""
        # TODO: implement by chaining the remove and insert animations,
        # roughly: FadeOut(old_layer) followed by FadeIn(new_layer) with
        # lag_ratio=1.0, via self.remove_layer and self.insert_layer.
        raise NotImplementedError()
def make_forward_pass_animation(
self,
run_time=None,
passing_flash=True,
layer_args={},
per_layer_animations=False,
**kwargs
):
"""Generates an animation for feed forward propagation"""
all_animations = []
per_layer_animation_map = {}
per_layer_runtime = (
            run_time / len(self.all_layers) if run_time is not None else None
)
for layer_index, layer in enumerate(self.all_layers):
# Get the layer args
if isinstance(layer, ConnectiveLayer):
"""
NOTE: By default a connective layer will get the combined
layer_args of the layers it is connecting and itself.
"""
before_layer_args = {}
current_layer_args = {}
after_layer_args = {}
if layer.input_layer in layer_args:
before_layer_args = layer_args[layer.input_layer]
if layer in layer_args:
current_layer_args = layer_args[layer]
if layer.output_layer in layer_args:
after_layer_args = layer_args[layer.output_layer]
# Merge the two dicts
current_layer_args = {
**before_layer_args,
**current_layer_args,
**after_layer_args,
}
else:
current_layer_args = {}
if layer in layer_args:
current_layer_args = layer_args[layer]
# Perform the forward pass of the current layer
layer_forward_pass = layer.make_forward_pass_animation(
layer_args=current_layer_args, run_time=per_layer_runtime, **kwargs
)
# Animate a forward pass for incoming connections
connection_input_pass = AnimationGroup()
for connection in self.connections:
if isinstance(layer, ConnectiveLayer):
output_layer = layer.output_layer
if connection.end_mobject == output_layer:
connection_input_pass = ShowPassingFlash(
connection,
run_time=layer_forward_pass.run_time,
time_width=0.2,
)
break
layer_forward_pass = AnimationGroup(
layer_forward_pass,
connection_input_pass,
lag_ratio=0.0
)
all_animations.append(layer_forward_pass)
# Add the animation to per layer animation
per_layer_animation_map[layer] = layer_forward_pass
# Make the animation group
animation_group = Succession(*all_animations, lag_ratio=1.0)
if per_layer_animations:
return per_layer_animation_map
else:
return animation_group
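    # Usage sketch: the forward pass can be retrieved per layer (the map is
    # keyed by the layer objects themselves) or as a single Succession, and
    # run_time is split evenly across layers:
    #
    #   animation_map = nn.make_forward_pass_animation(
    #       run_time=5, per_layer_animations=True
    #   )
    #   # animation_map[layer] is that layer's AnimationGroup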
@override_animation(Create)
def _create_override(self, **kwargs):
"""Overrides Create animation"""
# Stop the neural network from being created twice
if self.created:
return AnimationGroup()
self.created = True
animations = []
# Create the overhead title
animations.append(Create(self.title))
# Create each layer one by one
for layer in self.all_layers:
layer_animation = Create(layer)
# Make titles
create_title = Create(layer.title)
# Create layer animation group
animation_group = AnimationGroup(layer_animation, create_title)
animations.append(animation_group)
animation_group = AnimationGroup(*animations, lag_ratio=1.0)
return animation_group
def set_z_index(self, z_index_value: float, family=False):
"""Overriden set_z_index"""
# Setting family=False stops sub-neural networks from inheriting parent z_index
for layer in self.all_layers:
            if not isinstance(layer, NeuralNetwork):
                layer.set_z_index(z_index_value)
def scale(self, scale_factor, **kwargs):
"""Overriden scale"""
prior_center = self.get_center()
for layer in self.all_layers:
layer.scale(scale_factor, **kwargs)
# Place layers with scaled spacing
self.layer_spacing *= scale_factor
# self.connective_layers, self.all_layers = self._construct_connective_layers()
self._place_layers(layout=self.layout, layout_direction=self.layout_direction)
self._place_connective_layers()
# Scale the title
self.remove(self.title)
self.title.scale(scale_factor, **kwargs)
self.title.next_to(self, UP, buff=0.25 * scale_factor)
self.add(self.title)
self.move_to(prior_center)
def filter_layers(self, function):
"""Filters layers of the network given function"""
layers_to_return = []
for layer in self.all_layers:
func_out = function(layer)
assert isinstance(
func_out, bool
), "Filter layers function returned a non-boolean type."
if func_out:
layers_to_return.append(layer)
return layers_to_return
def __repr__(self, metadata=["z_index", "title_text"]):
"""Print string representation of layers"""
inner_string = ""
for layer in self.all_layers:
inner_string += f"{repr(layer)}("
for key in metadata:
value = getattr(layer, key)
if not value is "":
inner_string += f"{key}={value}, "
inner_string += "),\n"
inner_string = textwrap.indent(inner_string, " ")
string_repr = "NeuralNetwork([\n" + inner_string + "])"
return string_repr
|
ManimML_helblazer811/manim_ml/neural_network/__init__.py
|
from manim_ml.neural_network.neural_network import NeuralNetwork
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.layers.convolutional_2d_to_convolutional_2d import (
Convolutional2DToConvolutional2D,
)
from manim_ml.neural_network.layers.convolutional_2d_to_feed_forward import (
Convolutional2DToFeedForward,
)
from manim_ml.neural_network.layers.convolutional_2d_to_max_pooling_2d import (
Convolutional2DToMaxPooling2D,
)
from manim_ml.neural_network.layers.convolutional_2d import Convolutional2DLayer
from manim_ml.neural_network.layers.embedding_to_feed_forward import (
EmbeddingToFeedForward,
)
from manim_ml.neural_network.layers.embedding import EmbeddingLayer
from manim_ml.neural_network.layers.feed_forward_to_embedding import (
FeedForwardToEmbedding,
)
from manim_ml.neural_network.layers.feed_forward_to_feed_forward import (
FeedForwardToFeedForward,
)
from manim_ml.neural_network.layers.feed_forward_to_image import FeedForwardToImage
from manim_ml.neural_network.layers.feed_forward_to_vector import FeedForwardToVector
from manim_ml.neural_network.layers.image_to_convolutional_2d import (
ImageToConvolutional2DLayer,
)
from manim_ml.neural_network.layers.image_to_feed_forward import ImageToFeedForward
from manim_ml.neural_network.layers.image import ImageLayer
from manim_ml.neural_network.layers.max_pooling_2d_to_convolutional_2d import (
MaxPooling2DToConvolutional2D,
)
from manim_ml.neural_network.layers.max_pooling_2d import MaxPooling2DLayer
from manim_ml.neural_network.layers.paired_query_to_feed_forward import (
PairedQueryToFeedForward,
)
from manim_ml.neural_network.layers.paired_query import PairedQueryLayer
from manim_ml.neural_network.layers.triplet_to_feed_forward import TripletToFeedForward
from manim_ml.neural_network.layers.triplet import TripletLayer
from manim_ml.neural_network.layers.vector import VectorLayer
from manim_ml.neural_network.layers.math_operation_layer import MathOperationLayer
|
ManimML_helblazer811/manim_ml/neural_network/layers/math_operation_layer.py
|
from manim import *
from manim_ml.neural_network.activation_functions import get_activation_function_by_name
from manim_ml.neural_network.activation_functions.activation_function import (
ActivationFunction,
)
from manim_ml.neural_network.layers.parent_layers import VGroupNeuralNetworkLayer
class MathOperationLayer(VGroupNeuralNetworkLayer):
"""Handles rendering a layer for a neural network"""
valid_operations = ["+", "-", "*", "/"]
def __init__(
self,
operation_type: str,
node_radius=0.5,
node_color=BLUE,
node_stroke_width=2.0,
active_color=ORANGE,
activation_function=None,
font_size=20,
**kwargs
):
super(VGroupNeuralNetworkLayer, self).__init__(**kwargs)
# Ensure operation type is valid
assert operation_type in MathOperationLayer.valid_operations
self.operation_type = operation_type
self.node_radius = node_radius
self.node_color = node_color
self.node_stroke_width = node_stroke_width
self.active_color = active_color
self.font_size = font_size
self.activation_function = activation_function
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs
):
"""Creates the neural network layer"""
# Draw the operation
self.operation_text = Text(
self.operation_type,
font_size=self.font_size
)
self.add(self.operation_text)
# Make the surrounding circle
self.surrounding_circle = Circle(
color=self.node_color,
stroke_width=self.node_stroke_width
).surround(self.operation_text)
self.add(self.surrounding_circle)
# Make the activation function
self.construct_activation_function()
super().construct_layer(input_layer, output_layer, **kwargs)
def construct_activation_function(self):
"""Construct the activation function"""
# Add the activation function
        if self.activation_function is not None:
# Check if it is a string
if isinstance(self.activation_function, str):
activation_function = get_activation_function_by_name(
self.activation_function
)()
else:
assert isinstance(self.activation_function, ActivationFunction)
activation_function = self.activation_function
# Plot the function above the rest of the layer
self.activation_function = activation_function
self.add(self.activation_function)
def make_forward_pass_animation(self, layer_args={}, **kwargs):
"""Makes the forward pass animation
Parameters
----------
layer_args : dict, optional
layer specific arguments, by default {}
Returns
-------
AnimationGroup
Forward pass animation
"""
# Make highlight animation
succession = Succession(
ApplyMethod(
self.surrounding_circle.set_color,
self.active_color,
run_time=0.25
),
Wait(1.0),
ApplyMethod(
self.surrounding_circle.set_color,
self.node_color,
run_time=0.25
),
)
# Animate the activation function
        if self.activation_function is not None:
animation_group = AnimationGroup(
succession,
self.activation_function.make_evaluate_animation(),
lag_ratio=0.0,
)
return animation_group
else:
return succession
def get_center(self):
return self.surrounding_circle.get_center()
def get_left(self):
return self.surrounding_circle.get_left()
def get_right(self):
return self.surrounding_circle.get_right()
def move_to(self, mobject_or_point):
"""Moves the center of the layer to the given mobject or point"""
layer_center = self.surrounding_circle.get_center()
if isinstance(mobject_or_point, Mobject):
target_center = mobject_or_point.get_center()
else:
target_center = mobject_or_point
self.shift(target_center - layer_center)
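# Usage sketch: a residual-style "+" operation fed by a skip connection.
# The layer names and the "ReLU" activation string are illustrative
# assumptions:
#
#   nn = NeuralNetwork({
#       "feed_forward_1": FeedForwardLayer(3),
#       "feed_forward_2": FeedForwardLayer(3),
#       "sum_operation": MathOperationLayer("+", activation_function="ReLU"),
#   })
#   nn.add_connection("feed_forward_1", "sum_operation")
#   # In a Scene: self.play(nn.make_forward_pass_animation())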
|
ManimML_helblazer811/manim_ml/neural_network/layers/embedding.py
|
from manim import *
from manim_ml.utils.mobjects.probability import GaussianDistribution
from manim_ml.neural_network.layers.parent_layers import VGroupNeuralNetworkLayer
class EmbeddingLayer(VGroupNeuralNetworkLayer):
"""NeuralNetwork embedding object that can show probability distributions"""
def __init__(
self,
point_radius=0.02,
mean=np.array([0, 0]),
covariance=np.array([[1.0, 0], [0, 1.0]]),
dist_theme="gaussian",
paired_query_mode=False,
**kwargs
):
super(VGroupNeuralNetworkLayer, self).__init__(**kwargs)
self.mean = mean
self.covariance = covariance
self.gaussian_distributions = VGroup()
self.add(self.gaussian_distributions)
self.point_radius = point_radius
self.dist_theme = dist_theme
self.paired_query_mode = paired_query_mode
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs
):
self.axes = Axes(
tips=False,
x_length=0.8,
y_length=0.8,
x_range=(-1.4, 1.4),
y_range=(-1.8, 1.8),
x_axis_config={"include_ticks": False, "stroke_width": 0.0},
y_axis_config={"include_ticks": False, "stroke_width": 0.0},
)
self.add(self.axes)
self.axes.move_to(self.get_center())
# Make point cloud
self.point_cloud = self.construct_gaussian_point_cloud(
self.mean, self.covariance
)
self.add(self.point_cloud)
# Make latent distribution
self.latent_distribution = GaussianDistribution(
self.axes, mean=self.mean, cov=self.covariance
) # Use defaults
super().construct_layer(input_layer, output_layer, **kwargs)
def add_gaussian_distribution(self, gaussian_distribution):
"""Adds given GaussianDistribution to the list"""
self.gaussian_distributions.add(gaussian_distribution)
return Create(gaussian_distribution)
def remove_gaussian_distribution(self, gaussian_distribution):
"""Removes the given gaussian distribution from the embedding"""
for gaussian in self.gaussian_distributions:
if gaussian == gaussian_distribution:
self.gaussian_distributions.remove(gaussian_distribution)
return FadeOut(gaussian)
def sample_point_location_from_distribution(self):
"""Samples from the current latent distribution"""
mean = self.latent_distribution.mean
cov = self.latent_distribution.cov
point = np.random.multivariate_normal(mean, cov)
# Make dot at correct location
location = self.axes.coords_to_point(point[0], point[1])
return location
def get_distribution_location(self):
"""Returns mean of latent distribution in axes frame"""
return self.axes.coords_to_point(self.latent_distribution.mean)
def construct_gaussian_point_cloud(
self, mean, covariance, point_color=WHITE, num_points=400
):
"""Plots points sampled from a Gaussian with the given mean and covariance"""
# Sample points from a Gaussian
np.random.seed(5)
points = np.random.multivariate_normal(mean, covariance, num_points)
# Add each point to the axes
point_dots = VGroup()
for point in points:
point_location = self.axes.coords_to_point(*point)
dot = Dot(point_location, color=point_color, radius=self.point_radius / 2)
dot.set_z_index(-1)
point_dots.add(dot)
return point_dots
def make_forward_pass_animation(self, layer_args={}, **kwargs):
"""Forward pass animation"""
animations = []
if "triplet_args" in layer_args:
triplet_args = layer_args["triplet_args"]
positive_dist_args = triplet_args["positive_dist"]
negative_dist_args = triplet_args["negative_dist"]
anchor_dist_args = triplet_args["anchor_dist"]
# Create each dist
anchor_dist = GaussianDistribution(self.axes, **anchor_dist_args)
animations.append(Create(anchor_dist))
positive_dist = GaussianDistribution(self.axes, **positive_dist_args)
animations.append(Create(positive_dist))
negative_dist = GaussianDistribution(self.axes, **negative_dist_args)
animations.append(Create(negative_dist))
# Draw edges in between anchor and positive, anchor and negative
anchor_positive = Line(
anchor_dist.get_center(),
positive_dist.get_center(),
color=GOLD,
stroke_width=DEFAULT_STROKE_WIDTH / 2,
)
anchor_positive.set_z_index(3)
animations.append(Create(anchor_positive))
anchor_negative = Line(
anchor_dist.get_center(),
negative_dist.get_center(),
color=GOLD,
stroke_width=DEFAULT_STROKE_WIDTH / 2,
)
anchor_negative.set_z_index(3)
animations.append(Create(anchor_negative))
elif not self.paired_query_mode:
# Normal embedding mode
if "dist_args" in layer_args:
scale_factor = 1.0
if "scale_factor" in layer_args:
scale_factor = layer_args["scale_factor"]
self.latent_distribution = GaussianDistribution(
self.axes, **layer_args["dist_args"]
).scale(scale_factor)
else:
# Make ellipse object corresponding to the latent distribution
# self.latent_distribution = GaussianDistribution(
# self.axes,
# dist_theme=self.dist_theme,
# cov=np.array([[0.8, 0], [0.0, 0.8]])
# )
pass
# Create animation
create_distribution = Create(self.latent_distribution)
animations.append(create_distribution)
else:
# Paired Query Mode
assert "positive_dist_args" in layer_args
assert "negative_dist_args" in layer_args
positive_dist_args = layer_args["positive_dist_args"]
negative_dist_args = layer_args["negative_dist_args"]
# Handle logic for embedding a paired query into the embedding layer
positive_dist = GaussianDistribution(self.axes, **positive_dist_args)
self.gaussian_distributions.add(positive_dist)
negative_dist = GaussianDistribution(self.axes, **negative_dist_args)
self.gaussian_distributions.add(negative_dist)
animations.append(Create(positive_dist))
animations.append(Create(negative_dist))
animation_group = AnimationGroup(*animations)
return animation_group
@override_animation(Create)
def _create_override(self, **kwargs):
        # Grow the points one after another
point_animations = []
for point in self.point_cloud:
point_animations.append(GrowFromCenter(point))
point_animation = AnimationGroup(*point_animations, lag_ratio=1.0, run_time=2.5)
return point_animation
class NeuralNetworkEmbeddingTestScene(Scene):
    def construct(self):
        nne = EmbeddingLayer()
        # construct_layer builds the internal axes that the point cloud
        # and distribution below are plotted against
        nne.construct_layer(None, None)
        mean = np.array([0, 0])
        cov = np.array([[5.0, 1.0], [0.0, 1.0]])
        point_cloud = nne.construct_gaussian_point_cloud(mean, cov)
        nne.add(point_cloud)
        gaussian = GaussianDistribution(nne.axes, mean=mean, cov=cov)
        nne.add(gaussian)
        self.add(nne)
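# Usage sketch for triplet mode: the per-distribution kwargs are passed
# straight through to GaussianDistribution, so the means/covariances here
# are illustrative assumptions about that class's accepted arguments:
#
#   layer_args = {embedding_layer: {"triplet_args": {
#       "anchor_dist": {"mean": [0.0, 0.0], "cov": [[0.3, 0.0], [0.0, 0.3]]},
#       "positive_dist": {"mean": [0.5, 0.5], "cov": [[0.2, 0.0], [0.0, 0.2]]},
#       "negative_dist": {"mean": [-0.6, 0.4], "cov": [[0.2, 0.0], [0.0, 0.2]]},
#   }}}
#   # nn.make_forward_pass_animation(layer_args=layer_args)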
|
ManimML_helblazer811/manim_ml/neural_network/layers/paired_query.py
|
from manim import *
from manim_ml.neural_network.layers.parent_layers import NeuralNetworkLayer
from manim_ml.utils.mobjects.image import GrayscaleImageMobject, LabeledColorImage
import numpy as np
class PairedQueryLayer(NeuralNetworkLayer):
"""Paired Query Layer"""
def __init__(
self, positive, negative, stroke_width=5, font_size=18, spacing=0.5, **kwargs
):
super().__init__(**kwargs)
self.positive = positive
self.negative = negative
self.font_size = font_size
self.spacing = spacing
self.stroke_width = stroke_width
# Make the assets
self.assets = self.make_assets()
self.add(self.assets)
self.add(self.title)
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs
):
return super().construct_layer(input_layer, output_layer, **kwargs)
@classmethod
def from_paths(cls, positive_path, negative_path, grayscale=True, **kwargs):
"""Creates a query using the paths"""
# Load images from path
if grayscale:
positive = GrayscaleImageMobject.from_path(positive_path)
negative = GrayscaleImageMobject.from_path(negative_path)
else:
positive = ImageMobject(positive_path)
negative = ImageMobject(negative_path)
# Make the layer
query_layer = cls(positive, negative, **kwargs)
return query_layer
def make_assets(self):
"""
Constructs the assets needed for a query layer
"""
# Handle positive
positive_group = LabeledColorImage(
self.positive,
color=BLUE,
label="Positive",
font_size=self.font_size,
stroke_width=self.stroke_width,
)
# Handle negative
negative_group = LabeledColorImage(
self.negative,
color=RED,
label="Negative",
font_size=self.font_size,
stroke_width=self.stroke_width,
)
# Distribute the groups uniformly vertically
assets = Group(positive_group, negative_group)
assets.arrange(DOWN, buff=self.spacing)
return assets
@override_animation(Create)
def _create_override(self):
# TODO make Create animation that is custom
return FadeIn(self.assets)
def make_forward_pass_animation(self, layer_args={}, **kwargs):
"""Forward pass for query"""
return AnimationGroup()
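# Usage sketch (the image paths are placeholder assumptions):
#
#   paired_query = PairedQueryLayer.from_paths(
#       "positive_image.png", "negative_image.png", grayscale=False
#   )
#   nn = NeuralNetwork([paired_query, FeedForwardLayer(3)])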
|
ManimML_helblazer811/manim_ml/neural_network/layers/feed_forward_to_math_operation.py
|
from manim import *
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.layers.parent_layers import ConnectiveLayer
from manim_ml.neural_network.layers.math_operation_layer import MathOperationLayer
from manim_ml.utils.mobjects.connections import NetworkConnection
class FeedForwardToMathOperation(ConnectiveLayer):
"""Image Layer to FeedForward layer"""
input_class = FeedForwardLayer
output_class = MathOperationLayer
def __init__(
self,
input_layer,
output_layer,
active_color=ORANGE,
**kwargs
):
self.active_color = active_color
super().__init__(input_layer, output_layer, **kwargs)
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs
):
# Draw an arrow from the output of the feed forward layer to the
# input of the math operation layer
self.connection = NetworkConnection(
self.input_layer,
self.output_layer,
arc_direction="straight",
buffer=0.05
)
self.add(self.connection)
return super().construct_layer(input_layer, output_layer, **kwargs)
def make_forward_pass_animation(self, layer_args={}, **kwargs):
"""Makes dots diverge from the given location and move to the feed forward nodes decoder"""
# Make flashing pass animation on arrow
passing_flash = ShowPassingFlash(
self.connection.copy().set_color(self.active_color)
)
return passing_flash
|
ManimML_helblazer811/manim_ml/neural_network/layers/image_to_feed_forward.py
|
from manim import *
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.layers.image import ImageLayer
from manim_ml.neural_network.layers.parent_layers import ConnectiveLayer
class ImageToFeedForward(ConnectiveLayer):
"""Image Layer to FeedForward layer"""
input_class = ImageLayer
output_class = FeedForwardLayer
def __init__(
self,
input_layer,
output_layer,
animation_dot_color=RED,
dot_radius=0.05,
**kwargs
):
super().__init__(input_layer, output_layer, **kwargs)
self.animation_dot_color = animation_dot_color
self.dot_radius = dot_radius
self.feed_forward_layer = output_layer
self.image_layer = input_layer
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs
):
return super().construct_layer(input_layer, output_layer, **kwargs)
def make_forward_pass_animation(self, layer_args={}, **kwargs):
"""Makes dots diverge from the given location and move to the feed forward nodes decoder"""
animations = []
dots = []
image_mobject = self.image_layer.image_mobject
# Move the dots to the centers of each of the nodes in the FeedForwardLayer
image_location = image_mobject.get_center()
for node in self.feed_forward_layer.node_group:
new_dot = Dot(
image_location, radius=self.dot_radius, color=self.animation_dot_color
)
per_node_succession = Succession(
Create(new_dot),
new_dot.animate.move_to(node.get_center()),
)
animations.append(per_node_succession)
dots.append(new_dot)
animation_group = AnimationGroup(*animations)
return animation_group
@override_animation(Create)
def _create_override(self):
return AnimationGroup()
|
ManimML_helblazer811/manim_ml/neural_network/layers/vector.py
|
from manim import *
import random
from manim_ml.neural_network.layers.parent_layers import VGroupNeuralNetworkLayer
class VectorLayer(VGroupNeuralNetworkLayer):
"""Shows a vector"""
def __init__(self, num_values, value_func=lambda: random.uniform(0, 1), **kwargs):
super().__init__(**kwargs)
self.num_values = num_values
self.value_func = value_func
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs,
):
super().construct_layer(input_layer, output_layer, **kwargs)
# Make the vector
self.vector_label = self.make_vector()
self.add(self.vector_label)
def make_vector(self):
"""Makes the vector"""
        # TODO: render the full vector as a Matrix mobject once LaTeX is
        # available, e.g.
        #   values = np.array([self.value_func() for i in range(self.num_values)])
        #   vector = Matrix(values[None, :].T)
        # For now only a single sampled value is shown as a text label.
vector_label = Text(f"[{self.value_func():.2f}]")
vector_label.scale(0.3)
return vector_label
def make_forward_pass_animation(self, layer_args={}, **kwargs):
return AnimationGroup()
@override_animation(Create)
def _create_override(self):
"""Create animation"""
return Write(self.vector_label)
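# --- Usage sketch (illustrative; not part of the original file, save as a
# standalone script). VectorLayer composes into a NeuralNetwork like any other
# layer; the FeedForwardToVector connective then animates dots from the nodes
# onto the vector label. Layer sizes are arbitrary; importing NeuralNetwork
# from manim_ml.neural_network follows the repo README.
from manim import *
from manim_ml.neural_network import NeuralNetwork, FeedForwardLayer
from manim_ml.neural_network.layers.vector import VectorLayer

class VectorOutputScene(Scene):
    def construct(self):
        nn = NeuralNetwork([
            FeedForwardLayer(num_nodes=4),
            VectorLayer(num_values=2),
        ])
        self.add(nn)
        self.play(nn.make_forward_pass_animation())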
|
ManimML_helblazer811/manim_ml/neural_network/layers/__init__.py
|
from manim_ml.neural_network.layers.convolutional_2d_to_feed_forward import (
Convolutional2DToFeedForward,
)
from manim_ml.neural_network.layers.convolutional_2d_to_max_pooling_2d import (
Convolutional2DToMaxPooling2D,
)
from manim_ml.neural_network.layers.image_to_convolutional_2d import (
ImageToConvolutional2DLayer,
)
from manim_ml.neural_network.layers.max_pooling_2d_to_convolutional_2d import (
MaxPooling2DToConvolutional2D,
)
from manim_ml.neural_network.layers.max_pooling_2d_to_feed_forward import (
MaxPooling2DToFeedForward,
)
from .convolutional_2d_to_convolutional_2d import Convolutional2DToConvolutional2D
from .convolutional_2d import Convolutional2DLayer
from .feed_forward_to_vector import FeedForwardToVector
from .paired_query_to_feed_forward import PairedQueryToFeedForward
from .embedding_to_feed_forward import EmbeddingToFeedForward
from .embedding import EmbeddingLayer
from .feed_forward_to_embedding import FeedForwardToEmbedding
from .feed_forward_to_feed_forward import FeedForwardToFeedForward
from .feed_forward_to_image import FeedForwardToImage
from .feed_forward import FeedForwardLayer
from .image_to_feed_forward import ImageToFeedForward
from .image import ImageLayer
from .parent_layers import ConnectiveLayer, NeuralNetworkLayer
from .triplet import TripletLayer
from .triplet_to_feed_forward import TripletToFeedForward
from .paired_query import PairedQueryLayer
from .max_pooling_2d import MaxPooling2DLayer
from .feed_forward_to_math_operation import FeedForwardToMathOperation
connective_layers_list = (
EmbeddingToFeedForward,
FeedForwardToEmbedding,
FeedForwardToFeedForward,
FeedForwardToImage,
ImageToFeedForward,
PairedQueryToFeedForward,
TripletToFeedForward,
FeedForwardToVector,
Convolutional2DToConvolutional2D,
ImageToConvolutional2DLayer,
Convolutional2DToFeedForward,
Convolutional2DToMaxPooling2D,
MaxPooling2DToConvolutional2D,
MaxPooling2DToFeedForward,
FeedForwardToMathOperation
)
|
ManimML_helblazer811/manim_ml/neural_network/layers/util.py
|
import warnings
from manim import *
from manim_ml.neural_network.layers.parent_layers import BlankConnective, ThreeDLayer
from manim_ml.neural_network.layers import connective_layers_list
def get_connective_layer(input_layer, output_layer):
"""
Deduces the relevant connective layer
"""
connective_layer_class = None
for candidate_class in connective_layers_list:
input_class = candidate_class.input_class
output_class = candidate_class.output_class
if isinstance(input_layer, input_class) and isinstance(
output_layer, output_class
):
connective_layer_class = candidate_class
break
if connective_layer_class is None:
connective_layer_class = BlankConnective
warnings.warn(
f"Unrecognized input/output class pair: {input_layer} and {output_layer}"
)
# Make the instance now
connective_layer = connective_layer_class(input_layer, output_layer)
return connective_layer
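# --- Usage sketch (illustrative; not part of the original file). The deduction
# logic in isolation: two FeedForwardLayers resolve to FeedForwardToFeedForward
# via the candidates' input_class/output_class attributes, while an unsupported
# pair falls back to BlankConnective with a warning. Layer sizes are arbitrary.
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer

if __name__ == "__main__":
    layer_a = FeedForwardLayer(num_nodes=3)
    layer_b = FeedForwardLayer(num_nodes=5)
    connective = get_connective_layer(layer_a, layer_b)
    print(type(connective).__name__)  # FeedForwardToFeedForward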
|
ManimML_helblazer811/manim_ml/neural_network/layers/max_pooling_2d_to_feed_forward.py
|
from manim import *
from manim_ml.neural_network.layers.convolutional_2d_to_feed_forward import (
Convolutional2DToFeedForward,
)
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.layers.max_pooling_2d import MaxPooling2DLayer
class MaxPooling2DToFeedForward(Convolutional2DToFeedForward):
"""Feed Forward to Embedding Layer"""
input_class = MaxPooling2DLayer
output_class = FeedForwardLayer
    def __init__(
        self,
        input_layer: MaxPooling2DLayer,
        output_layer: FeedForwardLayer,
        passing_flash_color=ORANGE,
        **kwargs
    ):
        super().__init__(input_layer, output_layer, **kwargs)
        # Store the color instead of silently discarding the argument
        self.passing_flash_color = passing_flash_color
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs
):
return super().construct_layer(input_layer, output_layer, **kwargs)
|
ManimML_helblazer811/manim_ml/neural_network/layers/feed_forward_to_image.py
|
from manim import *
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.layers.image import ImageLayer
from manim_ml.neural_network.layers.parent_layers import ConnectiveLayer
class FeedForwardToImage(ConnectiveLayer):
"""Image Layer to FeedForward layer"""
input_class = FeedForwardLayer
output_class = ImageLayer
def __init__(
self,
input_layer,
output_layer,
animation_dot_color=RED,
dot_radius=0.05,
**kwargs
):
super().__init__(input_layer, output_layer, **kwargs)
self.animation_dot_color = animation_dot_color
self.dot_radius = dot_radius
self.feed_forward_layer = input_layer
self.image_layer = output_layer
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs
):
return super().construct_layer(input_layer, output_layer, **kwargs)
def make_forward_pass_animation(self, layer_args={}, **kwargs):
"""Makes dots diverge from the given location and move to the feed forward nodes decoder"""
animations = []
image_mobject = self.image_layer.image_mobject
        # Move dots from the centers of the feed forward nodes to the image
image_location = image_mobject.get_center()
for node in self.feed_forward_layer.node_group:
new_dot = Dot(
node.get_center(),
radius=self.dot_radius,
color=self.animation_dot_color,
)
per_node_succession = Succession(
Create(new_dot),
new_dot.animate.move_to(image_location),
)
animations.append(per_node_succession)
animation_group = AnimationGroup(*animations)
return animation_group
@override_animation(Create)
def _create_override(self):
return AnimationGroup()
|
ManimML_helblazer811/manim_ml/neural_network/layers/feed_forward.py
|
from manim import *
from manim_ml.neural_network.activation_functions import get_activation_function_by_name
from manim_ml.neural_network.activation_functions.activation_function import (
ActivationFunction,
)
from manim_ml.neural_network.layers.parent_layers import VGroupNeuralNetworkLayer
import manim_ml
class FeedForwardLayer(VGroupNeuralNetworkLayer):
"""Handles rendering a layer for a neural network"""
def __init__(
self,
num_nodes,
layer_buffer=SMALL_BUFF / 2,
node_radius=0.08,
node_color=manim_ml.config.color_scheme.primary_color,
node_outline_color=manim_ml.config.color_scheme.secondary_color,
rectangle_color=manim_ml.config.color_scheme.secondary_color,
node_spacing=0.3,
rectangle_fill_color=manim_ml.config.color_scheme.background_color,
node_stroke_width=2.0,
rectangle_stroke_width=2.0,
animation_dot_color=manim_ml.config.color_scheme.active_color,
activation_function=None,
**kwargs
):
super(VGroupNeuralNetworkLayer, self).__init__(**kwargs)
self.num_nodes = num_nodes
self.layer_buffer = layer_buffer
self.node_radius = node_radius
self.node_color = node_color
self.node_stroke_width = node_stroke_width
self.node_outline_color = node_outline_color
self.rectangle_stroke_width = rectangle_stroke_width
self.rectangle_color = rectangle_color
self.node_spacing = node_spacing
self.rectangle_fill_color = rectangle_fill_color
self.animation_dot_color = animation_dot_color
self.activation_function = activation_function
self.node_group = VGroup()
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs
):
"""Creates the neural network layer"""
# Add Nodes
for node_number in range(self.num_nodes):
node_object = Circle(
radius=self.node_radius,
color=self.node_color,
stroke_width=self.node_stroke_width,
)
self.node_group.add(node_object)
# Space the nodes
# Assumes Vertical orientation
for node_index, node_object in enumerate(self.node_group):
location = node_index * self.node_spacing
node_object.move_to([0, location, 0])
# Create Surrounding Rectangle
self.surrounding_rectangle = SurroundingRectangle(
self.node_group,
color=self.rectangle_color,
fill_color=self.rectangle_fill_color,
fill_opacity=1.0,
buff=self.layer_buffer,
stroke_width=self.rectangle_stroke_width,
)
self.surrounding_rectangle.set_z_index(1)
# Add the objects to the class
self.add(self.surrounding_rectangle, self.node_group)
self.construct_activation_function()
super().construct_layer(input_layer, output_layer, **kwargs)
def construct_activation_function(self):
"""Construct the activation function"""
# Add the activation function
        if self.activation_function is not None:
# Check if it is a string
if isinstance(self.activation_function, str):
activation_function = get_activation_function_by_name(
self.activation_function
)()
else:
assert isinstance(self.activation_function, ActivationFunction)
activation_function = self.activation_function
# Plot the function above the rest of the layer
self.activation_function = activation_function
self.add(self.activation_function)
def make_dropout_forward_pass_animation(self, layer_args, **kwargs):
"""Makes a forward pass animation with dropout"""
# Make sure proper dropout information was passed
assert "dropout_node_indices" in layer_args
dropout_node_indices = layer_args["dropout_node_indices"]
        # Only highlight nodes that were not dropped out
        nodes_to_highlight = []
        for index, node in enumerate(self.node_group):
            if index not in dropout_node_indices:
nodes_to_highlight.append(node)
nodes_to_highlight = VGroup(*nodes_to_highlight)
# Make highlight animation
succession = Succession(
ApplyMethod(
nodes_to_highlight.set_color, self.animation_dot_color, run_time=0.25
),
Wait(1.0),
ApplyMethod(nodes_to_highlight.set_color, self.node_color, run_time=0.25),
)
return succession
def make_forward_pass_animation(self, layer_args={}, **kwargs):
        # Check whether dropout was requested for this layer
if "dropout_node_indices" in layer_args:
# Drop out certain nodes
return self.make_dropout_forward_pass_animation(
layer_args=layer_args, **kwargs
)
else:
# Make highlight animation
succession = Succession(
ApplyMethod(
self.node_group.set_color, self.animation_dot_color, run_time=0.25
),
Wait(1.0),
ApplyMethod(self.node_group.set_color, self.node_color, run_time=0.25),
)
            if self.activation_function is not None:
animation_group = AnimationGroup(
succession,
self.activation_function.make_evaluate_animation(),
lag_ratio=0.0,
)
return animation_group
else:
return succession
@override_animation(Create)
def _create_override(self, **kwargs):
animations = []
animations.append(Create(self.surrounding_rectangle))
for node in self.node_group:
animations.append(Create(node))
animation_group = AnimationGroup(*animations, lag_ratio=0.0)
return animation_group
def get_height(self):
return self.surrounding_rectangle.get_height()
def get_center(self):
return self.surrounding_rectangle.get_center()
def get_left(self):
return self.surrounding_rectangle.get_left()
def get_right(self):
return self.surrounding_rectangle.get_right()
def move_to(self, mobject_or_point):
"""Moves the center of the layer to the given mobject or point"""
layer_center = self.surrounding_rectangle.get_center()
if isinstance(mobject_or_point, Mobject):
target_center = mobject_or_point.get_center()
else:
target_center = mobject_or_point
self.shift(target_center - layer_center)
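# --- Usage sketch (illustrative; not part of the original file, save as a
# standalone script). The canonical use of FeedForwardLayer, mirroring the repo
# README: stack layers in a NeuralNetwork and play the forward pass, which runs
# the highlight Succession defined above for each layer. The "ReLU" string is
# resolved through get_activation_function_by_name.
from manim import *
from manim_ml.neural_network import NeuralNetwork, FeedForwardLayer

class BasicNetworkScene(Scene):
    def construct(self):
        nn = NeuralNetwork([
            FeedForwardLayer(num_nodes=3),
            FeedForwardLayer(num_nodes=5, activation_function="ReLU"),
            FeedForwardLayer(num_nodes=3),
        ])
        self.add(nn)
        self.play(nn.make_forward_pass_animation())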
|
ManimML_helblazer811/manim_ml/neural_network/layers/parent_layers.py
|
from manim import *
from abc import ABC, abstractmethod
class NeuralNetworkLayer(ABC, Group):
"""Abstract Neural Network Layer class"""
def __init__(self, text=None, *args, **kwargs):
super(Group, self).__init__()
self.title_text = kwargs["title"] if "title" in kwargs else " "
if "title" in kwargs:
self.title = Text(self.title_text, font_size=DEFAULT_FONT_SIZE // 3).scale(0.6)
self.title.next_to(self, UP, 1.2)
else:
self.title = Group()
# self.add(self.title)
@abstractmethod
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs,
):
"""Constructs the layer at network construction time
Parameters
----------
input_layer : NeuralNetworkLayer
preceding layer
output_layer : NeuralNetworkLayer
following layer
"""
if "debug_mode" in kwargs and kwargs["debug_mode"]:
self.add(SurroundingRectangle(self))
@abstractmethod
def make_forward_pass_animation(self, layer_args={}, **kwargs):
pass
@override_animation(Create)
def _create_override(self):
return Succession()
def __repr__(self):
return f"{type(self).__name__}"
class VGroupNeuralNetworkLayer(NeuralNetworkLayer):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# self.camera = camera
@abstractmethod
def make_forward_pass_animation(self, **kwargs):
pass
@override_animation(Create)
def _create_override(self):
return super()._create_override()
class ThreeDLayer(ABC):
"""Abstract class for 3D layers"""
    # The viewing angle of ThreeD layers is shared static context
    # (see manim_ml.config.three_d_config)
    pass
class ConnectiveLayer(VGroupNeuralNetworkLayer):
"""Forward pass animation for a given pair of layers"""
@abstractmethod
def __init__(self, input_layer, output_layer, **kwargs):
super(VGroupNeuralNetworkLayer, self).__init__(**kwargs)
self.input_layer = input_layer
self.output_layer = output_layer
# Handle input and output class
# assert isinstance(input_layer, self.input_class), f"{input_layer}, {self.input_class}"
# assert isinstance(output_layer, self.output_class), f"{output_layer}, {self.output_class}"
@abstractmethod
def make_forward_pass_animation(self, run_time=2.0, layer_args={}, **kwargs):
pass
@override_animation(Create)
def _create_override(self):
return super()._create_override()
def __repr__(self):
return (
f"{self.__class__.__name__}("
+ f"input_layer={self.input_layer.__class__.__name__},"
+ f"output_layer={self.output_layer.__class__.__name__},"
+ ")"
)
class BlankConnective(ConnectiveLayer):
"""Connective layer to be used when the given pair of layers is undefined"""
def __init__(self, input_layer, output_layer, **kwargs):
super().__init__(input_layer, output_layer, **kwargs)
def make_forward_pass_animation(self, run_time=1.5, layer_args={}, **kwargs):
return AnimationGroup(run_time=run_time)
@override_animation(Create)
def _create_override(self):
return super()._create_override()
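# --- Usage sketch (illustrative; not part of the original file). A minimal
# custom layer built on these parent classes: a concrete subclass implements
# construct_layer (called at network construction time) and
# make_forward_pass_animation. The square and the Indicate animation are
# arbitrary choices for the sketch.
class SquareLayer(VGroupNeuralNetworkLayer):
    """Toy layer that renders a single square"""
    def construct_layer(self, input_layer, output_layer, **kwargs):
        self.square = Square(side_length=0.5)
        self.add(self.square)
        super().construct_layer(input_layer, output_layer, **kwargs)
    def make_forward_pass_animation(self, layer_args={}, **kwargs):
        # Flash the square to indicate activity during the forward pass
        return Indicate(self.square)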
|
ManimML_helblazer811/manim_ml/neural_network/layers/triplet_to_feed_forward.py
|
from manim import *
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.layers.parent_layers import ConnectiveLayer
from manim_ml.neural_network.layers.triplet import TripletLayer
class TripletToFeedForward(ConnectiveLayer):
"""TripletLayer to FeedForward layer"""
input_class = TripletLayer
output_class = FeedForwardLayer
def __init__(
self,
input_layer,
output_layer,
animation_dot_color=RED,
dot_radius=0.02,
**kwargs
):
super().__init__(input_layer, output_layer, **kwargs)
self.animation_dot_color = animation_dot_color
self.dot_radius = dot_radius
self.feed_forward_layer = output_layer
self.triplet_layer = input_layer
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs
):
return super().construct_layer(input_layer, output_layer, **kwargs)
def make_forward_pass_animation(self, layer_args={}, **kwargs):
"""Makes dots diverge from the given location and move to the feed forward nodes decoder"""
animations = []
# Loop through each image
images = [
self.triplet_layer.anchor,
self.triplet_layer.positive,
self.triplet_layer.negative,
]
for image_mobject in images:
image_animations = []
dots = []
# Move dots from each image to the centers of each of the nodes in the FeedForwardLayer
image_location = image_mobject.get_center()
for node in self.feed_forward_layer.node_group:
new_dot = Dot(
image_location,
radius=self.dot_radius,
color=self.animation_dot_color,
)
per_node_succession = Succession(
Create(new_dot),
new_dot.animate.move_to(node.get_center()),
)
image_animations.append(per_node_succession)
dots.append(new_dot)
animations.append(AnimationGroup(*image_animations))
animation_group = AnimationGroup(*animations)
return animation_group
@override_animation(Create)
def _create_override(self):
return AnimationGroup()
|
ManimML_helblazer811/manim_ml/neural_network/layers/image.py
|
from manim import *
import numpy as np
from PIL import Image
from manim_ml.utils.mobjects.image import GrayscaleImageMobject
from manim_ml.neural_network.layers.parent_layers import NeuralNetworkLayer
class ImageLayer(NeuralNetworkLayer):
"""Single Image Layer for Neural Network"""
def __init__(
self,
numpy_image,
height=1.5,
show_image_on_create=True,
**kwargs
):
super().__init__(**kwargs)
self.image_height = height
self.numpy_image = numpy_image
self.show_image_on_create = show_image_on_create
def construct_layer(self, input_layer, output_layer, **kwargs):
"""Construct layer method
Parameters
----------
input_layer :
Input layer
output_layer :
Output layer
"""
if len(np.shape(self.numpy_image)) == 2:
# Assumed Grayscale
self.num_channels = 1
self.image_mobject = GrayscaleImageMobject(
self.numpy_image,
height=self.image_height
)
elif len(np.shape(self.numpy_image)) == 3:
# Assumed RGB
self.num_channels = 3
self.image_mobject = ImageMobject(self.numpy_image).scale_to_fit_height(
self.image_height
)
self.add(self.image_mobject)
super().construct_layer(input_layer, output_layer, **kwargs)
@classmethod
def from_path(cls, image_path, grayscale=True, **kwargs):
"""Creates a query using the paths"""
# Load images from path
image = Image.open(image_path)
numpy_image = np.asarray(image)
# Make the layer
image_layer = cls(numpy_image, **kwargs)
return image_layer
@override_animation(Create)
def _create_override(self, **kwargs):
debug_mode = False
if debug_mode:
return FadeIn(SurroundingRectangle(self.image_mobject))
if self.show_image_on_create:
return FadeIn(self.image_mobject)
else:
return AnimationGroup()
def make_forward_pass_animation(self, layer_args={}, **kwargs):
return AnimationGroup()
def get_right(self):
"""Override get right"""
return self.image_mobject.get_right()
def scale(self, scale_factor, **kwargs):
"""Scales the image mobject"""
self.image_mobject.scale(scale_factor)
@property
def width(self):
return self.image_mobject.width
@property
def height(self):
return self.image_mobject.height
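# --- Usage sketch (illustrative; not part of the original file). ImageLayer
# only stores the array at init time; the image mobject is built later when
# the NeuralNetwork calls construct_layer. The file path is a placeholder.
if __name__ == "__main__":
    # From a numpy array (a 2D array is treated as grayscale)
    random_image = np.random.randint(0, 255, size=(28, 28)).astype(np.uint8)
    array_layer = ImageLayer(random_image, height=1.5)
    # Or from a file on disk via the classmethod above
    path_layer = ImageLayer.from_path("assets/digit.jpeg", height=1.5)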
|
ManimML_helblazer811/manim_ml/neural_network/layers/convolutional_2d_to_max_pooling_2d.py
|
import random
from manim import *
from manim_ml.utils.mobjects.gridded_rectangle import GriddedRectangle
from manim_ml.neural_network.layers.convolutional_2d_to_convolutional_2d import (
get_rotated_shift_vectors,
)
from manim_ml.neural_network.layers.max_pooling_2d import MaxPooling2DLayer
from manim_ml.neural_network.layers.parent_layers import ConnectiveLayer, ThreeDLayer
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.layers.convolutional_2d import Convolutional2DLayer
import manim_ml
class Uncreate(Create):
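    # NOTE: shadows manim's built-in Uncreate; this local variant plays Create
    # with a reversed rate function while acting as both introducer and remover.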
def __init__(
self,
mobject,
reverse_rate_function: bool = True,
introducer: bool = True,
remover: bool = True,
**kwargs,
) -> None:
super().__init__(
mobject,
reverse_rate_function=reverse_rate_function,
introducer=introducer,
remover=remover,
**kwargs,
)
class Convolutional2DToMaxPooling2D(ConnectiveLayer, ThreeDLayer):
"""Feed Forward to Embedding Layer"""
input_class = Convolutional2DLayer
output_class = MaxPooling2DLayer
def __init__(
self,
input_layer: Convolutional2DLayer,
output_layer: MaxPooling2DLayer,
active_color=ORANGE,
**kwargs,
):
super().__init__(input_layer, output_layer, **kwargs)
self.active_color = active_color
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs,
):
return super().construct_layer(input_layer, output_layer, **kwargs)
def make_forward_pass_animation(self, layer_args={}, run_time=1.5, **kwargs):
"""Forward pass animation from conv2d to max pooling"""
cell_width = self.input_layer.cell_width
feature_map_size = self.input_layer.feature_map_size
kernel_size = self.output_layer.kernel_size
feature_maps = self.input_layer.feature_maps
grid_stroke_width = 1.0
# Make all of the kernel gridded rectangles
create_gridded_rectangle_animations = []
create_and_remove_cell_animations = []
transform_gridded_rectangle_animations = []
remove_gridded_rectangle_animations = []
for feature_map_index, feature_map in enumerate(feature_maps):
# 1. Draw gridded rectangle with kernel_size x kernel_size
# box regions over the input feature maps.
gridded_rectangle = GriddedRectangle(
color=self.active_color,
height=cell_width * feature_map_size[1],
width=cell_width * feature_map_size[0],
grid_xstep=cell_width * kernel_size,
grid_ystep=cell_width * kernel_size,
grid_stroke_width=grid_stroke_width,
grid_stroke_color=self.active_color,
show_grid_lines=True,
)
gridded_rectangle.set_z_index(10)
# 2. Randomly highlight one of the cells in the kernel.
highlighted_cells = []
num_cells_in_kernel = kernel_size * kernel_size
num_x_kernels = int(feature_map_size[0] / kernel_size)
num_y_kernels = int(feature_map_size[1] / kernel_size)
for kernel_x in range(0, num_x_kernels):
for kernel_y in range(0, num_y_kernels):
# Choose a random cell index
cell_index = random.randint(0, num_cells_in_kernel - 1)
# Make a rectangle in that cell
cell_rectangle = GriddedRectangle(
color=self.active_color,
height=cell_width,
width=cell_width,
fill_opacity=0.7,
stroke_width=0.0,
z_index=10,
)
# Move to the correct location
kernel_shift_vector = [
kernel_size * cell_width * kernel_x,
-1 * kernel_size * cell_width * kernel_y,
0,
]
cell_shift_vector = [
(cell_index % kernel_size) * cell_width,
-1 * int(cell_index / kernel_size) * cell_width,
0,
]
cell_rectangle.next_to(
gridded_rectangle.get_corners_dict()["top_left"],
submobject_to_align=cell_rectangle.get_corners_dict()[
"top_left"
],
buff=0.0,
)
cell_rectangle.shift(kernel_shift_vector)
cell_rectangle.shift(cell_shift_vector)
highlighted_cells.append(cell_rectangle)
# Rotate the gridded rectangles so they match the angle
# of the conv maps
gridded_rectangle_group = VGroup(gridded_rectangle, *highlighted_cells)
gridded_rectangle_group.rotate(
manim_ml.config.three_d_config.rotation_angle,
about_point=gridded_rectangle.get_center(),
axis=manim_ml.config.three_d_config.rotation_axis,
)
gridded_rectangle_group.next_to(
feature_map.get_corners_dict()["top_left"],
submobject_to_align=gridded_rectangle.get_corners_dict()["top_left"],
buff=0.0,
)
# 3. Make a create gridded rectangle
create_rectangle = Create(
gridded_rectangle,
)
create_gridded_rectangle_animations.append(create_rectangle)
# 4. Create and fade out highlighted cells
create_group = AnimationGroup(
*[Create(highlighted_cell) for highlighted_cell in highlighted_cells],
lag_ratio=1.0,
)
uncreate_group = AnimationGroup(
*[Uncreate(highlighted_cell) for highlighted_cell in highlighted_cells],
lag_ratio=0.0,
)
create_and_remove_cell_animation = Succession(
create_group, Wait(1.0), uncreate_group
)
create_and_remove_cell_animations.append(create_and_remove_cell_animation)
# 5. Move and resize the gridded rectangle to the output
# feature maps.
output_gridded_rectangle = GriddedRectangle(
color=self.active_color,
height=cell_width * feature_map_size[1] / 2,
width=cell_width * feature_map_size[0] / 2,
grid_xstep=cell_width,
grid_ystep=cell_width,
grid_stroke_width=grid_stroke_width,
grid_stroke_color=self.active_color,
show_grid_lines=True,
)
output_gridded_rectangle.rotate(
manim_ml.config.three_d_config.rotation_angle,
about_point=output_gridded_rectangle.get_center(),
                axis=manim_ml.config.three_d_config.rotation_axis,
)
output_gridded_rectangle.move_to(
self.output_layer.feature_maps[feature_map_index].copy()
)
transform_rectangle = ReplacementTransform(
gridded_rectangle,
output_gridded_rectangle,
introducer=True,
)
transform_gridded_rectangle_animations.append(
transform_rectangle,
)
"""
Succession(
Uncreate(gridded_rectangle),
transform_rectangle,
lag_ratio=1.0
)
"""
# 6. Make the gridded feature map(s) disappear.
remove_gridded_rectangle_animations.append(
Uncreate(gridded_rectangle_group)
)
create_gridded_rectangle_animation = AnimationGroup(
*create_gridded_rectangle_animations
)
create_and_remove_cell_animation = AnimationGroup(
*create_and_remove_cell_animations
)
transform_gridded_rectangle_animation = AnimationGroup(
*transform_gridded_rectangle_animations
)
remove_gridded_rectangle_animation = AnimationGroup(
*remove_gridded_rectangle_animations
)
return Succession(
create_gridded_rectangle_animation,
Wait(1),
create_and_remove_cell_animation,
transform_gridded_rectangle_animation,
Wait(1),
remove_gridded_rectangle_animation,
lag_ratio=1.0,
)
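# --- Usage sketch (illustrative; not part of the original file, save as a
# standalone script). A scene exercising this connective, in the spirit of the
# repo README's max pooling example: a Convolutional2DLayer followed by a
# MaxPooling2DLayer triggers the gridded-rectangle animation built above.
# Layer arguments are illustrative assumptions.
from manim import *
from manim_ml.neural_network import NeuralNetwork, Convolutional2DLayer, MaxPooling2DLayer

class MaxPoolingScene(ThreeDScene):
    def construct(self):
        nn = NeuralNetwork([
            Convolutional2DLayer(1, 8),
            MaxPooling2DLayer(kernel_size=2),
        ], layer_spacing=0.25)
        self.add(nn)
        self.play(nn.make_forward_pass_animation())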
|
ManimML_helblazer811/manim_ml/neural_network/layers/feed_forward_to_vector.py
|
from manim import *
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.layers.parent_layers import ConnectiveLayer
from manim_ml.neural_network.layers.vector import VectorLayer
class FeedForwardToVector(ConnectiveLayer):
"""Image Layer to FeedForward layer"""
input_class = FeedForwardLayer
output_class = VectorLayer
def __init__(
self,
input_layer,
output_layer,
animation_dot_color=RED,
dot_radius=0.05,
**kwargs
):
super().__init__(input_layer, output_layer, **kwargs)
self.animation_dot_color = animation_dot_color
self.dot_radius = dot_radius
self.feed_forward_layer = input_layer
self.vector_layer = output_layer
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs
):
return super().construct_layer(input_layer, output_layer, **kwargs)
def make_forward_pass_animation(self, layer_args={}, **kwargs):
"""Makes dots diverge from the given location and move to the feed forward nodes decoder"""
animations = []
        # Move dots from the centers of the feed forward nodes to the vector layer
destination = self.vector_layer.get_center()
for node in self.feed_forward_layer.node_group:
new_dot = Dot(
node.get_center(),
radius=self.dot_radius,
color=self.animation_dot_color,
)
per_node_succession = Succession(
Create(new_dot),
new_dot.animate.move_to(destination),
)
animations.append(per_node_succession)
animation_group = AnimationGroup(*animations)
return animation_group
@override_animation(Create)
def _create_override(self):
return AnimationGroup()
|
ManimML_helblazer811/manim_ml/neural_network/layers/feed_forward_to_feed_forward.py
|
from typing import List, Union
import numpy as np
from manim import *
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.layers.parent_layers import ConnectiveLayer
import manim_ml
class FeedForwardToFeedForward(ConnectiveLayer):
"""Layer for connecting FeedForward layer to FeedForwardLayer"""
input_class = FeedForwardLayer
output_class = FeedForwardLayer
def __init__(
self,
input_layer,
output_layer,
passing_flash=True,
dot_radius=0.05,
animation_dot_color=manim_ml.config.color_scheme.active_color,
edge_color=manim_ml.config.color_scheme.secondary_color,
edge_width=1.5,
camera=None,
**kwargs
):
super().__init__(input_layer, output_layer, **kwargs)
self.passing_flash = passing_flash
self.edge_color = edge_color
self.dot_radius = dot_radius
self.animation_dot_color = animation_dot_color
self.edge_width = edge_width
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs
):
self.edges = self.construct_edges()
self.add(self.edges)
super().construct_layer(input_layer, output_layer, **kwargs)
def construct_edges(self):
# Go through each node in the two layers and make a connecting line
edges = []
for node_i in self.input_layer.node_group:
for node_j in self.output_layer.node_group:
line = Line(
node_i.get_center(),
node_j.get_center(),
color=self.edge_color,
stroke_width=self.edge_width,
)
edges.append(line)
edges = VGroup(*edges)
return edges
@override_animation(FadeOut)
def _fadeout_animation(self):
animations = []
for edge in self.edges:
animations.append(FadeOut(edge))
animation_group = AnimationGroup(*animations)
return animation_group
def make_forward_pass_animation(
self, layer_args={}, run_time=1, feed_forward_dropout=0.0, **kwargs
):
"""Animation for passing information from one FeedForwardLayer to the next"""
path_animations = []
dots = []
for edge_index, edge in enumerate(self.edges):
if (
not "edge_indices_to_dropout" in layer_args
or not edge_index in layer_args["edge_indices_to_dropout"]
):
dot = Dot(
color=self.animation_dot_color,
fill_opacity=1.0,
radius=self.dot_radius,
)
# Add to dots group
dots.append(dot)
# Make the animation
if self.passing_flash:
copy_edge = edge.copy()
anim = ShowPassingFlash(
copy_edge.set_color(self.animation_dot_color), time_width=0.3
)
else:
anim = MoveAlongPath(
                        dot, edge, run_time=run_time, rate_func=sigmoid
)
path_animations.append(anim)
if not self.passing_flash:
dots = VGroup(*dots)
self.add(dots)
path_animations = AnimationGroup(*path_animations)
return path_animations
def modify_edge_colors(self, colors=None, magnitudes=None, color_scheme="inferno"):
"""Changes the colors of edges"""
# TODO implement
pass
def modify_edge_stroke_widths(self, widths):
"""Changes the widths of the edges"""
assert len(widths) > 0
# Note: 1d-arrays are assumed to be in row major order
        widths = np.array(widths).flatten()
# Check thickness size
assert np.shape(widths)[0] == len(self.edges)
# Make animation
animations = []
for index, edge in enumerate(self.edges):
width = widths[index]
change_width = edge.animate.set_stroke_width(width)
animations.append(change_width)
animation_group = AnimationGroup(*animations)
return animation_group
@override_animation(Create)
def _create_override(self, **kwargs):
animations = []
for edge in self.edges:
animations.append(Create(edge))
animation_group = AnimationGroup(*animations, lag_ratio=0.0)
return animation_group
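# --- Usage sketch (illustrative; not part of the original file). The widths
# passed to modify_edge_stroke_widths must flatten to exactly one value per
# edge, i.e. num_input_nodes * num_output_nodes values in row major order.
# Calling construct_layer by hand, as below, is normally done for you by the
# NeuralNetwork container.
if __name__ == "__main__":
    layer_a = FeedForwardLayer(num_nodes=3)
    layer_b = FeedForwardLayer(num_nodes=4)
    layer_a.construct_layer(None, None)
    layer_b.construct_layer(None, None)
    layer_b.shift(RIGHT * 2)
    connective = FeedForwardToFeedForward(layer_a, layer_b)
    connective.construct_layer(layer_a, layer_b)
    widths = np.linspace(0.5, 3.0, num=12)  # 3 * 4 = 12 edges
    animation = connective.modify_edge_stroke_widths(widths)  # play in a Scene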
|
ManimML_helblazer811/manim_ml/neural_network/layers/image_to_convolutional_2d.py
|
import numpy as np
from manim import *
from manim_ml.neural_network.layers.convolutional_2d import Convolutional2DLayer
from manim_ml.neural_network.layers.image import ImageLayer
from manim_ml.neural_network.layers.parent_layers import (
ThreeDLayer,
VGroupNeuralNetworkLayer,
)
from manim_ml.utils.mobjects.gridded_rectangle import GriddedRectangle
import manim_ml
class ImageToConvolutional2DLayer(VGroupNeuralNetworkLayer, ThreeDLayer):
"""Handles rendering a convolutional layer for a nn"""
input_class = ImageLayer
output_class = Convolutional2DLayer
def __init__(
self, input_layer: ImageLayer, output_layer: Convolutional2DLayer, **kwargs
):
super().__init__(input_layer, output_layer, **kwargs)
self.input_layer = input_layer
self.output_layer = output_layer
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs,
):
return super().construct_layer(input_layer, output_layer, **kwargs)
def make_forward_pass_animation(self, run_time=5, layer_args={}, **kwargs):
"""Maps image to convolutional layer"""
        # Transform the image from the input layer to the output feature maps
        num_image_channels = self.input_layer.num_channels
        # TODO: RGB images currently reuse the grayscale animation because
        # rgb_image_forward_pass_animation is not yet implemented.
        if num_image_channels == 1 or num_image_channels == 3:
            return self.grayscale_image_forward_pass_animation()
        else:
            raise Exception(
                f"Unrecognized number of image channels: {num_image_channels}"
            )
    def rgb_image_forward_pass_animation(self):
"""Handles forward pass animation for 3 channel image"""
image_mobject = self.input_layer.image_mobject
# TODO get each color channel and turn it into an image
# TODO create image mobjects for each channel and transform
# it to the feature maps of the output_layer
raise NotImplementedError()
def grayscale_image_forward_pass_animation(self):
"""Handles forward pass animation for 1 channel image"""
animations = []
image_mobject = self.input_layer.image_mobject
target_feature_map = self.output_layer.feature_maps[0]
# Map image mobject to feature map
# Make rotation of image
rotation = ApplyMethod(
image_mobject.rotate,
manim_ml.config.three_d_config.rotation_angle,
manim_ml.config.three_d_config.rotation_axis,
image_mobject.get_center(),
run_time=0.5,
)
"""
x_rotation = ApplyMethod(
image_mobject.rotate,
ThreeDLayer.three_d_x_rotation,
[1, 0, 0],
image_mobject.get_center(),
run_time=0.5
)
y_rotation = ApplyMethod(
image_mobject.rotate,
ThreeDLayer.three_d_y_rotation,
[0, 1, 0],
image_mobject.get_center(),
run_time=0.5
)
"""
# Set opacity
set_opacity = ApplyMethod(image_mobject.set_opacity, 0.2, run_time=0.5)
# Scale the max of width or height to the
# width of the feature_map
def scale_image_func(image_mobject):
max_width_height = max(image_mobject.width, image_mobject.height)
scale_factor = target_feature_map.untransformed_width / max_width_height
image_mobject.scale(scale_factor)
return image_mobject
scale_image = ApplyFunction(scale_image_func, image_mobject)
# scale_image = ApplyMethod(image_mobject.scale, scale_factor, run_time=0.5)
# Move the image
move_image = ApplyMethod(image_mobject.move_to, target_feature_map)
# Compose the animations
animation = Succession(
rotation,
scale_image,
set_opacity,
move_image,
)
return animation
def scale(self, scale_factor, **kwargs):
super().scale(scale_factor, **kwargs)
@override_animation(Create)
def _create_override(self, **kwargs):
return AnimationGroup()
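# --- Usage sketch (illustrative; not part of the original file, save as a
# standalone script). End-to-end use of this connective, following the repo
# README's CNN example: the grayscale image is rotated, faded, scaled to the
# first feature map's width, and moved onto it during the forward pass. The
# image path is a placeholder; import paths follow the README.
from manim import *
import numpy as np
from PIL import Image
from manim_ml.neural_network import NeuralNetwork, ImageLayer, Convolutional2DLayer

class ImageToConvScene(ThreeDScene):
    def construct(self):
        numpy_image = np.asarray(Image.open("assets/digit.jpeg"))
        nn = NeuralNetwork([
            ImageLayer(numpy_image, height=1.5),
            Convolutional2DLayer(1, 7, 3, filter_spacing=0.32),
        ], layer_spacing=0.25)
        self.add(nn)
        self.play(nn.make_forward_pass_animation())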
|
ManimML_helblazer811/manim_ml/neural_network/layers/max_pooling_2d_to_convolutional_2d.py
|
import numpy as np
from manim import *
from manim_ml.neural_network.layers.convolutional_2d_to_convolutional_2d import (
Convolutional2DToConvolutional2D,
Filters,
)
from manim_ml.neural_network.layers.max_pooling_2d import MaxPooling2DLayer
from manim_ml.neural_network.layers.parent_layers import ConnectiveLayer, ThreeDLayer
from manim_ml.neural_network.layers.feed_forward import FeedForwardLayer
from manim_ml.neural_network.layers.convolutional_2d import Convolutional2DLayer
from manim.utils.space_ops import rotation_matrix
class MaxPooling2DToConvolutional2D(Convolutional2DToConvolutional2D):
"""Feed Forward to Embedding Layer"""
input_class = MaxPooling2DLayer
output_class = Convolutional2DLayer
def __init__(
self,
input_layer: MaxPooling2DLayer,
output_layer: Convolutional2DLayer,
passing_flash_color=ORANGE,
cell_width=1.0,
stroke_width=2.0,
show_grid_lines=False,
**kwargs
):
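        # Match the input layer's feature map count to the output's so the
        # parent Convolutional2DToConvolutional2D connective can pair maps up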
input_layer.num_feature_maps = output_layer.num_feature_maps
super().__init__(input_layer, output_layer, **kwargs)
self.passing_flash_color = passing_flash_color
self.cell_width = cell_width
self.stroke_width = stroke_width
self.show_grid_lines = show_grid_lines
def construct_layer(
self,
input_layer: "NeuralNetworkLayer",
output_layer: "NeuralNetworkLayer",
**kwargs
):
"""Constructs the MaxPooling to Convolution3D layer
Parameters
----------
input_layer : NeuralNetworkLayer
input layer
output_layer : NeuralNetworkLayer
output layer
"""
super().construct_layer(input_layer, output_layer, **kwargs)
|