This repository is meant to be a backup for my Stable Diffusion 1.5 models in case they ever go down on every other site I have published them on.

Also, this specific one is mostly filled with miscellaneous models that are not often used, but that fulfill different purposes.

BlackDaisyMix - "Kawaii" Anime Model

What is this model?

This is a Stable Diffusion 1.5 model born partly out of hatred for "kawaii" anime models - chibi anatomy, but with strong styles that can stand on their own. Initially, this was meant to be a merge that distills the essence behind those models and reverses it, but I decided to release the final results because of their surprising prompt adherence and uniqueness.

How to use this model?

First point: please try to exercise a little restraint when using this model. Due to its nature, chibis are much more likely to occur than usual.

Prompting

This model uses booru tags like all other anime models. However, this model has somewhat decent natural language recognition as well. If you have a tag autocorrect extension, use Drac's tag list for best results.

Model cutoff is ~2020, so don't expect too much from it.

Negative prompting is unnecessary, but can be used.

This model is mostly meant to be used as a 2nd pass in a workflow, but it can be used on its own as well.

Parameters:

Sampler (Basic): DPM++ 2M Karras or Euler ancestral

Sampler (Advanced): DPM++ 2M CFG++ SGM Uniform

Step count: 20-30 (50 for CFG++)

CFG: 7-10 (0.5-1 for CFG++)

CLIP Skip: 1-2

Resolution: 768 base resolution maximum, 512 base resolution recommended

Hi.res fix: recommended - 2x @ 0.5 denoise. If possible, do this using another model.
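If you generate with diffusers instead of a WebUI, a rough sketch of the two-pass idea described above might look like this. Both checkpoint filenames are placeholders, and the exact scheduler flags may need adjusting for your diffusers version:

```python
# Rough sketch: BlackDaisyMix as the 2nd pass of a two-model workflow.
# Both .safetensors paths below are placeholders.
import torch
from diffusers import (
    StableDiffusionPipeline,
    StableDiffusionImg2ImgPipeline,
    DPMSolverMultistepScheduler,
)

# First pass: whatever base anime model you prefer (placeholder path).
base = StableDiffusionPipeline.from_single_file(
    "your_base_anime_model.safetensors", torch_dtype=torch.float16
).to("cuda")
base.scheduler = DPMSolverMultistepScheduler.from_config(
    base.scheduler.config, use_karras_sigmas=True  # DPM++ 2M Karras
)

prompt = "1girl, solo, outdoors, cherry blossoms, looking at viewer"
first = base(prompt, num_inference_steps=25, guidance_scale=7.0,
             width=512, height=768).images[0]

# Second pass: upscale 2x, then let BlackDaisyMix redraw at 0.5 denoise,
# mirroring the Hi.res fix parameters listed above.
second = StableDiffusionImg2ImgPipeline.from_single_file(
    "BlackDaisyMix.safetensors", torch_dtype=torch.float16
).to("cuda")
second.scheduler = DPMSolverMultistepScheduler.from_config(
    second.scheduler.config, use_karras_sigmas=True
)

upscaled = first.resize((first.width * 2, first.height * 2))
final = second(prompt, image=upscaled, strength=0.5,
               num_inference_steps=25, guidance_scale=7.0).images[0]
final.save("blackdaisymix_second_pass.png")
```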

Version 1 - Alpha Variation

This model is made of the following:

Checkpoints:

CocoaMix - V8

CuteYukiMix - kemiaomiao

HotaruBreed CuteAnimeMix - v7

UsaChoMix

ZenTypeEX - ex_WK

ZenTypeB - v2

ZenTypeD - v7

ZenTypeA - A2_v4

LoRAs:

CluelessC's Hololive LoRA (hll6.3-a10-eps-resized-512)

Theovercomer8's Contrast Fix

SD1.5 DPO LoRA

SPO SD1.5 LoRA

CLIP: ViT-L-14-TEXT-detail

VAE (baked in): WD1.4-kl-f8-anime02-bless09

The merge recipe is in the model description and embedded in the model metadata as a workflow.

Version 1 - Beta Variation

This model is made of the following:

Checkpoints:

CocoaMix - V8

CuteYukiMix - kemiaomiao

HotaruBreed CuteAnimeMix - v7

UsaChoMix

ZenTypeEX - ex_WK

ZenTypeX - v2

ZenTypeB - v2

ZenTypeA - A2_v4

LoRAs:

CluelessC's Hololive LoRA (hll6.3-a10-eps-resized-512)

Theovercomer8's Contrast Fix

SD1.5 DPO LoRA

SPO SD1.5 LoRA

CLIP: ViT-L-14-TEXT-detail

VAE (baked in): WD1.4-kl-f8-anime02-bless09

The merge recipe is in the model description and embedded in the model metadata as a workflow.

ColorSplash - V-prediction Vibrant Mix

This model uses zSNR and V-prediction. Download the .yaml config file and place it in the same folder as your models, with the same filename as the checkpoint (for A1111/Forge); or use the ModelSamplingDiscrete node (for ComfyUI).

What is this model?

ColorSplash (previously named ColorStorm in preview images, now remade to work better) is a Stable Diffusion 1.5 merge created to test whether v-prediction can be merged in without losing too much quality. It's a work in progress as I try to make it work.

How to use this model?

Prompting:

This model (in my testing) handles booru tags very well, but only rarely handles natural language.

Style is hard to influence with this model. With short prompts, the output is clean and crisp, but it gets more distorted as more words are added. To counter this issue, Hi.res fix is highly recommended.

Like most anime checkpoints, this one also has a female bias, but it can be countered.

This model is mainly tested against ComfyUI's prompt parser and reForge's default parser (with .yaml config file).

Character recognition should work well with really popular characters, but not much else. 9th Tail's character training is degraded, but not lost. ConcoctionMix's use of Hololive and AIOMonsterGirl isn't fully lost either, though it's also degraded a bit. Quick note: Suzuran's multiple tails might bleed through.

This model is also surprisingly good at furry content. Not quite sure why though...

Also, "brown-outs" happens when touching merged knowledge (tints the image brown in most cases), so do be careful about that.

Parameters:

(Anything in bold has been tested and works decently well)

Sampler + Scheduler: Almost anything works, but with some browning. The best options are Euler beta and DPM++ 2M beta, with secondary recommendations being Euler a, DPM++ 2M SGM Uniform, DPM Adaptive, UniPC simple, DDIM (with ddim_uniform), and DPM++ SDE beta. Karras schedulers are completely unusable; this is probably the only real issue with v-prediction models.

Steps: 20+

CFG: 4-12 (recommended range: 4-6, 7-12 is usable, but artifacted), RescaleCFG recommended

CLIP Skip: 1-2

Resolution: 3:2 aspect ratio tested, base 768 resolution and below (640, 512) are usable.

Hi.res fix: Highly recommended. Tested at 1.5x latent upscale
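For diffusers users, the .yaml / ModelSamplingDiscrete step mentioned earlier roughly corresponds to forcing v-prediction and zero-terminal-SNR on the scheduler, with guidance_rescale standing in for RescaleCFG. A minimal sketch, with a placeholder checkpoint filename (I haven't verified every scheduler flag against older diffusers versions):

```python
# Rough sketch of running a zSNR v-prediction SD1.5 checkpoint in diffusers.
# The checkpoint filename is a placeholder.
import torch
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

pipe = StableDiffusionPipeline.from_single_file(
    "ColorSplash.safetensors", torch_dtype=torch.float16
).to("cuda")

# Equivalent of the A1111 .yaml / ComfyUI ModelSamplingDiscrete step:
# mark the model as v-prediction and enable zero-terminal-SNR betas.
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config,
    prediction_type="v_prediction",
    rescale_betas_zero_snr=True,
    timestep_spacing="trailing",  # commonly paired with zSNR models
)

image = pipe(
    "1girl, fox ears, multiple tails, vibrant colors, night, fireworks",
    negative_prompt="lowres, blurry",
    num_inference_steps=28,   # 20+ as noted above
    guidance_scale=5.0,       # recommended CFG range is 4-6
    guidance_rescale=0.7,     # stands in for RescaleCFG
    width=768, height=512,    # 3:2 at the 768 base resolution
).images[0]
image.save("colorsplash_sample.png")
```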

Merge recipe:

ColorSplash-v0.1

The image shows the entire ColorSplash-v0.1 merging recipe in ComfyUI form. It uses comfy-mecha and ComfyUI-DareMerge [not properly]. The metadata is included inside the image.

It's a simple TIES merge (k=0.9) with:

Model A: 9th Tail - main_v0.3

Model B: ConcoctionMix-a1 [Vodka]

Model C: ConcoctionMix-a2 [Vermouth]
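For reference, here's a rough tensor-level sketch of what a TIES merge at k=0.9 does for a single weight, treating k as the fraction of each delta that is kept. This is only an illustration relative to a shared base (e.g. vanilla SD1.5), not the comfy-mecha/ComfyUI-DareMerge graph actually used:

```python
# Illustrative TIES merge over raw tensors (not the actual node graph).
# `base` is the shared SD1.5 base weight, `models` the checkpoints merged,
# and k the assumed density (fraction of each delta kept).
import torch

def ties_merge(base: torch.Tensor, models: list[torch.Tensor], k: float = 0.9) -> torch.Tensor:
    deltas = [m - base for m in models]                      # task vectors
    trimmed = []
    for d in deltas:
        flat = d.abs().flatten()
        n_keep = max(1, int(k * flat.numel()))
        thresh = flat.kthvalue(flat.numel() - n_keep + 1).values
        trimmed.append(torch.where(d.abs() >= thresh, d, torch.zeros_like(d)))  # trim
    stacked = torch.stack(trimmed)
    elected = torch.sign(stacked.sum(dim=0))                 # elect the majority sign
    agree = torch.sign(stacked) == elected                   # keep agreeing entries only
    counts = agree.sum(dim=0).clamp(min=1)
    merged_delta = (stacked * agree).sum(dim=0) / counts     # disjoint mean
    return base + merged_delta

# Toy usage with random tensors standing in for one weight of A/B/C:
base = torch.randn(4, 4)
a, b, c = (base + 0.1 * torch.randn(4, 4) for _ in range(3))
print(ties_merge(base, [a, b, c], k=0.9).shape)
```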

ColorSplash-v0.1.1

V0.1.1 uses comfy-mecha exclusively, since I don't need to alter CLIP at all.

Model A: ColorSplash - v0.1 (It was initially meant to be a ConcoctionMix experiment)

Model B: AIOMonsterGirl - v4 (more cohesive knowledge transfer)

Model C: OpenSolera - a6 [Fleur] (same idea)

Model D: FluffyRock Unleashed - v1.0 Base (same idea)

Step 1: Train Difference using A-x-A method, where x is model B, C, or D

Step 2: Perform a TIES Sum with Dropout (with k=1), then add it back into model A
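The dropout part of step 2 is DARE-style: randomly zero a fraction of each Train Difference delta, rescale the survivors, then sum everything back into model A. A toy sketch (not the comfy-mecha internals; the dropout probability here is arbitrary):

```python
# Illustrative "sum with dropout" over task-vector deltas: drop elements
# of each delta with probability p, rescale survivors by 1/(1-p), then
# combine and add back into model A.
import torch

def dare_drop(delta: torch.Tensor, p: float, generator=None) -> torch.Tensor:
    mask = (torch.rand(delta.shape, generator=generator) > p).to(delta.dtype)
    return delta * mask / (1.0 - p)      # rescale so the expected value is preserved

def sum_with_dropout(base: torch.Tensor, deltas: list[torch.Tensor], p: float = 0.5) -> torch.Tensor:
    g = torch.Generator().manual_seed(0)  # deterministic for reproducibility
    dropped = [dare_drop(d, p, g) for d in deltas]
    return base + torch.stack(dropped).sum(dim=0)

# Toy usage: three Train Difference deltas added back into model A.
a = torch.randn(4, 4)
deltas = [0.05 * torch.randn(4, 4) for _ in range(3)]
print(sum_with_dropout(a, deltas, p=0.5).shape)
```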

ColorSplash-v0.1.2

A Train Difference merge into an Add Difference of 4 models:

Model A: ColorSplash - v0.1.1

Model B: AIOMonsterGirl - v4 (Last merge wasn't cutting it)

Model C: HyperFusion V-pred - v9 (To add more NSFW and v-prediction data)

Model D: 9th Tail - main_v0.3

Step 1: Train Difference in x-Model A-x style for Model B, C, and D

Step 2: Add Difference of Model D-A Train Difference into Model B-A and Model C-A

Step 3: Weighted Sum merge between the remaining 2 models

Step 4: Clamp the model using the 4 chosen checkpoints earlier
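For anyone unfamiliar with the primitives in steps 2-3, Add Difference and Weighted Sum are just these per-tensor operations (Train Difference and the final clamp are tool-specific, so they are not shown):

```python
# The two classic merge primitives referenced in steps 2-3, per tensor.
import torch

def add_difference(a: torch.Tensor, b: torch.Tensor, c: torch.Tensor,
                   alpha: float = 1.0) -> torch.Tensor:
    """A + alpha * (B - C): graft the B-minus-C delta onto A."""
    return a + alpha * (b - c)

def weighted_sum(a: torch.Tensor, b: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """(1 - alpha) * A + alpha * B: plain linear interpolation."""
    return (1.0 - alpha) * a + alpha * b

# Toy usage on stand-in tensors for a single weight: add one delta into
# two models, then weighted-sum the results (loosely mirroring steps 2-3).
a, b, c, d = (torch.randn(4, 4) for _ in range(4))
into_b = add_difference(b, d, a, alpha=0.5)
into_c = add_difference(c, d, a, alpha=0.5)
merged = weighted_sum(into_b, into_c, alpha=0.5)
print(merged.shape)
```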

Wormwood - Realistic Mix

What is this model?

Wormwood is a merged Stable Diffusion 1.5 model from the following models:

Photon - v1.0 (CivitAI)

AbsoluteReality - v1.8.1 (CivitAI)

xxMix9_Realistic - v4.0 (CivitAI)

CyberRealistic - v6.0 (CivitAI)

ReV Animated - v2 Rebirth (CivitAI)

majicMIX Realistic - v7 (CivitAI)

DreamShaper - v8 (CivitAI)

CyberRealistic Classic - v3.2 (CivitAI)

Uber Realistic Porn Merge - v2.0 (CivitAI)

Perfect World - v6 (CivitAI)

LazyMix+ - v4.0 (CivitAI)

OpenSolera - a5 [Serif] (CivitAI)

This model is an attempt to make a photorealistic model usable with booru tags and also slightly solve the "same face" syndrome. Heavily experimental, with no major testing done other than making sure it works.

How to use this model?

Prompting

This model is mostly tested against booru tags, but probably works best with natural language, with a few caveats:

For tags: use 1male/1female or 1man/1woman. 1boy tends to go too young (like illegally young. THIS IS NOT AN INVITATION TO DO THAT, DO NOT)

USE NEGATIVE EMBEDDINGS with this model. This model kinda needs them to be good (see the sketch after these notes).

It's not very versatile despite its trail mix of models, so use LoRAs to make it decently usable.
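As an example of the two notes above, here's a rough diffusers sketch that loads a negative embedding and a LoRA alongside Wormwood. The embedding/LoRA filenames and the trigger token are placeholders, not real downloads:

```python
# Rough sketch: Wormwood with a negative embedding and a LoRA in diffusers.
# File names and the trigger token below are placeholders.
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_single_file(
    "Wormwood.safetensors", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True  # DPM++ 2M Karras
)

# Negative textual-inversion embedding (placeholder file and token).
pipe.load_textual_inversion("bad_quality_embedding.pt", token="bad_quality")

# Optional LoRA to cover a concept the base merge lacks (placeholder file).
pipe.load_lora_weights("some_style_lora.safetensors")

image = pipe(
    prompt="photo of 1woman, city street, natural lighting, detailed skin",
    negative_prompt="bad_quality, lowres, watermark",
    num_inference_steps=20,
    guidance_scale=6.0,
    width=512, height=768,
    clip_skip=1,  # diffusers counts skipped layers, roughly A1111's CLIP Skip 2
).images[0]
image.save("wormwood_sample.png")
```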

Sampler

Tested against DPM++ 2M Karras at 20 steps for the most part, but Euler a might be better. Other samplers are untested. Use at your own risk.

CLIP Skip works at both 1 and 2, recommended at 2 for better prompt adherence.

Resolution

The basics of Stable Diffusion 1.5: 512x512, 512x768, and 768x512 all work. 640x640 works, but is not well-tested.

For upscaling, RealESRGAN for 2x and 4x is my current recommendation.
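A quick sketch of that 2x RealESRGAN pass using the realesrgan package (the weight path is a placeholder; you'll need realesrgan, basicsr, and opencv-python installed):

```python
# Sketch of a 2x RealESRGAN upscale on a saved generation.
# The weight path is a placeholder for the RealESRGAN x2plus checkpoint.
import cv2
from basicsr.archs.rrdbnet_arch import RRDBNet
from realesrgan import RealESRGANer

model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64,
                num_block=23, num_grow_ch=32, scale=2)
upsampler = RealESRGANer(scale=2, model_path="RealESRGAN_x2plus.pth",
                         model=model, tile=0, half=True)

img = cv2.imread("wormwood_sample.png", cv2.IMREAD_COLOR)
output, _ = upsampler.enhance(img, outscale=2)
cv2.imwrite("wormwood_sample_2x.png", output)
```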
