DeepFindr
Germany
Joined: 31.03.2019
Hello and welcome to my channel :)
I make videos about all kinds of Machine Learning / Data Science topics and am happy to share what I've learned.
If you enjoy the content and want to support me (only if you want!), these are the current options:
►Share this channel: bit.ly/3zEqL1W
►Support me on Patreon: bit.ly/2Wed242
►Buy me a coffee on Ko-Fi: bit.ly/3kJYEdl
Contact: deepfindr@gmail.com
Website: deepfindr.github.io
Uniform Manifold Approximation and Projection (UMAP) | Dimensionality Reduction Techniques (5/5)
▬▬ Papers / Resources ▬▬▬
Colab Notebook: colab.research.google.com/drive/1n_kdyXsA60djl-nTSUxLQTZuKcxkMA83?usp=sharing
Sources:
- TDA Introduction: www.frontiersin.org/articles/10.3389/frai.2021.667963/full
- TDA Blogpost: chance.amstat.org/2021/04/topological-data-analysis/
- TDA Applications Blogpost: orbyta.it/tda-in-a-nutshell-how-can-we-find-multidimensional-voids-and-explore-the-black-boxes-of-deep-learning/
- TDA Intro Paper: arxiv.org/pdf/2006.03173.pdf
- Mathematical UMAP Blogpost: topos.site/blog/2024-04-05-understanding-umap/
- UMAP Author Talk: czcams.com/video/nq6iPZVUxZU/video.html&ab_channel=Enthought
- UMAP vs. t-SNE Global preservation paper: dkobak.github.io/pdfs/kobak2021initialization.pdf
- Fuzzy Topology Slidedeck: speakerdeck.com/lmcinnes/umap-uniform-manifold-approximation-and-projection-for-dimension-reduction?slide=39
- Short UMAP Tutorial: jyopari.github.io/umap.html
Image Sources:
- Thumbnail Image: johncarlosbaez.wordpress.com/2020/02/10/the-category-theory-behind-umap/
- Persistent Homology: orbyta.it/tda-in-a-nutshell-how-can-we-find-multidimensional-voids-and-explore-the-black-boxes-of-deep-learning/
▬▬ Support me if you like 🌟
►Link to this channel: bit.ly/3zEqL1W
►Support me on Patreon: bit.ly/2Wed242
►Buy me a coffee on Ko-Fi: bit.ly/3kJYEdl
►E-Mail: deepfindr@gmail.com
▬▬ Used Music ▬▬▬▬▬▬▬▬▬▬▬
Music from #Uppbeat (free for Creators!):
uppbeat.io/t/sulyya/weather-compass
License code: ZRGIWRHMLMZMAHQI
▬▬ Used Icons ▬▬▬▬▬▬▬▬▬▬
All Icons are from flaticon: www.flaticon.com/authors/freepik
▬▬ Timestamps ▬▬▬▬▬▬▬▬▬▬▬
00:00 Introduction
00:32 Local vs. Global Techniques
01:25 Is UMAP better?
02:08 The Paper
02:40 Topological Data Analysis Primer
04:04 Simplices
05:04 Filtration
06:22 Persistent Homology
07:02 UMAP Overview
07:40 Step 1: Graph construction
08:25 Uniform distribution
09:44 Non-uniform real-world data
10:48 Enforcing uniformity
12:05 Exponential decay
12:43 Local connectivity constraint
14:24 Distance function
16:19 Local metric spaces
17:00 Fuzzy simplicial complex
18:38 The full picture of step 1
19:10 Step 2: Graph layout optimization
19:55 Comparing graphs
21:15 Cross entropy loss
22:14 Attractive and repulsive forces
22:56 More details
24:04 Code
26:28 t-SNE vs. UMAP
27:24 Outro
▬▬ My equipment 💻
- Microphone: amzn.to/3DVqB8H
- Microphone mount: amzn.to/3BWUcOJ
- Monitors: amzn.to/3G2Jjgr
- Monitor mount: amzn.to/3AWGIAY
- Height-adjustable table: amzn.to/3aUysXC
- Ergonomic chair: amzn.to/3phQg7r
- PC case: amzn.to/3jdlI2Y
- GPU: amzn.to/3AWyzwy
- Keyboard: amzn.to/2XskWHP
- Bluelight filter glasses: amzn.to/3pj0fK2
Views: 1,320
Video
t-distributed Stochastic Neighbor Embedding (t-SNE) | Dimensionality Reduction Techniques (4/5)
3K views, 3 months ago
To try everything Brilliant has to offer free for a full 30 days, visit brilliant.org/DeepFindr. The first 200 of you will get 20% off Brilliant’s annual premium subscription. (Video sponsored by Brilliant.org) ▬▬ Papers / Resources ▬▬▬ Colab Notebook: colab.research.google.com/drive/1n_kdyXsA60djl-nTSUxLQTZuKcxkMA83?usp=sharing Entropy: gregorygundersen.com/blog/2020/09/01/gaussian-entropy/ At...
Multidimensional Scaling (MDS) | Dimensionality Reduction Techniques (3/5)
2.7K views, 4 months ago
To try everything Brilliant has to offer-free-for a full 30 days, visit brilliant.org/DeepFindr . The first 200 of you will get 20% off Brilliant’s annual premium subscription ▬▬ Papers / Resources ▬▬▬ Colab Notebook: colab.research.google.com/drive/1n_kdyXsA60djl-nTSUxLQTZuKcxkMA83?usp=sharing Kruskal Paper 1964: cda.psych.uiuc.edu/psychometrika_highly_cited_articles/kruskal_1964a.pdf Very old...
Principal Component Analysis (PCA) | Dimensionality Reduction Techniques (2/5)
3.5K views, 6 months ago
▬▬ Papers / Resources ▬▬▬ Colab Notebook: colab.research.google.com/drive/1n_kdyXsA60djl-nTSUxLQTZuKcxkMA83?usp=sharing Peter Bloem PCA Blog: peterbloem.nl/blog/pca PCA for DS book: pca4ds.github.io/basic.html PCA Book: cda.psych.uiuc.edu/statistical_learning_course/Jolliffe I. Principal Component Analysis (2ed., Springer, 2002)(518s)_MVsa_.pdf Lagrange Multipliers: ekamperi.github.io/mathemati...
Dimensionality Reduction Techniques | Introduction and Manifold Learning (1/5)
8K views, 6 months ago
Brilliant 20% off: brilliant.org/DeepFindr/ ▬▬ Papers / Resources ▬▬▬ Intro to Dim. Reduction Paper: drops.dagstuhl.de/opus/volltexte/2012/3747/pdf/12.pdf T-SNE Visualization Video: czcams.com/video/wvsE8jm1GzE/video.html&ab_channel=GoogleforDevelopers On the Surprising Behavior of Distance Metrics in High Dimensional Space: link.springer.com/chapter/10.1007/3-540-44503-X_27 On the Intrinsic Di...
LoRA explained (and a bit about precision and quantization)
46K views, 9 months ago
▬▬ Papers / Resources ▬▬▬ LoRA Paper: arxiv.org/abs/2106.09685 QLoRA Paper: arxiv.org/abs/2305.14314 Huggingface 8bit intro: huggingface.co/blog/hf-bitsandbytes-integration PEFT / LoRA Tutorial: www.philschmid.de/fine-tune-flan-t5-peft Adapter Layers: arxiv.org/pdf/1902.00751.pdf Prefix Tuning: arxiv.org/abs/2101.00190 ▬▬ Support me if you like 🌟 ►Link to this channel: bit.ly/3zEqL1W ►Support m...
Vision Transformer Quick Guide - Theory and Code in (almost) 15 min
56K views, 11 months ago
▬▬ Papers / Resources ▬▬▬ Colab Notebook: colab.research.google.com/drive/1P9TPRWsDdqJC6IvOxjG2_3QlgCt59P0w?usp=sharing ViT paper: arxiv.org/abs/2010.11929 Best Transformer intro: jalammar.github.io/illustrated-transformer/ CNNs vs ViT: arxiv.org/abs/2108.08810 CNNs vs ViT Blog: towardsdatascience.com/do-vision-transformers-see-like-convolutional-neural-networks-paper-explained-91b4bd5185c8 Swi...
Personalized Image Generation (using Dreambooth) explained!
7K views, 1 year ago
▬▬ Papers / Resources ▬▬▬ Colab Notebook: colab.research.google.com/drive/1QUjLK6oUB_F4FsIDYusaHx-Yl7mL-Lae?usp=sharing Stable Diffusion Tutorial: jalammar.github.io/illustrated-stable-diffusion/ Stable Diffusion Paper: arxiv.org/abs/2112.10752 Hypernet Blogpost: blog.novelai.net/novelai-improvements-on-stable-diffusion-e10d38db82ac Dreambooth Paper: arxiv.org/abs/2208.12242 LoRa Paper: arxiv.o...
Equivariant Neural Networks | Part 3/3 - Transformers and GNNs
5K views, 1 year ago
▬▬ Papers / Resources ▬▬▬ SchNet: arxiv.org/abs/1706.08566 SE(3) Transformer: arxiv.org/abs/2006.10503 Tensor Field Network: arxiv.org/abs/1802.08219 Spherical Harmonics CZcams Video: czcams.com/video/EcKgJhFdtEY/video.html&ab_channel=BJBodner Spherical Harmonics Formula: czcams.com/video/5PMqf3Hj-Aw/video.html&ab_channel=ProfessorMdoesScience Tensor Field Network Jupyter Notebook: github.com/U...
Equivariant Neural Networks | Part 2/3 - Generalized CNNs
4.7K views, 1 year ago
▬▬ Papers / Resources ▬▬▬ Group Equivariant CNNs: arxiv.org/abs/1602.07576 Convolution 3B1B video: czcams.com/video/KuXjwB4LzSA/video.html&ab_channel=3Blue1Brown Fabian Fuchs Equivariance: fabianfuchsml.github.io/equivariance1of2/ Steerable CNNs: arxiv.org/abs/1612.08498 Blogpost GCNN: medium.com/swlh/geometric-deep-learning-group-equivariant-convolutional-networks-ec687c7a7b41 Roto-Translation...
Equivariant Neural Networks | Part 1/3 - Introduction
10K views, 1 year ago
▬▬ Papers / Resources ▬▬▬ Fabian Fuchs Equivariance: fabianfuchsml.github.io/equivariance1of2/ Deep Learning for Molecules: dmol.pub/dl/Equivariant.html Naturally Occurring Equivariance: distill.pub/2020/circuits/equivariance/ 3Blue1Brown Group Theory: czcams.com/video/mH0oCDa74tE/video.html&ab_channel=3Blue1Brown Group Equivariant CNNs: arxiv.org/abs/1602.07576 Equivariance vs Data Augmentation...
State of AI 2022 - My Highlights
2.8K views, 1 year ago
▬▬ Sources ▬▬▬▬▬▬▬ - State of AI Report 2022: www.stateof.ai/ ▬▬ Used Icons ▬▬▬▬▬▬▬▬▬▬ All Icons are from flaticon: www.flaticon.com/authors/freepik ▬▬ Used Music ▬▬▬▬▬▬▬▬▬▬▬ Music from Uppbeat (free for Creators!): uppbeat.io/t/sensho/forgiveness License code: AG34GTPX2CW8CTHS ▬▬ Used Videos ▬▬▬▬▬▬▬▬▬▬▬ Byron Bhxr: www.pexels.com/de-de/video/wissenschaft-animation-dna-biochemie-11268031/ ▬▬ Ti...
Contrastive Learning in PyTorch - Part 2: CL on Point Clouds
15K views, 1 year ago
▬▬ Papers/Sources ▬▬▬▬▬▬▬ - Colab Notebook: colab.research.google.com/drive/1oO-Raqge8oGXGNkZQOYTH-je4Xi1SFVI?usp=sharing - SimCLRv2: arxiv.org/pdf/2006.10029.pdf - PointNet: arxiv.org/pdf/1612.00593.pdf - PointNet : arxiv.org/pdf/1706.02413.pdf - EdgeConv: arxiv.org/pdf/1801.07829.pdf - Contrastive Learning Survey: arxiv.org/ftp/arxiv/papers/2010/2010.05113.pdf ▬▬ Used Icons ▬▬▬▬▬▬▬▬▬▬ All Ico...
Contrastive Learning in PyTorch - Part 1: Introduction
28K views, 1 year ago
▬▬ Notes ▬▬▬▬▬▬▬▬▬▬▬ Two small things I realized when editing this video - SimCLR uses two separate augmented views as positive samples - Many frameworks have separate projection heads on the learned representations which transforms them additionally for the contrastive loss ▬▬ Papers/Sources ▬▬▬▬▬▬▬ - Intro: sthalles.github.io/a-few-words-on-representation-learning/ - Survey: arxiv.org/ftp/arx...
Self-/Unsupervised GNN Training
16K views, 1 year ago
▬▬ Papers/Sources ▬▬▬▬▬▬▬ - Molecular Pre-Training Evaluation: arxiv.org/pdf/2207.06010.pdf - Latent Space Image: arxiv.org/pdf/2206.08005.pdf - Survey Xie et al.: arxiv.org/pdf/2102.10757.pdf - Survey Liu et al.: arxiv.org/pdf/2103.00111.pdf - Graph Autoencoder, Kipf/Welling: arxiv.org/pdf/1611.07308.pdf - GraphCL: arxiv.org/pdf/2010.13902.pdf - Deep Graph Infomax: arxiv.org/pdf/1809.10341.pdf...
Diffusion models from scratch in PyTorch
229K views, 1 year ago
Diffusion models from scratch in PyTorch
How to get started with Data Science (Career tracks and advice)
1.6K views, 2 years ago
How to get started with Data Science (Career tracks and advice)
Converting a Tabular Dataset to a Temporal Graph Dataset for GNNs
11K views, 2 years ago
Converting a Tabular Dataset to a Temporal Graph Dataset for GNNs
Converting a Tabular Dataset to a Graph Dataset for GNNs
29K views, 2 years ago
Converting a Tabular Dataset to a Graph Dataset for GNNs
How to handle Uncertainty in Deep Learning #2.2
2.9K views, 2 years ago
How to handle Uncertainty in Deep Learning #2.2
How to handle Uncertainty in Deep Learning #2.1
5K views, 2 years ago
How to handle Uncertainty in Deep Learning #2.1
How to handle Uncertainty in Deep Learning #1.2
3.6K views, 2 years ago
How to handle Uncertainty in Deep Learning #1.2
How to handle Uncertainty in Deep Learning #1.1
11K views, 2 years ago
How to handle Uncertainty in Deep Learning #1.1
Recommender Systems using Graph Neural Networks
21K views, 2 years ago
Recommender Systems using Graph Neural Networks
Fake News Detection using Graphs with Pytorch Geometric
14K views, 2 years ago
Fake News Detection using Graphs with Pytorch Geometric
Fraud Detection with Graph Neural Networks
25K views, 2 years ago
Fraud Detection with Graph Neural Networks
Traffic Forecasting with Pytorch Geometric Temporal
22K views, 2 years ago
Traffic Forecasting with Pytorch Geometric Temporal
Friendly Introduction to Temporal Graph Neural Networks (and some Traffic Forecasting)
26K views, 2 years ago
Friendly Introduction to Temporal Graph Neural Networks (and some Traffic Forecasting)
Python Graph Neural Network Libraries (an Overview)
8K views, 2 years ago
Python Graph Neural Network Libraries (an Overview)
I have come to understand attention as key, query, value multiplication/addition. Do you know why this wasn't used and if it's appropriate to call it attention?
Hi, query/key/value is just a design choice of the Transformer model; there is also a GNN Transformer (look for Graphormer) that follows the query/key/value pattern. The attention mechanism is detached from this concept and is simply a way to learn importance weights between embeddings.
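To illustrate that point, here is a minimal NumPy sketch of scaled dot-product attention, the core of the mechanism the reply describes. This is not code from any of the videos; the function name and shapes are chosen only for demonstration, and query, key, and value all come from the same embeddings (self-attention):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """softmax(Q K^T / sqrt(d)) V — learned importance between embeddings."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                             # pairwise importance logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ v                                        # weighted sum of values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                    # 4 embeddings of dimension 8
out = scaled_dot_product_attention(x, x, x)    # self-attention: q = k = v = x
print(out.shape)                               # (4, 8)
```

Each output row is a mixture of all value rows, weighted by how strongly its query matches each key; q, k, and v only become distinct tensors once separate learned projections are applied, which is the design choice the reply refers to.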
Excellent overview. Appreciate it!
There was an error in your published code but not in the video: attn_output, attn_output_weights = self.att(x, x, x) should be attn_output, attn_output_weights = self.att(q, k, v). Anyway, thanks for sharing the video and code base. It helped me a lot while learning ViT.
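For anyone comparing the two calls: with PyTorch's nn.MultiheadAttention, passing (x, x, x) is plain self-attention, while (q, k, v) lets the three inputs differ, which is what the corrected line allows. A tiny sketch of the call signature (the dimensions are arbitrary, chosen only for illustration):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
att = nn.MultiheadAttention(embed_dim=8, num_heads=2, batch_first=True)

x = torch.randn(1, 4, 8)   # (batch, sequence, embedding)
q, k, v = x, x, x          # self-attention: all three derived from the same tensor

# returns the attended embeddings and the head-averaged attention weights
attn_output, attn_output_weights = att(q, k, v)
print(attn_output.shape)           # torch.Size([1, 4, 8])
print(attn_output_weights.shape)   # torch.Size([1, 4, 4])
```

When q, k, and v are literally the same tensor the two calls are numerically identical; the distinction matters once they are produced by separate projections, as in a full ViT block.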
Great video. Amazing stuff. I have a query: in this use case, it is assumed that distance-based calculations formulate the edge index, and hence it is constant. How should we proceed if the edges/edge indices change for every time snapshot?
You are amazing at explaining. Congratulations on having done this so incredibly well.
Hi, great video, thanks! Is there a way to use SHAP for ARIMA/SARIMA?
Your content and explanations are incredibly helpful. Thank you.
Is this better for the MNIST challenge compared to a simple conv network like LeNet?
Awesome!
Great video explaining LoRA! Thanks
Is it okay to not scale the numerical data? Can we just proceed with the analysis as is?
can you please make a video on how to perform inference on VIT like googles open source vision transformer?
Great video! For me, the code makes it easier to understand the math than the actual formulas, so videos like these really help.
great video....
This is really amazing content, but there is a problem on Colab: this code does not work anymore.
Great video, thanks for the work! Here is a question from a complete beginner with MLflow and deployment: do I need 3 different machines to run the servers separately? Thanks!
Hi, nope, you just need 3 terminals / tabs in the terminal :) The different servers will run on different ports of the same machine.
@@DeepFindr i think i get the idea. Appreciate it. 8)
Great talk. It’s very clearly explained and well presented.
good job!
For the line "from torch_geometric_temporal.dataset import METRLADatasetLoader" I am getting this error:

ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-5-ab694df90048> in <cell line: 2>()
      1 import numpy as np
----> 2 from torch_geometric_temporal.dataset import METRLADatasetLoader
      3 from torch_geometric_temporal.signal import StaticGraphTemporalSignal
      4
      5 loader = METRLADatasetLoader()

3 frames
/usr/local/lib/python3.10/dist-packages/torch_geometric_temporal/nn/attention/tsagcn.py in <module>
      4 import torch.nn as nn
      5 from torch.autograd import Variable
----> 6 from torch_geometric.utils.to_dense_adj import to_dense_adj
      7 import torch.nn.functional as F
      8

ModuleNotFoundError: No module named 'torch_geometric.utils.to_dense_adj'

Can you kindly guide what the issue could be?
Excellent.
Explained quite well !
I've changed the output layer a bit... this:

self.head_ln = nn.LayerNorm(emb_dim)
self.head = nn.Sequential(
    nn.Linear(int((1 + self.height / self.patch_size * self.width / self.patch_size) * emb_dim), out_dim)
)

Then in forward:

x = x.view(x.shape[0], int((1 + self.height / self.patch_size * self.width / self.patch_size) * x.shape[-1]))
out = self.head(x)

The downside is that you'll likely get a lot more overfitting, but without it the network was not really training at all.
Hi, thanks for your recommendation. I would probably not use this model for real world data as there are many important details that are missing (for the sake of providing a simple overview). I will pin your comment for others that also want to use this implementation. Thank you!
BEST DIMENSIONAL REDUCTION VIDEO SERIES EVER! You are the 3blue1brown for data mining.
Thanks for the nice words!
I think there is a confusion between the cls token and the positional embedding at 6:09?
Finally a channel with good content!
Hi there, I have used the code for binary classification but am encountering a problem with accuracy: it shows 100% accuracy only on label 1 and sometimes on label 2. It would be helpful if you could provide a solution.
Hi, please see pinned comment. Maybe this helps :)
Very useful and informative video, especially PointNet and batch size parts. Special thanks for using point cloud domain!
Glad it was useful! :)
Where can I get the slides used in the video?
How does LoRA fine-tuning track changes by creating two decomposition matrices? How is ΔW determined?
This channel is a gift
Idiot, read the paper. Lol
Your videos are great, super high quality and clear explanations. Thanks you so much!
Thank you sir! Your videos are great! 👍
Perfect! Just in time for my ML final lmao
Haha awesome! Good luck :)
Brilliant
The video is great, but the training in the code didn't work for the entire 1000 epochs. Although the code looks logical, there are endless things that can go wrong, so I think it would have been better to do the tutorial with a working ViT notebook.
Hi! I think this is because the dataset is too small. Transformers are data-hungry. It should work with a bigger dataset.
Also have a look at the pinned comment, maybe that helps :)
This work is crazy good!!
Hello, nice work!! Can you do more videos about combining graph neural networks and recurrent networks? I cannot find that anywhere.
Hi, have a look at my temporal GNN videos, this might be what you are looking for :)
@@DeepFindr Thanks for the help!
great video, thanks
What an excellent video!! Congrats!!
Beautifully explained
Thank you very much for this amazing video. However, although this was probably only for demo purposes of a forward pass after LoRA fine-tuning, the modified forward pass method you've shown might be misleading, since the forward pass of the function is assumed to be entirely linear. So, does the addition of the LoRA fine-tuned weights to the base model weights happen directly within the model weights file (like .safetensors), or can it be done at a higher level in PyTorch or TensorFlow?
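To make the question above concrete: the merge can happen at the tensor level (e.g. in PyTorch before weights are saved to .safetensors), because the LoRA update is just a low-rank additive term on a linear layer's weight. A minimal NumPy sketch under that assumption, with all names and sizes hypothetical rather than taken from the video:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 16, 32, 4, 8      # hypothetical layer and LoRA sizes

W = rng.normal(size=(d_out, d_in))        # frozen base weight
B = rng.normal(size=(d_out, r))           # LoRA "up" projection
A = rng.normal(size=(r, d_in))            # LoRA "down" projection

delta_W = (alpha / r) * (B @ A)           # rank-r update learned during fine-tuning
W_merged = W + delta_W                    # merged once; inference then uses W_merged alone

x = rng.normal(size=(d_in,))
merged = W_merged @ x                             # single matmul after merging
branched = W @ x + (alpha / r) * (B @ (A @ x))    # base path plus LoRA branch
print(np.allclose(merged, branched))              # True
```

Because the merged and branched forward passes agree exactly for a linear layer, frameworks are free to either keep the B/A branch separate at runtime or fold delta_W into the stored weight; both answers to the comment's question are valid for linear layers.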
Thanks a lot! Amazing explanation, very clear and straightforward.
Also, StanfordCars is no longer available, can you please change it?
I didn't understand... why do we have to convert images to tensors?
Conversion of videos from SD to HD resolution using diffusion; source code please.
Do you have a PPT on this?
TLDR: "couldn't make it work but maybe you can"
Installation is very difficult.