Comparing Kolmogorov-Arnold Networks (KANs) and Multi-Layer Perceptrons (MLPs) We have taken the classic Multi-Layer Perceptrons (MLPs) for granted and built…
xLSTM — Extended Long Short-Term Memory Networks LSTMs, or Long Short-Term Memory networks, have been around for a long…
Make your LLM Fully Utilize the Context A simple data-driven approach from Microsoft to increasing the context length of…
Naive Quantization Methods for LLMs — a hands-on LLMs today are quite large, and the size of these LLMs both…
Fine-tuning of LLMs - The six-step lifecycle Fine-tuning is an art and a methodical process, similar to software engineering.
Retrieval Augmented Generation (RAG) — A quick and comprehensive introduction LLMs have come a long way in the past year or so.
Lumiere — The most promising Text-to-Video model yet from Google Would you like to see the Mona Lisa smile like a witch? Or would…
ControlNet — Take complete control of images from the generative model This week, let's look at one of the most influential…
QLoRA — Train your LLMs on a Single GPU In my previous article, we looked at Low-Rank Adaptation, or LoRA. LoRA…
LoRA - Low-Rank Adaptation of LLMs (paper explained) Whenever we want a custom model for our application, we start…
Stable Video Diffusion — Convert Text and Images to Videos Stability AI, one of the leading players in the image generation space,…
Emu — the foundation model for Emu Edit and Emu Video “Enhancing Image Generation Models Using Photogenic Needles in a Haystack”, a.k.a. Emu…