📊 Think-in Agents

Featured Article

A Deep Dive Into the Most Relevant AI Models (LLMs, Diffusion, Transformers, and More)

An in-depth guide to today’s most important AI model families—how they work, what they’re best at, and how to choose the right model for real-world systems.

By Imran Khan · Apr 09, 2026
AI models · machine learning · deep learning · transformers · LLM · diffusion models

Latest Entries

Junior vs Senior Developer: The Real Difference Is Decision-Making, Not Languages
By Imran Khan · Apr 06, 2026 · 12m Read

A practical deep dive into how junior and senior developers make technical decisions: thinking tools-first versus constraints-first, and choosing the simplest solution that meets the real requirements.

software-engineering · senior-developer · junior-developer · technical-decisions · trade-offs
How npm Supply Chain Attacks Work (and How npm + OWASP Recommend Defending Against Them)
By Imran Khan · Apr 03, 2026 · 11m Read

A technical deep dive into common npm supply chain attack patterns and practical mitigations, based on official npm guidance and OWASP's supply chain security recommendations.

npm · supply chain security · OWASP · software composition analysis · dependency confusion
Pretext by Cheng Lou: How to Update Text Layout Without getBoundingClientRect (Deep Dive)
By Imran Khan · Apr 01, 2026 · 13m Read

An engineering-focused explanation of Cheng Lou's Pretext concept: computing text spacing algorithmically to avoid expensive DOM reads, enabling responsive layout updates without full re-renders, and how the approach compares to React's standard rendering model.

Pretext · Cheng Lou · text layout · DOM performance · getBoundingClientRect
Running Local AI Models with Ollama: Infrastructure Requirements by Model Size (7B to 70B+)
By Imran Khan · Mar 30, 2026 · 11m Read

A deep, practical guide to running local LLMs with Ollama: how model size drives RAM/VRAM needs, CPU/GPU requirements, quantization choices, and real-world example setups from 7B to 70B+.

Ollama · local LLM · LLM inference · quantization · GGUF
How to Build Stateful AI Agents in n8n with External Memory (Deep Guide + Examples)
By Imran Khan · Mar 23, 2026 · 13m Read

A practical, engineering-focused guide to designing stateful agents in n8n with external memory stores, covering architectures, memory schemas, retrieval patterns, and concrete workflow examples with sources.

n8n · AI agents · stateful agents · external memory · RAG
How to Install n8n as a Service: Local Docker, VPS with Nginx, and Coolify (Beginner Developer Guide)
By Imran Khan · Mar 20, 2026 · 11m Read

A deep, step-by-step guide to running n8n as a reliable service: locally with Docker, on a VPS behind a manually configured Nginx reverse proxy, and deployed via Coolify, plus requirements, examples, and links to the official documentation.

n8n · docker · docker-compose · vps · nginx
© 2026 Think-in Agents