Svelte Prologue
Let’s Use Svelte Like This
I started this blog because of LLMs, a form of generative AI. I believe that the manufacturing industry will evolve beyond simple automation toward autonomous manufacturing, and that LLMs will be the most crucial technology driving this change.
To integrate LLMs into semiconductor manufacturing, models like ChatGPT, Claude, and Grok are essential. However, before deploying such models, it is critical to have a user interface (UI) that enables them to be used as actual services. No matter how powerful a model may be, if it isn’t easily accessible and usable on the shop floor, real-world adoption becomes difficult.
In late 2024, Jensen Huang introduced the concept of “Physical AI.” But I believe that before we get there, we first need to establish an interface where humans and LLMs can communicate naturally.
In a corporate environment, however, security concerns and technical constraints mean there is rarely a flexible UI available for working with LLMs. Moving forward, we need to embrace new technologies such as MCP (Model Context Protocol) to drive innovation in manufacturing, but the current situation leaves much to be desired.
While I was exploring these ideas, I came across an open-source project called OpenWebUI. After trying it out, I found that it already included most of the features I had envisioned. It inspired me to consider building a service of my own based on it.
The only problem was that I had little to no experience in frontend development. So, starting on March 24, I began diving into the related technologies. I learned that OpenWebUI is built with Svelte, Vite, TypeScript, and Tailwind CSS. Among these four, I decided to first explore the frontend framework Svelte/SvelteKit (including Vite), which forms the core of the UI layer.
Recommended For:
This tutorial is perfect for those who relate to any of the following scenarios (and, to be honest, all of these apply to me!):
- You believe a UI is essential for effectively utilizing LLMs, and OpenWebUI is the optimal solution for that.
- You need to rapidly develop various LLM-powered features in an on-premise environment.
- You'd like to take this opportunity to learn frontend development.
OpenWebUI
Open WebUI is an extensible, feature-rich, and user-friendly self-hosted AI platform designed to operate entirely offline. It supports various LLM runners like Ollama and OpenAI-compatible APIs, with a built-in inference engine for RAG, making it a powerful AI deployment solution.
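To make the "OpenAI-compatible API" part concrete, here is a minimal TypeScript sketch of how a client could build a request for the standard `/v1/chat/completions` endpoint that such backends expose. The base URL, model name, and message content are placeholder assumptions for illustration, not values taken from any specific deployment.

```typescript
// Sketch: building a request for an OpenAI-compatible chat endpoint,
// such as one served by a local Open WebUI or Ollama instance.
// All concrete values below (URL, model, prompt) are hypothetical.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Returns the target URL and JSON body for POST {baseUrl}/v1/chat/completions.
function buildChatRequest(
  baseUrl: string,
  model: string,
  messages: ChatMessage[]
): { url: string; body: string } {
  return {
    url: `${baseUrl}/v1/chat/completions`,
    body: JSON.stringify({ model, messages, stream: false }),
  };
}

// Example against a hypothetical local deployment:
const req = buildChatRequest("http://localhost:3000", "llama3", [
  { role: "user", content: "Summarize today's lot hold events." },
]);
// The actual call would then be:
// fetch(req.url, {
//   method: "POST",
//   headers: {
//     "Content-Type": "application/json",
//     Authorization: "Bearer <API_KEY>",
//   },
//   body: req.body,
// });
```

Because the request shape follows the OpenAI convention, the same client code can target OpenAI itself, Ollama, or a self-hosted Open WebUI backend just by changing the base URL and key.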