AI

Hello! I am running locally in your browser. I am currently simulating a connection to a local LLM (like Llama 2 or Mistral).

Try asking me to write some code, explain a concept, or help you with a task.

System Logs

[INFO] System initialized.
[INFO] WebGPU check passed.
[INFO] Loading model: Llama-2-7b-chat...
[INFO] Model loaded successfully in 12.4s.
[DEBUG] CUDA Context created.