---
license: apache-2.0
language:
- en
- zh
library_name: transformers
tags:
- trl
- gpt_oss
- code
- ui
- web
- .tsx
- .html
- .css
- abliterated
- text-generation-inference
- web-ui
base_model:
- Tesslate/UIGEN-T3-4B-Preview
pipeline_tag: text-generation
---

![2](/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F65bb837dbfb878f46c77de4c%2FP-LIpWdt5ypMGIfE5kJiK.png)

# **Muscae-Qwen3-UI-Code-4B**

> **Muscae-Qwen3-UI-Code-4B** is a web-UI-focused model fine-tuned from UIGEN-T3-4B-Preview (built on **Qwen3-4B**) for **controlled abliterated reasoning** and **polished token probabilities**, intended **exclusively for experimental use**.
> It excels at **modern web UI coding tasks**, **structured component generation**, and **layout-aware reasoning**, making it well suited to frontend developers, UI engineers, and research prototypes exploring structured code generation.

> [!NOTE]
> GGUF: [https://huggingface.co/prithivMLmods/Muscae-Qwen3-UI-Code-4B-GGUF](https://huggingface.co/prithivMLmods/Muscae-Qwen3-UI-Code-4B-GGUF)

## **Key Features**

1. **UI-Oriented Abliterated Reasoning**
   Controlled reasoning precision tailored to frontend development and code generation, with polished token distributions that yield structured, maintainable output.

2. **Web UI Component Generation**
   Generates **responsive components**, **semantic HTML**, and **Tailwind-based layouts** with reasoning-aware structure and minimal boilerplate.

3. **Layout-Aware Structured Logic**
   Understands **UI state flows**, **component hierarchies**, and **responsive design patterns**, producing logically consistent, production-ready UI code.

4. **Hybrid Reasoning for Code**
   Combines symbolic reasoning with probabilistic inference to deliver optimized component logic, conditional rendering, and event-driven UI behavior.

5. **Structured Output Mastery**
   Natively outputs **HTML**, **React**, **Markdown**, **JSON**, and **YAML**, making it well suited to UI prototyping, design systems, and documentation generation.

6. **Optimized Lightweight Footprint**
   At **4B parameters**, it can run on **mid-range GPUs**, **offline workstations**, or **edge devices** while retaining strong UI coding capabilities.

## **Quickstart with Transformers**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Muscae-Qwen3-UI-Code-4B"

# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Generate a responsive landing page hero section with Tailwind and semantic HTML."

messages = [
    {"role": "system", "content": "You are a frontend coding assistant skilled in UI generation, semantic HTML, and component structuring."},
    {"role": "user", "content": prompt}
]

# Build the chat prompt using the model's chat template
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate, then strip the prompt tokens from the output
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
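The model usually returns the generated markup wrapped in a fenced code block. As a quick way to inspect the result, the sketch below (continuing from the quickstart above; the fence-extraction regex and the `hero.html` output path are illustrative assumptions, not part of the model or library API) pulls out the first fenced block from `response` and opens it in a browser:

```python
import re
import webbrowser
from pathlib import Path

# Extract the first fenced code block from the quickstart's `response`;
# fall back to the raw text if the model returned bare HTML.
match = re.search(r"```[a-zA-Z]*\s*\n(.*?)```", response, flags=re.DOTALL)
html = match.group(1) if match else response

out_path = Path("hero.html")                  # illustrative output file name
out_path.write_text(html, encoding="utf-8")
webbrowser.open(out_path.resolve().as_uri())  # preview in the default browser
```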
## **Intended Use**

* Web UI coding and component generation
* Responsive layout and frontend architecture prototyping
* Semantic HTML, Tailwind, and React code generation
* Research and experimental projects on structured code synthesis
* Design-system-driven development workflows

## **Limitations**

* Experimental model – not optimized for production-critical deployments
* Focused on **UI coding** – not suitable for general reasoning or creative writing
* May produce inconsistent results with **very long prompts** or **cross-framework tasks**
* Prioritizes structure and correctness over stylistic creativity or verbosity