PORTFOLIO // 2026
Koshi Mazaki

Product Architect

Founder, Glitch Candies Studio.

Building at the intersection of Creative Coding and Agentic AI.

I design and deploy unique aesthetics for immersive, AI-driven experiences.

SIDKIT

What if agents could write directly to hardware? SIDKIT pairs LLM agents with a Rust toolchain — one designs sound, one generates code, one compiles and flashes. The same Teensy becomes a synth, a sequencer, or a game console via prompt.

The Thesis

Everyone will be vibe coding. SIDKIT extends vibe coding into the physical world — a living, reconfigurable instrument designed for agent collaboration. Users move from Text → Code → Firmware → Sound without leaving the browser.

The Challenge

How do you build AI-native hardware from the ground up, so that agents can code games, synthesis modules, and sequencers from a single voice or text prompt? No device on the market allows this today. What is the optimal interface for non-coders and beginners? How do users learn alongside the AI, and how do you teach them to articulate complex tasks and features?

Solution: Thanks to its component and design systems, SIDKIT flashes firmware directly from prompts. Interactive devices are previewed in WASM before any flash is committed — the WASM build mirrors the hardware, so you hear the result before flashing. This digital twin ensures that what you hear is what you get, enabling faster iteration and abstracting coding away from the user. Errors and their fixes are saved to a database, so agents can look up common problems.
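To make this loop concrete, here is a minimal TypeScript sketch of the prompt → preview → flash sequence. Every identifier is a hypothetical stand-in, not the actual SIDKIT API; the stubs exist only so the sketch runs standalone.

```ts
interface Firmware {
  cppSource: string;      // C++ generated by the Builder agent
  wasmModule: Uint8Array; // the same engine compiled to WASM for the browser twin
}

// Stub implementations so the sketch runs standalone; in SIDKIT these would
// call the agents and the Rust tool server.
async function generateFirmware(prompt: string): Promise<Firmware> {
  return { cppSource: `// firmware for: ${prompt}`, wasmModule: new Uint8Array() };
}
async function previewInBrowser(fw: Firmware): Promise<void> {
  console.log(`playing ${fw.wasmModule.length}-byte WASM twin`); // hear it first
}
async function flashTeensy(fw: Firmware): Promise<void> {
  console.log(`flashing build of ${fw.cppSource.length} source bytes`);
}

async function promptToDevice(prompt: string, confirmed: boolean): Promise<void> {
  const fw = await generateFirmware(prompt); // Text → Code
  await previewInBrowser(fw);                // digital twin: preview before committing
  if (confirmed) await flashTeensy(fw);      // Code → Firmware → Hardware
}

promptToDevice("Make it sound like 80s arcade", true);
```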

SIDKIT Media Player Prototype
Prototype Media Player with shaders, designed for 256x128 4-bit greyscale OLED

The Category: Text to Hardware

Same Teensy, different device. Via prompt it becomes:

  • A wavetable synthesiser
  • A Pac-Man melody generator
  • A 4-track step sequencer
  • A hybrid engine (physical modelling + SID chip emulation)
  • A Zelda-style adventure with procedural audio

You're not generating firmware — you're generating devices.

Triple-Agent Architecture

Agent       | Function                                       | LLM                  | Stack
SAGE        | Platform guide, sound design, SysEx generation | Claude Haiku         | MCP Tools
Builder     | C++ code generation, module creation           | Claude Sonnet/Opus   | Code Gen
Tool Server | Compile (arm-gcc) + Flash (teensy_loader)      | None (deterministic) | Rust + Axum
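One way to read the table in code: the three roles as a typed registry with a simple routing rule. The model names come from the table above; the routing heuristic itself is an illustrative assumption, not SIDKIT's actual dispatch logic.

```ts
type Role = "sage" | "builder" | "tools";

interface AgentSpec {
  model: string | null; // null: deterministic service, no LLM in the loop
  handles: string[];    // task keywords this role claims
}

const agents: Record<Role, AgentSpec> = {
  sage:    { model: "claude-haiku",  handles: ["sound design", "sysex", "platform"] },
  builder: { model: "claude-sonnet", handles: ["codegen", "module"] },
  tools:   { model: null,            handles: ["compile", "flash"] },
};

// Prefer the deterministic tool server, then the cheapest model.
function route(task: string): Role {
  const t = task.toLowerCase();
  for (const role of ["tools", "sage", "builder"] as Role[]) {
    if (agents[role].handles.some((h) => t.includes(h))) return role;
  }
  return "builder"; // novel requests fall through to codegen
}

console.log(route("Generate SysEx for a brighter lead")); // → "sage"
```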

System Flow

Pipeline: Text (natural language) → Code (generated C++) → Firmware (compiled binary) → Hardware (Teensy 4.1)

User prompt ("Make it sound like 80s arcade")
  → SAGE (Haiku): sound design
  → Builder (Sonnet): C++ codegen
  → Tools (Rust): compile + flash
  → SysEx → WASM: preview in browser
  → Teensy 4.1: flash to hardware
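Where the flow hands off to Tools, a hedged sketch of the tool-server exchange: compile the generated C++ on the Rust + Axum service, then flash the resulting image. The endpoint paths (/compile, /flash) and field names are assumptions for illustration.

```ts
interface CompileRequest { cppSource: string; board: "teensy41"; }
interface CompileResponse { ok: boolean; hexUrl?: string; log: string; }

async function compileAndFlash(req: CompileRequest): Promise<void> {
  const res = await fetch("http://localhost:8080/compile", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(req), // arm-gcc runs server-side
  });
  const out = (await res.json()) as CompileResponse;
  if (!out.ok || !out.hexUrl) {
    throw new Error(out.log); // compiler output feeds the error-learning DB
  }
  // teensy_loader runs server-side against the compiled hex
  await fetch("http://localhost:8080/flash", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ hexUrl: out.hexUrl }),
  });
}
```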

Design Decisions

  • Multi-Engine Architecture: 5 emulation engines (ESFM, APU, ReSID, Amiga, YM2612) swappable in real-time — sequences can switch chip emulations mid-melody, covering classic retro consoles and DOS sound chips.
  • Semantic Knowledge Base: 100+ JSON documents — chips, synthesis methods, games, SDK patterns. Cloudflare AI embeddings, cached in Vectorize.
  • MCP Tools (SAGE): 9 specialised tools — get_platform_spec, get_engine_specs, search_knowledge, control_synth, etc.
  • Dual Design Systems: Two parallel component libraries — firmware OLED (C++/U8g2) and WebUI (React/TypeScript).
  • Error Learning: SQLite-based pattern storage. Successful fixes are stored — next time that error appears, the agent recalls the solution (see the sketch after this list).
  • Schema-Driven Architecture: 12 structured schemas (parameters, SysEx, presets, engines, UI components) — single source of truth for firmware ↔ WebUI sync.
  • R&D Pipeline: 40+ prototypes hand-tested and benchmarked — iterative development from concept to hardware.
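As referenced in the Error Learning item above, a minimal sketch of how such a fix store could work, here using better-sqlite3. The table schema and the error-signature normalization are assumptions for illustration.

```ts
import Database from "better-sqlite3";

const db = new Database("errors.db");
db.exec(`CREATE TABLE IF NOT EXISTS fixes (
  pattern TEXT PRIMARY KEY, -- normalized compiler-error signature
  fix     TEXT NOT NULL     -- the patch or explanation that resolved it
)`);

// Normalize a raw compiler error into a reusable signature:
// keep the first line, strip line/column numbers.
function signature(err: string): string {
  return err.split("\n")[0].replace(/\d+/g, "N");
}

export function storeFix(compilerError: string, fix: string): void {
  db.prepare("INSERT OR REPLACE INTO fixes (pattern, fix) VALUES (?, ?)")
    .run(signature(compilerError), fix);
}

export function recallFix(compilerError: string): string | undefined {
  const row = db
    .prepare("SELECT fix FROM fixes WHERE pattern = ?")
    .get(signature(compilerError)) as { fix: string } | undefined;
  return row?.fix; // injected into the Builder agent's context on retry
}
```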
SAGE Agent UI - Natural language interface for SIDKIT platform
SAGE — AI-powered platform guide with semantic knowledge retrieval

Hardware Platform

MCU: Teensy 4.1 (ARM Cortex-M7 @ 600 MHz)
Display: 256x128 4-bit greyscale OLED
Controls: 8 encoders + 24 buttons
Audio: 2x stereo + USB Audio (48 kHz)
Architecture: 4 engine slots, 1 sequencer slot, 1 game slot; any sound engine in any slot
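For a sense of what a tool like get_platform_spec might hand an agent, here is the spec table rendered as a typed object. The field names and nesting are assumptions; only the values come from the table above.

```ts
interface PlatformSpec {
  mcu: string;
  display: { width: number; height: number; bitsPerPixel: number };
  controls: { encoders: number; buttons: number };
  audio: { stereoOuts: number; usbAudio: boolean; sampleRateHz: number };
  slots: { engine: number; sequencer: number; game: number }; // any engine in any slot
}

const sidkit: PlatformSpec = {
  mcu: "Teensy 4.1 (ARM Cortex-M7 @ 600 MHz)",
  display: { width: 256, height: 128, bitsPerPixel: 4 },
  controls: { encoders: 8, buttons: 24 },
  audio: { stereoOuts: 2, usbAudio: true, sampleRateHz: 48_000 },
  slots: { engine: 4, sequencer: 1, game: 1 },
};

console.log(sidkit.display); // what a codegen agent checks before drawing UI
```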
SIDKIT Hardware Mixer Interface
Hardware mixer: Drive, Outputs, Mods — Icons for accessibility and satisfying UX

Games as Sequencers

Games aren't just sound effects — they're melody generators. Every action triggers musical events that can be recorded and looped.

SIDMAN (Playable)

Pac-Man meets Orca. The player navigates a maze while falling notes hit the character, creating melodies. Gameplay becomes composition.

BITS (Engine Done)

Lemmings-inspired rhythm pathfinder. AI creatures walk toward exits that trigger drums. Manipulate obstacles to create rhythmic variations.

Rain Sequencer (Prototype)

Probability-driven step sequencer with sprite animations. Raindrops fall with weighted randomness — each hit triggers notes. Visual patterns become generative compositions. Sprite animations are triggered by selected tracks for engagement and visual impact.

Grid Sequencer (Prototype)

Game of Life meets step sequencer. A grid driven by cellular automata and L-systems evolves over time, modulating playback speed and pitch across selected waveforms on active steps. Users can lock parameters per step and swap the underlying algorithms in real time to affect modulation (a minimal automaton step is sketched below).
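A minimal sketch of the Grid Sequencer's core idea: one Game of Life step on a wrapping grid, with one row read out as a 16-step trigger pattern. Grid size, seed density, and the row-to-steps mapping are illustrative choices, not the shipped algorithm.

```ts
type Grid = boolean[][];

// One Conway step on a toroidal (wrapping) grid.
function lifeStep(g: Grid): Grid {
  const h = g.length, w = g[0].length;
  return g.map((row, y) =>
    row.map((alive, x) => {
      let n = 0;
      for (let dy = -1; dy <= 1; dy++)
        for (let dx = -1; dx <= 1; dx++) {
          if (dx !== 0 || dy !== 0) n += g[(y + dy + h) % h][(x + dx + w) % w] ? 1 : 0;
        }
      return alive ? n === 2 || n === 3 : n === 3; // survival / birth rules
    })
  );
}

// Seed an 8x16 grid at ~30% density; row 0 is the active trigger row.
let grid: Grid = Array.from({ length: 8 }, () =>
  Array.from({ length: 16 }, () => Math.random() < 0.3)
);

for (let bar = 0; bar < 4; bar++) {
  console.log(grid[0].map((s) => (s ? "x" : ".")).join("")); // e.g. "x..x.x.........x"
  grid = lifeStep(grid); // the pattern evolves every bar
}
```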

Agent-to-Hardware Protocol (A2HW)

  1. Preview Layer — WASM synth mirrors hardware
  2. SysEx Control — real-time changes without reflashing
  3. Error Learning — pattern matching from successful fixes
  4. Digital Twin — WebUI and hardware UI stay in sync
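A sketch of what layer 2 (SysEx Control) could look like on the wire: one parameter change sent to the running firmware without a reflash. The manufacturer ID, parameter numbering, and 14-bit value packing are illustrative assumptions, not SIDKIT's actual format.

```ts
// Build a SysEx message that sets one synth parameter.
function setParamSysEx(param: number, value: number): Uint8Array {
  if (param < 0 || param > 127 || value < 0 || value > 16383) {
    throw new RangeError("param is 7-bit, value is 14-bit");
  }
  return new Uint8Array([
    0xf0,                // SysEx start
    0x7d,                // non-commercial manufacturer ID (placeholder)
    param & 0x7f,        // parameter ID, 7 bits
    (value >> 7) & 0x7f, // value MSB
    value & 0x7f,        // value LSB
    0xf7,                // SysEx end
  ]);
}

// The same bytes drive both halves of the digital twin:
// the WASM synth in the browser and the Teensy over USB MIDI.
const msg = setParamSysEx(12, 8192); // e.g. a filter cutoff to its midpoint
console.log([...msg].map((b) => b.toString(16).padStart(2, "0")).join(" "));
// f0 7d 0c 40 00 f7
```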

This is not a basic synth with an AI assistant. It's a platform for generating devices.

Stack

Next.js 15 • React • TypeScript • WASM (reSID) • Rust • Axum • C++ • PlatformIO • MCP Server • Cloudflare Workers/D1/Vectorize