One of the first architectural decisions I made with WSL-UI was to build a complete mock mode. Not just for automated testing — though that's essential — but for development itself.
Why? Because I didn't want to accidentally delete my actual WSL distributions while debugging. And I wanted to test scenarios that are hard to reproduce with real distros, like network timeouts or corrupted registry entries.
## The Anti-Corruption Layer
The key insight came from Domain-Driven Design: the Anti-Corruption Layer pattern. Instead of calling `wsl.exe` directly from my command handlers, I created a layer of abstraction that could be swapped out at runtime.
In Rust, this meant defining traits for each external dependency:
```rust
pub trait WslCommandExecutor: Send + Sync {
    fn list_distributions(&self) -> Result<Vec<Distribution>, WslError>;
    fn start_distribution(&self, name: &str) -> Result<(), WslError>;
    fn stop_distribution(&self, name: &str) -> Result<(), WslError>;
    fn terminate_distribution(&self, name: &str) -> Result<(), WslError>;
    fn import_distribution(&self, name: &str, path: &Path, location: &Path)
        -> Result<(), WslError>;
    // ... more operations
}

pub trait ResourceMonitor: Send + Sync {
    fn get_memory_usage(&self, name: &str) -> Result<u64, WslError>;
    fn get_cpu_percentage(&self, name: &str) -> Result<f64, WslError>;
    fn get_vhdx_size(&self, name: &str) -> Result<u64, WslError>;
    fn get_registry_info(&self, name: &str) -> Result<RegistryInfo, WslError>;
}

pub trait TerminalExecutor: Send + Sync {
    fn open_terminal(&self, name: &str) -> Result<(), WslError>;
    fn execute_command(&self, name: &str, command: &str) -> Result<String, WslError>;
}
```
The real implementations call `wsl.exe`, read the Windows Registry, and interact with Windows Terminal. The mock implementations? They maintain internal state and return controlled responses.
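To make the swap concrete, here is a minimal sketch of the wiring: a command handler that depends only on the trait object, never on a concrete implementation. The types are simplified stand-ins, and `list_distributions_command` and `StubExecutor` are hypothetical names for illustration.

```rust
use std::sync::{Arc, OnceLock};

// Simplified stand-ins for the project's real types.
#[derive(Debug)]
pub struct Distribution { pub name: String }
#[derive(Debug)]
pub struct WslError(pub String);

pub trait WslCommandExecutor: Send + Sync {
    fn list_distributions(&self) -> Result<Vec<Distribution>, WslError>;
}

// Whichever implementation is registered at startup wins; handlers never
// know whether they are talking to wsl.exe or to a mock.
static WSL_EXECUTOR: OnceLock<Arc<dyn WslCommandExecutor>> = OnceLock::new();

// A stub implementation standing in for the mock executor.
pub struct StubExecutor;
impl WslCommandExecutor for StubExecutor {
    fn list_distributions(&self) -> Result<Vec<Distribution>, WslError> {
        Ok(vec![Distribution { name: "Ubuntu".to_string() }])
    }
}

// Hypothetical command handler: it resolves the trait object and
// delegates, with no knowledge of the concrete implementation.
pub fn list_distributions_command() -> Result<Vec<Distribution>, WslError> {
    let executor = WSL_EXECUTOR
        .get()
        .ok_or_else(|| WslError("executor not initialized".to_string()))?;
    executor.list_distributions()
}
```

Because the handler only ever sees `Arc<dyn WslCommandExecutor>`, switching between real and mock is a one-line change at initialization time.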
## The Mock Distro Menagerie
When mock mode is active, the app starts with a set of seven fake distributions:
| Name | Version | State | Install Source |
|---|---|---|---|
| Ubuntu | WSL2 | Running | Microsoft Store |
| Debian | WSL2 | Stopped | LXC Container |
| Alpine | WSL2 | Stopped | Container Import |
| Ubuntu-22.04 | WSL2 | Running | Download |
| Fedora | WSL2 | Running | Download |
| Arch | WSL2 | Stopped | Download |
| Ubuntu-legacy | WSL1 | Stopped | Legacy |
These aren't just names in a list. Each has simulated resource usage:
```rust
let (mock_memory, mock_cpu) = match distro {
    "Ubuntu" => (512_000_000, 2.5),       // ~512MB, 2.5% CPU
    "Ubuntu-22.04" => (384_000_000, 1.8), // ~384MB, 1.8% CPU
    "Debian" => (256_000_000, 0.5),       // ~256MB, 0.5% CPU
    "Alpine" => (64_000_000, 0.2),        // ~64MB, 0.2% CPU
    "Fedora" => (196_000_000, 1.2),       // ~196MB, 1.2% CPU
    _ => (128_000_000, 0.3),              // Default values
};
```
They have fake registry entries with realistic GUIDs, paths, and package information. They report disk sizes between 500MB and 8GB. The mock even simulates physical disks — a 500GB SSD and a 1TB HDD with partitions — for the disk mounting feature.
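Those "realistic GUIDs" don't need to be cryptographically random; they just need to look like registry-style GUIDs and stay stable. A dependency-free sketch of a generator (the real project may simply use the `uuid` crate; the seeded `generate_guid` here is my illustrative variant):

```rust
// Splitmix64: a tiny, well-known mixing function, used here only to
// expand a seed into pseudo-random bits without external crates.
fn splitmix64(x: u64) -> u64 {
    let mut z = x.wrapping_add(0x9E37_79B9_7F4A_7C15);
    z = (z ^ (z >> 30)).wrapping_mul(0xBF58_476D_1CE4_E5B9);
    z = (z ^ (z >> 27)).wrapping_mul(0x94D0_49BB_1331_11EB);
    z ^ (z >> 31)
}

// Produce a registry-style GUID: {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}.
// Seeding makes mock runs reproducible across restarts.
fn generate_guid(seed: u64) -> String {
    let a = splitmix64(seed);
    let b = splitmix64(a);
    let hex = format!("{a:016x}{b:016x}"); // 128 bits as 32 hex chars
    format!(
        "{{{}-{}-{}-{}-{}}}",
        &hex[0..8], &hex[8..12], &hex[12..16], &hex[16..20], &hex[20..32]
    )
}
```

Deterministic GUIDs have a nice side effect: E2E assertions can match on them without fixture churn.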
## State Management in Mock Mode
The mock isn't static. When you start a distribution, its state changes to "Running". When you stop it, it goes back to "Stopped". Create a new one, and it appears in the list. Delete one, and it's gone.
```rust
pub struct MockWslExecutor {
    distributions: Arc<Mutex<HashMap<String, MockDistribution>>>,
    default_distribution: Arc<Mutex<Option<String>>>,
    error_simulation: Arc<Mutex<Option<ErrorSimulation>>>,
}

impl MockWslExecutor {
    pub fn new() -> Self {
        let mut distros = HashMap::new();

        // Initialize with the 7 default distributions
        distros.insert("Ubuntu".to_string(), MockDistribution {
            guid: generate_guid(),
            name: "Ubuntu".to_string(),
            state: DistroState::Running,
            version: WslVersion::Wsl2,
            // ...
        });
        // ... more distros

        Self {
            distributions: Arc::new(Mutex::new(distros)),
            default_distribution: Arc::new(Mutex::new(Some("Ubuntu".to_string()))),
            error_simulation: Arc::new(Mutex::new(None)),
        }
    }
}
```
The `Arc<Mutex<...>>` pattern ensures thread safety — Tauri commands can be called from multiple threads, and the mock state needs to be consistent.
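Skipping the boilerplate, a state transition is just a mutation under the lock. A reduced sketch with only two distros, a simplified `String` error type, and a `state_of` helper I've added for inspection (none of these simplifications are in the real code):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

#[derive(Clone, Copy, Debug, PartialEq)]
pub enum DistroState { Running, Stopped }

pub struct MockDistribution { pub name: String, pub state: DistroState }

pub struct MockWslExecutor {
    distributions: Arc<Mutex<HashMap<String, MockDistribution>>>,
}

impl MockWslExecutor {
    pub fn new() -> Self {
        let mut distros = HashMap::new();
        for (name, state) in [("Ubuntu", DistroState::Running),
                              ("Debian", DistroState::Stopped)] {
            distros.insert(name.to_string(),
                MockDistribution { name: name.to_string(), state });
        }
        Self { distributions: Arc::new(Mutex::new(distros)) }
    }

    // Starting a distro flips its state under the lock; unknown names
    // produce the same kind of error the real executor would.
    pub fn start_distribution(&self, name: &str) -> Result<(), String> {
        let mut distros = self.distributions.lock().unwrap();
        match distros.get_mut(name) {
            Some(d) => { d.state = DistroState::Running; Ok(()) }
            None => Err(format!("distribution '{name}' not found")),
        }
    }

    // Inspection helper for tests.
    pub fn state_of(&self, name: &str) -> Option<DistroState> {
        self.distributions.lock().unwrap().get(name).map(|d| d.state)
    }
}
```

Stop, terminate, create, and delete all follow the same lock-mutate-release shape.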
## Simulating Failures
Testing the happy path is easy. Testing error handling is harder — unless you can make errors happen on demand.
The mock includes an error simulation system:
```rust
pub struct ErrorSimulation {
    pub operation: String,
    pub error_type: ErrorType,
    pub delay_ms: u64,
}

pub enum ErrorType {
    Timeout,
    CommandFailed,
    NotFound,
    Cancelled,
}
```
From the frontend (or E2E tests), I can tell the mock to fail the next operation:
```typescript
await invoke('set_mock_error', {
  operation: 'start_distribution',
  errorType: 'timeout',
  delayMs: 5000
});

// Now the next start_distribution call will time out
await invoke('start_distribution', { name: 'Ubuntu' });
// Throws a timeout error after 5 seconds
```
This let me test:
- Progress dialogs during slow operations
- Error notification display
- Retry logic
- Graceful degradation
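The retry logic itself isn't shown in the post, but a generic helper like the following is the kind of code the timeout simulation exercises: fail it N-1 times, then let it through. This `retry` function is a hypothetical sketch, not the project's implementation.

```rust
use std::thread;
use std::time::Duration;

// Run `op` up to `attempts` times (at least once), sleeping `delay`
// between failed attempts. Returns the first success or the last error.
fn retry<T, E>(
    attempts: u32,
    delay: Duration,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut result = op();
    for _ in 1..attempts {
        if result.is_ok() {
            return result;
        }
        thread::sleep(delay);
        result = op();
    }
    result
}
```

Paired with a one-shot injected failure, a single retry is enough to recover, so both the error path and the recovery path get covered in one test.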
## Activating Mock Mode
Mock mode is controlled by environment variables:
```bash
# Using either of these activates mock mode
WSL_MOCK=1
WSL_UI_MOCK_MODE=1
```
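A sketch of the corresponding check. The variable names come from the post; treating any non-empty value other than "0" as truthy is my assumption about how `is_mock_mode` behaves:

```rust
// Interpret an env var value: set, non-empty, and not "0" means "on".
fn is_truthy(value: Option<&str>) -> bool {
    matches!(value, Some(v) if !v.is_empty() && v != "0")
}

// Mock mode is on if either supported variable is truthy.
fn is_mock_mode() -> bool {
    ["WSL_MOCK", "WSL_UI_MOCK_MODE"]
        .into_iter()
        .any(|var| is_truthy(std::env::var(var).ok().as_deref()))
}
```

Keeping the truthiness rule in its own function makes it trivially testable without touching the process environment.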
On startup, the app checks these and initializes the appropriate implementations:
```rust
fn init_executors() {
    if crate::utils::is_mock_mode() {
        // Create and wire up mock implementations
        let wsl_mock = Arc::new(MockWslExecutor::new());
        WSL_EXECUTOR.get_or_init(|| wsl_mock.clone());

        let resource_mock = MockResourceMonitor::with_wsl_mock(wsl_mock.clone());
        RESOURCE_MONITOR.get_or_init(|| Arc::new(resource_mock));

        let terminal_mock = MockTerminalExecutor::new();
        TERMINAL_EXECUTOR.get_or_init(|| Arc::new(terminal_mock));
    } else {
        // Use real implementations
        WSL_EXECUTOR.get_or_init(|| Arc::new(RealWslExecutor));
        RESOURCE_MONITOR.get_or_init(|| Arc::new(RealResourceMonitor));
        TERMINAL_EXECUTOR.get_or_init(|| Arc::new(RealTerminalExecutor));
    }
}
```
## Frontend Test Utilities
The mock mode isn't just for the backend. The frontend exposes Zustand stores on the window object during E2E tests:
```typescript
// In development/test mode
if (import.meta.env.DEV || import.meta.env.MODE === 'test') {
  (window as any).__distroStore = useDistroStore;
  (window as any).__notificationStore = useNotificationStore;
}
```
This lets E2E tests directly inspect and manipulate application state:
```typescript
// In a WebdriverIO test
const store = await browser.execute(() => window.__distroStore.getState());
expect(store.distributions).toHaveLength(7);

// Reset to initial state between tests
await browser.execute(() => window.__distroStore.getState().reset());
```
## The Benefits
Building the mock mode took significant effort — probably 15-20% of the total project time. Was it worth it?
Absolutely.
- **Faster development** — No waiting for real WSL operations. Starting a distribution takes milliseconds instead of seconds.
- **Safe experimentation** — I could test destructive operations (delete, format) without risk.
- **Reproducible tests** — E2E tests run against identical initial state every time.
- **Offline development** — No need for actual WSL distributions to be installed.
- **Edge case coverage** — Easy to test scenarios like "what if the user has 50 distributions?"
- **CI/CD friendly** — Tests run in GitHub Actions on a clean Windows runner with no WSL setup required.
## Lessons Learned
If I were doing this again, I'd start with the mock mode even earlier. The abstraction layer pays dividends throughout development, not just in testing.
A few things I'd do differently:
- **More realistic timing** — The mock is too fast. Real WSL operations have noticeable latency. Adding configurable delays would make the development experience more representative.
- **Persistence option** — Currently the mock resets on app restart. An option to persist mock state to a file would be useful for longer testing sessions.
- **Fuzz testing** — Random operation sequences to find edge cases. The infrastructure is there; I just need to write the tests.
Next up in this series: the gnarly details of renaming distributions, including the Windows Registry changes required to make it work properly.
## Try It Yourself
WSL-UI is open source and available on:
- Microsoft Store: Search for "WSL UI" or visit the store listing
- Direct Download: Official Website
- GitHub: github.com/octasoft-ltd/wsl-ui



