Why Testing Mobile Apps Is a Different Beast Entirely
Let me be honest with you: testing mobile applications is harder than testing web apps or APIs. I know that sounds like a convenient excuse for skipping tests, but hear me out — the difficulty is real, and understanding it is the first step toward building a strategy that actually works.
Mobile apps face a unique cocktail of challenges that server-side code simply doesn't deal with. Your app has to behave correctly across a fragmented landscape of devices with different screen sizes, OS versions, hardware capabilities, and manufacturer-specific quirks. It has to handle lifecycle events gracefully — the user switches to another app, an incoming phone call interrupts your carefully orchestrated data sync, the OS kills your process to reclaim memory. And through all of that, the UI has to remain responsive and correct.
Oh, and your app also has to work when the network is fast, slow, intermittent, or completely absent. No pressure.
If you ask me, the biggest reason so many .NET MAUI teams ship with poor test coverage isn't laziness — it's confusion. The tooling landscape is genuinely fragmented. Should you use xUnit or NUnit? What about device runner tests versus standard unit tests? Is Appium still the way to go for UI testing, or did something better come along? How do you even reference a .NET MAUI project from a plain test project without pulling in platform-specific targets? These are real questions, and the documentation doesn't always connect the dots clearly.
This guide is the article I wish existed when I started taking .NET MAUI testing seriously. We'll build up a complete, practical testing strategy layer by layer — from fast unit tests that run in milliseconds, through integration tests that verify real behavior, all the way to UI automation that proves your app works from the user's perspective. Along the way, I'll share the patterns that have earned their place in production and the pitfalls that have cost me hours of debugging.
Structuring Your .NET MAUI Solution for Testability
Before writing a single test, you need a solution structure that makes testing natural rather than painful. Here's the thing: if your code is hard to test, that's not a testing problem — it's a design problem. And with .NET MAUI, the design decisions you make early on determine whether testing will be a smooth experience or a constant uphill battle.
MVVM as the Foundation
The Model-View-ViewModel pattern isn't just an architectural nicety in .NET MAUI — it's the single most important enabler of testable code. When your ViewModels contain the application logic and your Views are thin XAML shells that data-bind to those ViewModels, you can test the vast majority of your app's behavior without ever touching the UI layer.
Your ViewModels don't inherit from any platform class, don't depend on the MAUI runtime, and can run perfectly well in a plain .NET test project. That's a huge deal.
Interface-Driven Service Design
Every service your ViewModel depends on should be accessed through an interface. Navigation, API calls, local storage, geolocation, connectivity — all of it. This isn't busywork; it's what lets you swap real implementations for test doubles during testing. When your LoginViewModel depends on IAuthenticationService instead of a concrete AuthenticationService, you can test the login flow without making real HTTP calls.
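To make that concrete, here is a minimal sketch of what those contracts might look like. The exact member shapes are my assumption (adapt them to your API), but they line up with how the LoginViewModel example in this article consumes them:

```csharp
// Hypothetical service contracts for the login flow described above.
public interface IAuthenticationService
{
    Task<AuthResult> LoginAsync(string email, string password);
}

public interface INavigationService
{
    Task NavigateToAsync(string route);
}

public interface IConnectivityService
{
    bool HasInternetAccess { get; }
}

// A minimal result type so callers can branch on success/failure
// without catching exceptions for expected outcomes.
public record AuthResult(bool IsSuccess, string? ErrorMessage)
{
    public static AuthResult Success() => new(true, null);
    public static AuthResult Failure(string message) => new(false, message);
}
```

With contracts like these, the test project never needs a concrete AuthenticationService, only a substitute that returns canned AuthResult values.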
Separating Platform Code from Business Logic
Keep your business logic in code that has zero awareness of the platform it's running on. The moment your service class references Android.Content.Context or UIKit.UIApplication, you've made it untestable in a standard unit test project. Instead, define platform abstractions through interfaces and let the DI container wire up the real implementations at runtime.
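For instance, rather than letting business logic touch platform file APIs directly, you might hide them behind an interface. The interface and class names below are illustrative; FileSystem.AppDataDirectory is the real MAUI storage abstraction:

```csharp
// Business logic depends only on this interface, so it stays testable
// in a plain .NET project. (IFileStore is a hypothetical name.)
public interface IFileStore
{
    Task SaveTextAsync(string fileName, string content);
}

// The implementation lives in the MAUI app project, where platform
// awareness is allowed, and gets registered in the DI container.
public class FileStore : IFileStore
{
    public Task SaveTextAsync(string fileName, string content)
    {
        // FileSystem.AppDataDirectory resolves to the correct
        // platform-specific app data path at runtime.
        var path = Path.Combine(FileSystem.AppDataDirectory, fileName);
        return File.WriteAllTextAsync(path, content);
    }
}
```

In unit tests, a trivial in-memory IFileStore fake replaces the real one, and no test ever hits the device file system.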
Recommended Solution Structure
Here's the solution layout I've settled on after several production projects. It keeps concerns cleanly separated and makes each test project's purpose immediately obvious:
MyApp.sln
│
├── src/
│   ├── MyApp/                              # .NET MAUI app project
│   │   ├── Platforms/                      # Platform-specific code
│   │   ├── Views/                          # XAML pages
│   │   ├── MauiProgram.cs                  # DI registration
│   │   └── MyApp.csproj                    # Multi-targeted: net9.0-android, net9.0-ios, etc.
│   │
│   └── MyApp.Core/                         # Class library for shared logic
│       ├── Models/
│       ├── ViewModels/
│       ├── Services/
│       │   ├── Interfaces/
│       │   └── Implementations/
│       └── MyApp.Core.csproj               # Target: net9.0
│
├── tests/
│   ├── MyApp.UnitTests/                    # Fast unit tests (xUnit)
│   │   └── MyApp.UnitTests.csproj          # Target: net9.0
│   │
│   ├── MyApp.IntegrationTests/             # Integration tests
│   │   └── MyApp.IntegrationTests.csproj
│   │
│   ├── MyApp.DeviceTests/                  # Device runner tests (runs on device/emulator)
│   │   └── MyApp.DeviceTests.csproj        # Multi-targeted like the MAUI app
│   │
│   └── MyApp.UITests/                      # Appium UI tests
│       └── MyApp.UITests.csproj            # Target: net9.0
The key insight here is the MyApp.Core class library. By putting your ViewModels, services, and models in a project that targets plain net9.0, your unit test project can reference it without any MAUI complications. Your unit tests stay fast, your project references stay simple, and you avoid the headache of trying to build a multi-targeted project just to run some tests.
If you prefer keeping everything in the MAUI project (some teams find the extra project overhead not worth it for smaller apps), you can still make it work — but you'll need to be more careful with your test project configuration, which we'll cover in the next section.
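To make the wiring concrete, here is a sketch of how MauiProgram.cs in the MAUI app project might register the Core library's services and ViewModels. The names follow the structure above; treat the exact registrations and lifetimes as illustrative:

```csharp
// Sketch of MauiProgram.cs in src/MyApp, wiring up types from MyApp.Core.
public static class MauiProgram
{
    public static MauiApp CreateMauiApp()
    {
        var builder = MauiApp.CreateBuilder();
        builder.UseMauiApp<App>();

        // Services defined in MyApp.Core behind interfaces.
        builder.Services.AddSingleton<IConnectivityService, ConnectivityService>();
        builder.Services.AddTransient<IAuthenticationService, AuthenticationService>();
        builder.Services.AddTransient<INavigationService, NavigationService>();

        // ViewModels, plus the pages that bind to them.
        builder.Services.AddTransient<LoginViewModel>();
        builder.Services.AddTransient<LoginPage>();

        return builder.Build();
    }
}
```

Because all of these types except the pages live in MyApp.Core, the unit test project can construct and exercise them without referencing the MAUI app at all.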
Unit Testing ViewModels and Services with xUnit
Unit tests are the foundation of your testing strategy. They're fast, reliable, cheap to write, and they catch the majority of bugs before they ever reach a device. For .NET MAUI apps, that means testing your ViewModels, services, and business logic in a project that runs entirely on your development machine — no simulators, no emulators, no devices.
Setting Up the Test Project
Here's the project file for a unit test project that can reference your MAUI application. The critical detail is setting UseMaui to false and targeting plain net9.0. If you're referencing the MyApp.Core class library from the solution structure above, you won't need the UseMaui property at all — but if you're referencing the MAUI app project directly, this configuration is essential:
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net9.0</TargetFramework>
    <UseMaui>false</UseMaui>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
    <IsPackable>false</IsPackable>
    <IsTestProject>true</IsTestProject>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.12.0" />
    <PackageReference Include="xunit" Version="2.9.3" />
    <PackageReference Include="xunit.runner.visualstudio" Version="2.8.2" />
    <PackageReference Include="NSubstitute" Version="5.3.0" />
    <PackageReference Include="FluentAssertions" Version="7.0.0" />
    <PackageReference Include="coverlet.collector" Version="6.0.4" />
  </ItemGroup>

  <ItemGroup>
    <ProjectReference Include="..\..\src\MyApp.Core\MyApp.Core.csproj" />
  </ItemGroup>
</Project>
I'm using NSubstitute here instead of Moq. Both are excellent mocking libraries, but NSubstitute's syntax reads more naturally to me — and honestly, it avoids some of the reflection-based concerns that surfaced in the Moq ecosystem a while back. Use whichever you're comfortable with; the principles are identical.
Testing a ViewModel: The LoginViewModel Example
Let's work through a realistic example. Here's a LoginViewModel that handles user authentication with proper validation, async command execution, and navigation:
public class LoginViewModel : ObservableObject
{
    private readonly IAuthenticationService _authService;
    private readonly INavigationService _navigationService;
    private readonly IConnectivityService _connectivityService;

    private string _email = string.Empty;
    private string _password = string.Empty;
    private string? _errorMessage;
    private bool _isBusy;

    public LoginViewModel(
        IAuthenticationService authService,
        INavigationService navigationService,
        IConnectivityService connectivityService)
    {
        _authService = authService;
        _navigationService = navigationService;
        _connectivityService = connectivityService;
        LoginCommand = new AsyncRelayCommand(ExecuteLoginAsync, CanExecuteLogin);
    }

    public string Email
    {
        get => _email;
        set
        {
            if (SetProperty(ref _email, value))
                LoginCommand.NotifyCanExecuteChanged();
        }
    }

    public string Password
    {
        get => _password;
        set
        {
            if (SetProperty(ref _password, value))
                LoginCommand.NotifyCanExecuteChanged();
        }
    }

    public string? ErrorMessage
    {
        get => _errorMessage;
        set => SetProperty(ref _errorMessage, value);
    }

    public bool IsBusy
    {
        get => _isBusy;
        set => SetProperty(ref _isBusy, value);
    }

    public IAsyncRelayCommand LoginCommand { get; }

    private bool CanExecuteLogin() =>
        !string.IsNullOrWhiteSpace(Email) &&
        !string.IsNullOrWhiteSpace(Password) &&
        !IsBusy;

    private async Task ExecuteLoginAsync()
    {
        if (!_connectivityService.HasInternetAccess)
        {
            ErrorMessage = "No internet connection. Please check your network.";
            return;
        }

        try
        {
            IsBusy = true;
            ErrorMessage = null;

            var result = await _authService.LoginAsync(Email, Password);

            if (result.IsSuccess)
                await _navigationService.NavigateToAsync("//main");
            else
                ErrorMessage = result.ErrorMessage;
        }
        catch (Exception)
        {
            ErrorMessage = "An unexpected error occurred. Please try again.";
        }
        finally
        {
            IsBusy = false;
        }
    }
}
Now let's test it. Notice how each test focuses on a single behavior and uses the Arrange-Act-Assert pattern:
public class LoginViewModelTests
{
    private readonly IAuthenticationService _authService;
    private readonly INavigationService _navigationService;
    private readonly IConnectivityService _connectivityService;
    private readonly LoginViewModel _sut;

    public LoginViewModelTests()
    {
        _authService = Substitute.For<IAuthenticationService>();
        _navigationService = Substitute.For<INavigationService>();
        _connectivityService = Substitute.For<IConnectivityService>();
        _connectivityService.HasInternetAccess.Returns(true);
        _sut = new LoginViewModel(_authService, _navigationService, _connectivityService);
    }

    [Fact]
    public void LoginCommand_CannotExecute_WhenEmailIsEmpty()
    {
        _sut.Email = "";
        _sut.Password = "validpassword";

        _sut.LoginCommand.CanExecute(null).Should().BeFalse();
    }

    [Fact]
    public void LoginCommand_CanExecute_WhenBothFieldsArePopulated()
    {
        _sut.Email = "user@example.com";
        _sut.Password = "securepassword";

        _sut.LoginCommand.CanExecute(null).Should().BeTrue();
    }

    [Fact]
    public async Task LoginCommand_NavigatesToMain_OnSuccessfulLogin()
    {
        _sut.Email = "user@example.com";
        _sut.Password = "securepassword";
        _authService.LoginAsync("user@example.com", "securepassword")
            .Returns(AuthResult.Success());

        await _sut.LoginCommand.ExecuteAsync(null);

        await _navigationService.Received(1).NavigateToAsync("//main");
    }

    [Fact]
    public async Task LoginCommand_SetsErrorMessage_OnFailedLogin()
    {
        _sut.Email = "user@example.com";
        _sut.Password = "wrongpassword";
        _authService.LoginAsync("user@example.com", "wrongpassword")
            .Returns(AuthResult.Failure("Invalid credentials."));

        await _sut.LoginCommand.ExecuteAsync(null);

        _sut.ErrorMessage.Should().Be("Invalid credentials.");
        await _navigationService.DidNotReceive().NavigateToAsync(Arg.Any<string>());
    }

    [Fact]
    public async Task LoginCommand_SetsErrorMessage_WhenOffline()
    {
        _sut.Email = "user@example.com";
        _sut.Password = "securepassword";
        _connectivityService.HasInternetAccess.Returns(false);

        await _sut.LoginCommand.ExecuteAsync(null);

        _sut.ErrorMessage.Should().Contain("No internet connection");
        await _authService.DidNotReceive().LoginAsync(Arg.Any<string>(), Arg.Any<string>());
    }

    [Fact]
    public async Task LoginCommand_SetsIsBusy_DuringExecution()
    {
        var busyStates = new List<bool>();
        _sut.PropertyChanged += (s, e) =>
        {
            if (e.PropertyName == nameof(LoginViewModel.IsBusy))
                busyStates.Add(_sut.IsBusy);
        };
        _sut.Email = "user@example.com";
        _sut.Password = "securepassword";
        _authService.LoginAsync(Arg.Any<string>(), Arg.Any<string>())
            .Returns(AuthResult.Success());

        await _sut.LoginCommand.ExecuteAsync(null);

        busyStates.Should().ContainInOrder(true, false);
    }

    [Fact]
    public async Task LoginCommand_HandlesException_Gracefully()
    {
        _sut.Email = "user@example.com";
        _sut.Password = "securepassword";
        _authService.LoginAsync(Arg.Any<string>(), Arg.Any<string>())
            .ThrowsAsync(new HttpRequestException("Server unreachable"));

        await _sut.LoginCommand.ExecuteAsync(null);

        _sut.ErrorMessage.Should().Contain("unexpected error");
        _sut.IsBusy.Should().BeFalse();
    }
}
Look at what we achieved here: we tested validation logic, successful login, failed login, offline behavior, busy state management, and exception handling — all without a device, a simulator, or even a real HTTP connection. These tests run in a few milliseconds each.
That's the power of properly structured MVVM code. It's almost boring how well it works (and I mean that as a compliment).
Testing Platform Services and Dependency Injection
.NET MAUI's built-in dependency injection makes testability dramatically better than the old Xamarin.Forms approach. Every service you register in MauiProgram.cs can be swapped with a test double, and the platform abstractions that ship with MAUI Essentials (IConnectivity, IGeolocation, ISecureStorage, and friends) already come as interfaces — which is honestly a gift for testing.
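Because those abstractions expose singleton accessors, registering the real implementations in MauiProgram.cs is a one-liner each. These are the actual MAUI APIs:

```csharp
// In MauiProgram.CreateMauiApp(), register the Essentials singletons
// behind their interfaces so ViewModels and services can depend on
// IConnectivity/IGeolocation/ISecureStorage and be tested with fakes.
builder.Services.AddSingleton<IConnectivity>(Connectivity.Current);
builder.Services.AddSingleton<IGeolocation>(Geolocation.Default);
builder.Services.AddSingleton<ISecureStorage>(SecureStorage.Default);
```

At test time, nothing resolves these registrations; your test doubles take their place, as the examples below show.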
Creating Test Doubles for Platform Services
When your service depends on something like IConnectivity or IGeolocation, you don't need the actual device APIs during testing. You can either use a mocking library or create explicit fakes. Here's an example of testing a service that uses connectivity and geolocation:
public class LocationTrackingService : ILocationTrackingService
{
    private readonly IGeolocation _geolocation;
    private readonly IConnectivity _connectivity;
    private readonly ILocationRepository _repository;

    public LocationTrackingService(
        IGeolocation geolocation,
        IConnectivity connectivity,
        ILocationRepository repository)
    {
        _geolocation = geolocation;
        _connectivity = connectivity;
        _repository = repository;
    }

    public async Task<TrackingResult> RecordCurrentLocationAsync()
    {
        try
        {
            var location = await _geolocation.GetLocationAsync(
                new GeolocationRequest(GeolocationAccuracy.High, TimeSpan.FromSeconds(10)));

            if (location is null)
                return TrackingResult.Failure("Could not determine location.");

            var record = new LocationRecord
            {
                Latitude = location.Latitude,
                Longitude = location.Longitude,
                Timestamp = location.Timestamp,
                IsSynced = _connectivity.NetworkAccess == NetworkAccess.Internet
            };

            await _repository.SaveAsync(record);
            return TrackingResult.Success(record);
        }
        catch (PermissionException)
        {
            return TrackingResult.Failure("Location permission denied.");
        }
    }
}

public class LocationTrackingServiceTests
{
    [Fact]
    public async Task RecordCurrentLocation_SavesWithSyncedTrue_WhenOnline()
    {
        var geolocation = Substitute.For<IGeolocation>();
        var connectivity = Substitute.For<IConnectivity>();
        var repository = Substitute.For<ILocationRepository>();
        geolocation.GetLocationAsync(Arg.Any<GeolocationRequest>())
            .Returns(new Location(47.6062, -122.3321));
        connectivity.NetworkAccess.Returns(NetworkAccess.Internet);
        var service = new LocationTrackingService(geolocation, connectivity, repository);

        var result = await service.RecordCurrentLocationAsync();

        result.IsSuccess.Should().BeTrue();
        await repository.Received(1).SaveAsync(
            Arg.Is<LocationRecord>(r => r.IsSynced == true));
    }

    [Fact]
    public async Task RecordCurrentLocation_SavesWithSyncedFalse_WhenOffline()
    {
        var geolocation = Substitute.For<IGeolocation>();
        var connectivity = Substitute.For<IConnectivity>();
        var repository = Substitute.For<ILocationRepository>();
        geolocation.GetLocationAsync(Arg.Any<GeolocationRequest>())
            .Returns(new Location(47.6062, -122.3321));
        connectivity.NetworkAccess.Returns(NetworkAccess.None);
        var service = new LocationTrackingService(geolocation, connectivity, repository);

        var result = await service.RecordCurrentLocationAsync();

        await repository.Received(1).SaveAsync(
            Arg.Is<LocationRecord>(r => r.IsSynced == false));
    }
}
Integration Testing with a Real DI Container
Sometimes you want to verify that your DI registrations actually resolve correctly — that all the dependencies chain together without missing registrations. You can create a lightweight integration test that builds the real service provider:
[Fact]
public void AllViewModels_CanBeResolved_FromServiceProvider()
{
    var services = new ServiceCollection();

    // Register services the same way MauiProgram.cs does
    services.AddTransient<IAuthenticationService, AuthenticationService>();
    services.AddTransient<INavigationService, NavigationService>();
    services.AddSingleton<IConnectivityService, ConnectivityService>();
    services.AddTransient<LoginViewModel>();
    services.AddTransient<MainViewModel>();

    var provider = services.BuildServiceProvider();

    // This will throw if any dependency is missing
    var loginVm = provider.GetRequiredService<LoginViewModel>();
    var mainVm = provider.GetRequiredService<MainViewModel>();

    loginVm.Should().NotBeNull();
    mainVm.Should().NotBeNull();
}
This kind of test catches registration mistakes that would otherwise only surface at runtime when a user navigates to a particular page. Simple, fast, and — I can't stress this enough — surprisingly valuable. I've lost count of the number of times a DI resolution test has saved me from a runtime crash.
Device Runner Tests: Running Tests on Actual Devices
Standard unit tests run on your development machine and can't access any platform APIs. That's usually exactly what you want — but sometimes you need to test code that truly depends on the platform. Things like file system operations with platform-specific paths, secure storage, biometric authentication, or platform-specific rendering behavior.
That's where device runner tests come in.
What Are Device Runner Tests?
A device runner test project is a full .NET MAUI application that hosts a test runner inside it. When you deploy it to a device or emulator, it runs your test suite directly on that platform, giving your test code access to the actual platform APIs, the real file system, and the genuine device capabilities. Think of it as your regular test project, but running as an app on a phone.
Setting Up the Device Test Project
The .NET MAUI template includes a device test project option. The project file looks like a standard MAUI project but with test runner packages included:
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFrameworks>net9.0-android;net9.0-ios;net9.0-maccatalyst</TargetFrameworks>
    <OutputType>Exe</OutputType>
    <UseMaui>true</UseMaui>
    <SingleProject>true</SingleProject>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.Maui.Controls" Version="$(MauiVersion)" />
    <PackageReference Include="xunit" Version="2.9.3" />
    <PackageReference Include="xunit.runner.devices.maui" Version="0.6.0" />
  </ItemGroup>

  <ItemGroup>
    <ProjectReference Include="..\..\src\MyApp\MyApp.csproj" />
  </ItemGroup>
</Project>
The MauiProgram.cs in your device test project configures the test runner:
public static class MauiProgram
{
    public static MauiApp CreateMauiApp()
    {
        return MauiApp.CreateBuilder()
            .ConfigureTests(new TestOptions
            {
                Assemblies = { typeof(MauiProgram).Assembly }
            })
            .UseVisualRunner()
            .Build();
    }
}
When Device Tests Are Worth the Overhead
Here's my honest take: device tests are slow to build, slow to deploy, and slow to run. They require an active emulator or a connected device. They add complexity to your CI pipeline. For the vast majority of your application logic, standard unit tests are the right choice.
But device tests earn their keep in specific scenarios: verifying that secure storage works correctly with the platform keychain, testing file I/O with platform-specific paths, confirming that your SQLite database works on-device with real file system constraints, or validating behavior that differs between Android and iOS. Don't default to device tests — reach for them deliberately when you need to exercise code that truly cannot run outside a platform context.
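As one example, here's the kind of test that genuinely belongs in a device runner project: a secure-storage round trip against the real platform keychain/keystore. SecureStorage is the actual MAUI Essentials API; the test class itself is just a sketch:

```csharp
public class SecureStorageDeviceTests
{
    [Fact]
    public async Task SecureStorage_RoundTrips_AValue()
    {
        // Runs against the real keychain (iOS) or keystore (Android),
        // which is exactly what a host-machine unit test cannot exercise.
        await SecureStorage.Default.SetAsync("device-test-key", "secret-value");

        var stored = await SecureStorage.Default.GetAsync("device-test-key");

        Assert.Equal("secret-value", stored);

        // Clean up so the test is repeatable on the same device.
        SecureStorage.Default.Remove("device-test-key");
    }
}
```

A mocked ISecureStorage would make this test pass trivially on any machine; running it on-device is the whole point.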
UI Testing with Appium
Unit tests verify that your logic is correct. Integration tests verify that your components work together. But neither of them proves that your user can actually tap a button, see the expected screen, and complete a task. For that, you need UI automation — and for .NET MAUI, Appium is the recommended path.
Why Appium?
Appium is an open-source test automation framework that uses the WebDriver protocol to drive native mobile applications. It supports both Android and iOS from a single test project, the .NET MAUI team has explicitly endorsed it as the UI testing solution, and it has a mature ecosystem of drivers, tools, and community support.
It's not perfect — UI tests are inherently slower and more brittle than unit tests — but Appium is the most reliable option we have right now for cross-platform UI automation of .NET MAUI apps.
Setting Up Appium
You'll need the Appium server installed, along with platform-specific drivers. Here's the setup:
# Install Appium globally
npm install -g appium
# Install the platform drivers
appium driver install uiautomator2 # For Android
appium driver install xcuitest # For iOS
# Verify the installation
appium driver list --installed
Your Appium test project is a standard .NET test project — no MAUI references needed, since Appium communicates with your app through the automation server, not through code references:
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net9.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
    <IsPackable>false</IsPackable>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Appium.WebDriver" Version="5.1.0" />
    <PackageReference Include="NUnit" Version="4.3.2" />
    <PackageReference Include="NUnit3TestAdapter" Version="4.6.0" />
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.12.0" />
  </ItemGroup>
</Project>
AutomationId: The Key to Reliable Element Selection
Here's something that will save you hours of frustration: always set AutomationId on every element you need to interact with in tests. Don't rely on text content, class names, or XPath queries — those are fragile and break every time you change a label. AutomationId maps directly to the platform's accessibility identifier and gives you a stable, cross-platform handle for element selection.
I learned this the hard way after refactoring a button label and watching 15 UI tests fail simultaneously. Not fun.
<!-- In your XAML -->
<Entry AutomationId="EmailEntry" Placeholder="Email" />
<Entry AutomationId="PasswordEntry" Placeholder="Password" IsPassword="True" />
<Button AutomationId="LoginButton" Text="Sign In" Command="{Binding LoginCommand}" />
<Label AutomationId="ErrorLabel" Text="{Binding ErrorMessage}" />
Writing Cross-Platform UI Tests
Here's a base class that handles the driver setup for both platforms, and a concrete test that exercises the login flow:
public abstract class BaseUITest
{
    protected AppiumDriver Driver { get; private set; } = null!;

    protected abstract AppiumOptions GetPlatformOptions();

    [SetUp]
    public void SetUp()
    {
        var options = GetPlatformOptions();
        Driver = new AppiumDriver(new Uri("http://localhost:4723"), options);
        Driver.Manage().Timeouts().ImplicitWait = TimeSpan.FromSeconds(10);
    }

    [TearDown]
    public void TearDown()
    {
        Driver?.Quit();
    }

    protected AppiumElement FindByAutomationId(string id) =>
        Driver.FindElement(MobileBy.Id(id));

    protected void WaitForElement(string automationId, int timeoutSeconds = 15)
    {
        var wait = new WebDriverWait(Driver, TimeSpan.FromSeconds(timeoutSeconds));
        wait.Until(d => d.FindElement(MobileBy.Id(automationId)).Displayed);
    }
}

public class AndroidUITest : BaseUITest
{
    protected override AppiumOptions GetPlatformOptions()
    {
        var options = new AppiumOptions();
        options.PlatformName = "Android";
        options.AutomationName = "UiAutomator2";
        options.AddAdditionalAppiumOption("app",
            "/path/to/com.mycompany.myapp-Signed.apk");
        options.AddAdditionalAppiumOption("appPackage", "com.mycompany.myapp");
        options.AddAdditionalAppiumOption("appActivity",
            "crc64hash.MainActivity");
        return options;
    }
}

public class iOSUITest : BaseUITest
{
    protected override AppiumOptions GetPlatformOptions()
    {
        var options = new AppiumOptions();
        options.PlatformName = "iOS";
        options.AutomationName = "XCUITest";
        options.AddAdditionalAppiumOption("app", "/path/to/MyApp.app");
        options.AddAdditionalAppiumOption("platformVersion", "18.0");
        options.AddAdditionalAppiumOption("deviceName", "iPhone 16");
        return options;
    }
}

[TestFixture]
public class LoginUITests : AndroidUITest // or iOSUITest
{
    [Test]
    public void SuccessfulLogin_NavigatesToMainPage()
    {
        // Arrange: Wait for login page to load
        WaitForElement("EmailEntry");

        // Act: Enter credentials and tap login
        var emailEntry = FindByAutomationId("EmailEntry");
        emailEntry.Clear();
        emailEntry.SendKeys("testuser@example.com");

        var passwordEntry = FindByAutomationId("PasswordEntry");
        passwordEntry.Clear();
        passwordEntry.SendKeys("Test1234!");

        var loginButton = FindByAutomationId("LoginButton");
        loginButton.Click();

        // Assert: Main page should be visible
        WaitForElement("MainPageTitle", timeoutSeconds: 20);
        var title = FindByAutomationId("MainPageTitle");
        Assert.That(title.Text, Is.EqualTo("Dashboard"));
    }

    [Test]
    public void InvalidCredentials_ShowsErrorMessage()
    {
        WaitForElement("EmailEntry");

        var emailEntry = FindByAutomationId("EmailEntry");
        emailEntry.Clear();
        emailEntry.SendKeys("wrong@example.com");

        var passwordEntry = FindByAutomationId("PasswordEntry");
        passwordEntry.Clear();
        passwordEntry.SendKeys("badpassword");

        FindByAutomationId("LoginButton").Click();

        WaitForElement("ErrorLabel", timeoutSeconds: 15);
        var errorLabel = FindByAutomationId("ErrorLabel");
        Assert.That(errorLabel.Text, Does.Contain("Invalid"));
    }
}
Dealing with Flaky Tests
UI tests are notorious for flakiness, and honestly, most of that flakiness comes from timing issues. The test expects an element to be present, but the animation hasn't finished yet, or the network response is a few hundred milliseconds slower than usual. Here are the strategies that actually help:
- Use explicit waits, not Thread.Sleep. A WebDriverWait with a condition is almost always better than a fixed delay: it waits only as long as necessary and fails fast when something is genuinely wrong.
- Set reasonable implicit wait timeouts. A 10-15 second implicit wait catches most normal delays without making your test suite unbearably slow.
- Avoid testing animations. Disable animations on your test devices (Android has a developer option for this, and you can set it programmatically in your test setup).
- Isolate test data. Each test should set up its own state and not depend on the result of a previous test. Reset your app state between tests if needed.
- Retry flaky tests with caution. Some teams add a retry attribute to UI tests, which is fine as a safety net — but if a test fails frequently, fix the root cause instead of papering over it with retries.
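On the animations point: for Android, you can disable system animations from the command line before a test run. These are standard adb settings commands; run them against whichever emulator or device your suite targets:

```shell
# Disable all system animations on the connected Android device/emulator,
# removing a common source of timing-related UI test flakiness.
adb shell settings put global window_animation_scale 0
adb shell settings put global transition_animation_scale 0
adb shell settings put global animator_duration_scale 0
```

Restore the scales to 1 afterwards if the device is also used for manual testing, since disabled animations change how the app feels.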
Integration Testing Patterns for Mobile
Between unit tests and full UI automation, there's a valuable middle ground: integration tests that verify how multiple components work together without driving the actual UI. These tests are faster than Appium tests, more realistic than unit tests with mocks, and they catch a whole category of bugs that falls through the cracks of both.
Testing HTTP Services with Handler Mocking
Instead of mocking your entire API service interface, you can test the real service implementation with a fake HTTP handler. This verifies your serialization, error handling, and URL construction — all the real code runs, only the actual network call is faked:
public class ApiServiceIntegrationTests
{
    [Fact]
    public async Task GetUserProfile_DeserializesCorrectly()
    {
        var responseJson = """
            {
                "id": 42,
                "email": "jane@example.com",
                "displayName": "Jane Developer",
                "avatarUrl": "https://cdn.example.com/avatars/42.jpg"
            }
            """;
        var handler = new MockHttpMessageHandler(responseJson, HttpStatusCode.OK);
        var httpClient = new HttpClient(handler)
        {
            BaseAddress = new Uri("https://api.example.com/")
        };
        var service = new ApiService(httpClient);

        var profile = await service.GetUserProfileAsync(42);

        profile.Should().NotBeNull();
        profile!.Email.Should().Be("jane@example.com");
        profile.DisplayName.Should().Be("Jane Developer");
    }

    [Fact]
    public async Task GetUserProfile_ReturnsNull_On404()
    {
        var handler = new MockHttpMessageHandler("", HttpStatusCode.NotFound);
        var httpClient = new HttpClient(handler)
        {
            BaseAddress = new Uri("https://api.example.com/")
        };
        var service = new ApiService(httpClient);

        var profile = await service.GetUserProfileAsync(999);

        profile.Should().BeNull();
    }
}

public class MockHttpMessageHandler : HttpMessageHandler
{
    private readonly string _responseContent;
    private readonly HttpStatusCode _statusCode;

    public MockHttpMessageHandler(string responseContent, HttpStatusCode statusCode)
    {
        _responseContent = responseContent;
        _statusCode = statusCode;
    }

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var response = new HttpResponseMessage(_statusCode)
        {
            Content = new StringContent(_responseContent, Encoding.UTF8, "application/json")
        };
        return Task.FromResult(response);
    }
}
Testing Offline Scenarios and Sync Logic
Offline-first apps need tests that simulate connectivity transitions. This is one area where I've seen teams get burned repeatedly — the sync logic works fine in manual testing because you never think to toggle airplane mode at just the wrong moment. But your users will.
You can create a controllable connectivity stub and drive it through different states during a test:
public class FakeConnectivityService : IConnectivityService
{
    public bool HasInternetAccess { get; set; } = true;

    public event EventHandler<ConnectivityChangedEventArgs>? ConnectivityChanged;

    public void SimulateGoOffline()
    {
        HasInternetAccess = false;
        ConnectivityChanged?.Invoke(this, new ConnectivityChangedEventArgs(false));
    }

    public void SimulateGoOnline()
    {
        HasInternetAccess = true;
        ConnectivityChanged?.Invoke(this, new ConnectivityChangedEventArgs(true));
    }
}

[Fact]
public async Task SyncService_QueuesOperations_WhenOffline()
{
    var connectivity = new FakeConnectivityService();
    var repository = new InMemoryRepository();
    var apiService = Substitute.For<IApiService>();
    var syncService = new SyncService(connectivity, repository, apiService);

    connectivity.SimulateGoOffline();
    await syncService.SaveAndSyncAsync(new TaskItem { Title = "Buy groceries" });

    var pendingOps = await repository.GetPendingSyncOperationsAsync();
    pendingOps.Should().HaveCount(1);
    await apiService.DidNotReceive().CreateTaskAsync(Arg.Any<TaskItem>());
}
Database Integration Tests with In-Memory SQLite
For repository tests, use an in-memory SQLite database that gives you a real SQL engine without file system dependencies. Each test gets a fresh database, keeping tests isolated and fast:
[Fact]
public async Task TaskRepository_RoundTrips_AllProperties()
{
    using var connection = new SqliteConnection("DataSource=:memory:");
    await connection.OpenAsync();
    var options = new DbContextOptionsBuilder<AppDbContext>()
        .UseSqlite(connection)
        .Options;
    using var context = new AppDbContext(options);
    await context.Database.EnsureCreatedAsync();
    var repository = new TaskRepository(context);

    var task = new TaskItem
    {
        Title = "Write integration tests",
        Description = "Cover repository layer",
        IsCompleted = false,
        CreatedAt = DateTime.UtcNow
    };

    await repository.AddAsync(task);
    var retrieved = await repository.GetByIdAsync(task.Id);

    retrieved.Should().NotBeNull();
    retrieved!.Title.Should().Be("Write integration tests");
    retrieved.Description.Should().Be("Cover repository layer");
}
Snapshot and Visual Regression Testing
Visual regression testing catches a category of bugs that no other testing approach can: the layout that looks correct to the code but wrong to the eye. A button that shifted 20 pixels to the left, a font size that changed after a dependency update, a dark mode theme that breaks on one specific page — these are real issues that pass every unit and integration test you could write.
Approaches to Visual Testing in .NET MAUI
The .NET MAUI ecosystem offers a few paths for snapshot testing. The framework itself provides VerifyScreenshot() functionality in device test projects, which captures a screenshot during test execution and compares it against a reference image. If the pixels differ beyond a configurable threshold, the test fails.
// In a device test project
[Fact]
public async Task LoginPage_MatchesSnapshot()
{
// Navigate to the login page
var page = new LoginPage(new LoginViewModel(
Substitute.For<IAuthenticationService>(),
Substitute.For<INavigationService>(),
Substitute.For<IConnectivityService>()));
// Capture and compare against baseline
await page.VerifyScreenshot();
}
You can also use Appium to take screenshots during UI tests and compare them with tools like ImageSharp or dedicated visual testing services. Some teams use Verify (the .NET snapshot testing library by Simon Cropp) to store and compare serialized representations of their view models or page states, which isn't pixel-perfect visual testing but catches many structural regressions.
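To make the Verify approach concrete, here is a minimal sketch. It assumes the VerifyXunit package plus the NSubstitute fakes used throughout this article; the test class name and serialization behavior shown are illustrative, not prescriptive.

```csharp
// Snapshot testing with Verify: serialize the view model's state and diff
// it against a committed .verified.txt baseline on every subsequent run.
using NSubstitute;
using VerifyXunit;
using Xunit;

public class LoginViewModelSnapshotTests
{
    [Fact]
    public Task LoginViewModel_InitialState_MatchesSnapshot()
    {
        var viewModel = new LoginViewModel(
            Substitute.For<IAuthenticationService>(),
            Substitute.For<INavigationService>(),
            Substitute.For<IConnectivityService>());

        // First run produces a .received.txt file; approving it creates the
        // .verified.txt baseline. Later runs fail if the serialized state drifts.
        return Verifier.Verify(viewModel);
    }
}
```

Because the baseline is a readable text file checked into source control, an intentional UI-state change shows up as a reviewable diff in the pull request rather than a mysterious pixel delta.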
Is Visual Testing Worth the Setup Cost?
Let me be straight with you: visual regression testing has a high initial setup cost and an ongoing maintenance burden. Every time you intentionally change the UI, you need to update the baseline images. Running these tests requires a consistent rendering environment — different emulator skins or OS versions will produce different screenshots that fail the comparison even though the app looks perfectly fine.
My recommendation: if your app has a strong design system with strict visual requirements (think banking apps, healthcare apps, brand-heavy consumer apps), visual regression testing pays for itself quickly. If your app's UI is still evolving rapidly and pixel-perfection isn't a primary concern, save the setup cost and invest that time in more unit and integration tests instead. You can always add visual testing later when the UI stabilizes.
Building a CI/CD Testing Pipeline
Tests that don't run automatically are tests that eventually stop running at all. I've seen it happen on every team that relied on "we'll run them before merging" — within a few months, nobody runs them. A proper CI/CD pipeline ensures every commit is validated, every pull request is tested, and regressions are caught before they reach users.
So, let's set this up for a .NET MAUI project.
Running Unit Tests in CI
Unit tests and integration tests that don't require a device are straightforward — they run on any CI agent with the .NET SDK installed. Here's a GitHub Actions workflow that runs your tests and collects code coverage:
name: Build and Test
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup .NET
uses: actions/setup-dotnet@v4
with:
dotnet-version: '9.0.x'
- name: Install MAUI Workload
run: dotnet workload install maui
- name: Restore dependencies
run: dotnet restore
- name: Build
run: dotnet build --no-restore --configuration Release
- name: Run Unit Tests with Coverage
run: |
dotnet test tests/MyApp.UnitTests/MyApp.UnitTests.csproj \
--no-build \
--configuration Release \
--logger "trx;LogFileName=test-results.trx" \
--collect:"XPlat Code Coverage" \
--results-directory ./TestResults
- name: Run Integration Tests
run: |
dotnet test tests/MyApp.IntegrationTests/MyApp.IntegrationTests.csproj \
--no-build \
--configuration Release \
--logger "trx;LogFileName=integration-results.trx" \
--results-directory ./TestResults
- name: Publish Test Results
uses: dorny/test-reporter@v1
if: always()
with:
name: Test Results
path: 'TestResults/**/*.trx'
reporter: dotnet-trx
- name: Generate Coverage Report
run: |
dotnet tool install --global dotnet-reportgenerator-globaltool
reportgenerator \
-reports:"TestResults/**/coverage.cobertura.xml" \
-targetdir:"CoverageReport" \
-reporttypes:"Html;Cobertura"
- name: Upload Coverage Report
uses: actions/upload-artifact@v4
with:
name: coverage-report
path: CoverageReport/
Running Device and UI Tests in CI
Device tests and Appium tests require emulators or real devices. For Android, you can spin up an emulator in your CI pipeline using reactivecircus/android-emulator-runner. For iOS, you'll need a macOS runner. For teams that need broad device coverage, cloud device farms like Azure DevOps device testing, AWS Device Farm, or BrowserStack provide managed device infrastructure.
The practical reality: most teams run unit and integration tests on every commit, and schedule device or UI tests to run nightly or on release branches. The build time and infrastructure cost of running Appium tests on every pull request usually aren't justified unless your team is large enough to absorb the overhead.
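As a sketch of what the nightly device-test job could look like, here is a fragment you might add under the jobs: section of the workflow above. The project path, API level, and the script line that launches the device-test project are placeholders for your own setup, and emulator performance on hosted runners varies.

```yaml
# Illustrative nightly Android device-test job (paths and API level are
# placeholders); runs only on the scheduled trigger, not on every PR.
device-tests:
  runs-on: ubuntu-latest
  if: github.event_name == 'schedule'
  steps:
    - uses: actions/checkout@v4
    - name: Setup .NET
      uses: actions/setup-dotnet@v4
      with:
        dotnet-version: '9.0.x'
    - name: Install Android Workload
      run: dotnet workload install maui-android
    - name: Run device tests on emulator
      uses: reactivecircus/android-emulator-runner@v2
      with:
        api-level: 34
        arch: x86_64
        # Replace with however your device-test project is launched
        script: dotnet build tests/MyApp.DeviceTests -t:Run -f net9.0-android
```

The same split applies on the iOS side, except the job needs a macos runner and a simulator instead of the emulator action.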
Testing Best Practices Checklist
After years of building and testing .NET MAUI applications, these are the practices I keep coming back to. Not because they're theoretically correct, but because they consistently produce test suites that are maintainable, trustworthy, and actually useful.
The Test Pyramid for .NET MAUI
Follow the classic test pyramid, adapted for mobile:
- Base layer (many): Unit tests for ViewModels, services, business logic, and utilities. These should be the bulk of your tests — fast, isolated, and deterministic.
- Middle layer (moderate): Integration tests for HTTP services, repository/database operations, DI verification, and navigation flows.
- Top layer (few): UI tests with Appium for critical user journeys — login, main workflows, and purchase flows. Keep these focused on happy paths and the most important error scenarios.
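The "DI verification" item in the middle layer deserves a quick illustration: a test that builds your app's service collection and resolves each registered ViewModel, so a missing registration fails in CI instead of crashing at runtime. This sketch assumes a MauiProgram.CreateMauiAppBuilder() method refactored out of CreateMauiApp() so tests can reach the builder; the ViewModel types are the ones used throughout this article.

```csharp
// DI smoke test: resolve every registered ViewModel from the real container
// so a misregistered dependency fails the build, not the app launch.
using System;
using Microsoft.Extensions.DependencyInjection;
using Xunit;

public class DependencyInjectionTests
{
    [Theory]
    [InlineData(typeof(LoginViewModel))]
    [InlineData(typeof(TaskListViewModel))]
    public void ServiceProvider_Resolves_RegisteredViewModels(Type viewModelType)
    {
        // Assumed test-friendly hook that returns the configured MauiAppBuilder
        var services = MauiProgram.CreateMauiAppBuilder().Services;
        using var provider = services.BuildServiceProvider(
            new ServiceProviderOptions { ValidateOnBuild = true });

        // Throws if any constructor dependency is missing from the container
        var instance = provider.GetRequiredService(viewModelType);
        Assert.NotNull(instance);
    }
}
```

ValidateOnBuild is a nice bonus here: it checks every registration's dependency graph up front, not just the types listed in the [InlineData] rows.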
Naming and Structure
Name your tests so that a failure message tells you exactly what broke without having to read the test code. The pattern MethodName_ExpectedBehavior_WhenCondition (like LoginCommand_SetsErrorMessage_WhenOffline) has served me well. Group test classes to mirror the class they're testing: LoginViewModel gets a LoginViewModelTests class in a matching namespace.
Keep Tests Focused
- One logical assertion per test. A test named LoginCommand_NavigatesToMain_OnSuccess should verify navigation happened — not also check that IsBusy was reset and the error message was cleared. Those are separate behaviors and deserve separate tests.
- Arrange-Act-Assert consistently. Set up your preconditions, perform the action, verify the outcome. Keep these three phases visually distinct in your test code.
- Tests must be deterministic. No DateTime.Now, no Random, no dependency on test execution order. Inject clocks and random generators through interfaces if your code needs them.
- Keep tests fast. Your unit test suite should complete in seconds, not minutes. If a test needs a 5-second timeout, it's probably an integration test in disguise.
When Not to Test
This is the part that often gets left out of testing guides, but it matters: not everything needs a test. Don't write tests for framework code — you don't need to verify that ObservableCollection fires CollectionChanged events or that data binding works. Don't write tests for trivial property getters and setters with no logic. Don't write tests that just mirror the implementation (you know the one — where you set up a mock to return 42 and then assert the result is 42, congratulations, you tested the mock).
Focus your testing energy where bugs actually live: in the conditional logic, the async coordination, the error handling paths, and the state transitions that are easy to get wrong. Test the code that makes you nervous when you change it. If a piece of code is so simple that a bug in it would be immediately obvious, your time is better spent writing tests for the complex stuff.
Testing .NET MAUI apps isn't easy, but it's absolutely achievable with the right strategy and structure. Start with a testable architecture, build up a solid unit test suite, add integration tests for your data and network layer, and use UI automation sparingly for your critical user journeys. The investment pays for itself the first time your tests catch a regression that would have shipped to your users.