Using Machine Learning and Azure Custom Vision for image classification in your apps

Last fall, my family and I, together with a friend's family, took a trip to the forest to pick mushrooms. This blog post is not about the trip itself; because my friend wasn't good at recognizing mushroom species, I got the idea to write an app for that. So this post is about how I wrote that app using Azure and Xamarin.

Train a model

The first thing we need to do to detect mushrooms in photos is to train a model that we can use to make predictions. Azure Cognitive Services includes a service called "Custom Vision". We can do two things with Custom Vision: image classification, where we upload images and tag them, and object detection, where we tag specific areas in the images so the trained model can locate objects. The object detection part is in preview at the time of writing. So what I did was upload photos of different mushrooms and tag them. This blog post focuses on how to consume a trained model, so I recommend reading the official documentation about the Custom Vision service. We will also focus on the image classification part, since that is enough for a mushroom recognition app and trained object detection models cannot be exported right now. And in an app like this, it is really nice to be able to do classification without an internet connection.

Run prediction on a device using an exported model

When we are doing image classification with Custom Vision, we can export models for running predictions locally on the user's device. That makes it possible to do predictions without any connection to the service in Azure.

Since I need to do the prediction in the platform projects, I created an interface so that I can use the classifier from shared code.

public interface IImageClassifier
{
    event EventHandler<ClassificationEventArgs> ClassificationCompleted;
    Task Classify(byte[] bytes);
}

public class ClassificationEventArgs : EventArgs
{
    public Dictionary<string, float> Classifications { get; private set; }

    public ClassificationEventArgs(Dictionary<string, float> classifications)
    {
        Classifications = classifications;
    }
}

We will get the result from the classifier as a dictionary of tags and how confident the classifier is about each tag. In this case, we treat a mushroom as identified if the top classification has a confidence higher than 90 percent.

public void DoClassification(byte[] bytes)
{
    classifier.ClassificationCompleted += ClassificationCompleted;
    classifier.Classify(bytes);
}

private void ClassificationCompleted(object sender, ClassificationEventArgs e)
{
    var top = e.Classifications.OrderByDescending(x => x.Value).First();

    if (top.Value > 0.9)
    {
        //Show what mushroom was in the photo
    }
    else
    {
        //Handle that the mushroom could not be identified
    }
}

Using CoreML on iOS

CoreML is built into iOS starting with iOS 11.

Import the model

Once we have exported our model, we need to import it into the iOS project; the model should be placed in the Resources folder. When the model has been added to the Resources folder, the next step is to use it in code. Before we can use the model, we need to compile it. If we want, we can pre-compile it with Xcode, but the compilation is really fast, so in this case it is not necessary. After we have compiled the model we can use the compiled model URL to store it in a reusable place, so we don't have to compile the model the next time we want to use it. The code example below does not cover storing the compiled model.

var assetPath = NSBundle.MainBundle.GetUrlForResource("mushroom", "mlmodel");
var compiledUrl = MLModel.CompileModel(assetPath, out var error);
var model = MLModel.Create(compiledUrl, out error);
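As mentioned, the example above does not store the compiled model. A minimal sketch of such caching could look like this, where the compiled model is moved into the app's Library folder; the cache location and file name are my own choices, not from the original post:

```csharp
var libraryPath = Path.Combine(
    Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments), "..", "Library");
var cachedModelPath = Path.Combine(libraryPath, "mushroom.mlmodelc");

MLModel model;
NSError error;

if (Directory.Exists(cachedModelPath))
{
    // Reuse the model we compiled on a previous run.
    model = MLModel.Create(NSUrl.FromFilename(cachedModelPath), out error);
}
else
{
    var assetPath = NSBundle.MainBundle.GetUrlForResource("mushroom", "mlmodel");

    // CompileModel writes the compiled model (a .mlmodelc directory) to a temporary
    // location, so we move it to a permanent location before loading it.
    var compiledUrl = MLModel.CompileModel(assetPath, out error);
    Directory.Move(compiledUrl.Path, cachedModelPath);

    model = MLModel.Create(NSUrl.FromFilename(cachedModelPath), out error);
}
```
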

Make classifications

Now we have a model that we can start to use for classifications.

Before we run the prediction, we will create a method that handles the result of the classification. When we get the result, we put it in a dictionary with the tag and how confident the classifier is for that tag.

void HandleVNRequest(VNRequest request, NSError error)
{
    if (error != null)
        return;

    var predictions = request.GetResults<VNClassificationObservation>()
                             .OrderByDescending(x => x.Confidence)
                             .ToDictionary(x => x.Identifier, x => x.Confidence);

    ClassificationCompleted?.Invoke(this, new ClassificationEventArgs(predictions));
}

Now that we have a handler, we can create the request and perform the classification:

var vnModel = VNCoreMLModel.FromMLModel(model, out var error);
var classificationRequest = new VNCoreMLRequest(vnModel, HandleVNRequest);

var data = NSData.FromArray(bytes);
var handler = new VNImageRequestHandler(data, CGImagePropertyOrientation.Up, new VNImageOptions());

handler.Perform(new[] { classificationRequest }, out var performError);

Using TensorFlow on Android

Android has an API for machine learning from version 8.1 and above, but since many Android devices run a lower Android version I have chosen to use TensorFlow. TensorFlow is an open source framework for machine learning created by Google. Larry O'Brien has created bindings so TensorFlow can be used with Xamarin.Android, and we will use his library in this example to classify images. He has also written a blog post about this on the Xamarin blog.

Import the model

When we export from Custom Vision to TensorFlow, we get a zip file that contains a model file (model.pb) and a file with the labels (labels.txt). We need the labels to know what we tagged the images with, since they are not included in the model like they are in the CoreML model.

The model and label files should be placed in the Assets folder of your Android project.

When we have added the model and label files to the Assets folder, we can start writing code. The first thing we need to do is create a TensorFlowInferenceInterface from the model, and a list of strings for the labels.

var assets = Application.Context.Assets;
var inferenceInterface = new TensorFlowInferenceInterface(assets, "model.pb");

var sr = new StreamReader(assets.Open("labels.txt"));
var labelsString = sr.ReadToEnd();

var labels = labelsString.Split('\n')
                         .Select(s => s.Trim())
                         .Where(s => !string.IsNullOrEmpty(s))
                         .ToList();

Make classifications

Images need to be 227x227 pixels, which means the first thing we have to do is resize the image.

var bitmap = BitmapFactory.DecodeByteArray(bytes, 0, bytes.Length);
var resizedBitmap = Bitmap.CreateScaledBitmap(bitmap, 227, 227, false)
                          .Copy(Bitmap.Config.Argb8888, false);

TensorFlow models exported from Custom Vision cannot handle images directly, so the image needs to be converted to binary data: a float array with one value per red, green, and blue channel for each pixel. Some adjustments to the color values are also necessary.

var floatValues = new float[227 * 227 * 3];
var intValues = new int[227 * 227];

resizedBitmap.GetPixels(intValues, 0, 227, 0, 0, 227, 227);

for (int i = 0; i < intValues.Length; ++i)
{
    var val = intValues[i];
    floatValues[i * 3 + 0] = (val & 0xFF) - 104;
    floatValues[i * 3 + 1] = ((val >> 8) & 0xFF) - 117;
    floatValues[i * 3 + 2] = ((val >> 16) & 0xFF) - 123;
}

The last step is to run the classification. To get the result, we create a float array and pass it to the Fetch method; finally we map the output values to the labels.

var outputs = new float[labels.Count];

inferenceInterface.Feed("Placeholder", floatValues, 1, 227, 227, 3);
inferenceInterface.Run(new[] { "loss" });
inferenceInterface.Fetch("loss", outputs);

var result = new Dictionary<string, float>();

for (var i = 0; i < labels.Count; i++)
{
    result.Add(labels[i], outputs[i]);
}

ClassificationCompleted?.Invoke(this, new ClassificationEventArgs(result));

VSTS and Android NDK

If you want to use AOT compilation (AOT = Ahead of Time) for your Xamarin.Android app, the Android NDK (Native Development Kit) has to be installed on the machine that builds your app. I use VSTS to build my apps, with the "Hosted VS2017" build agent. After I enabled AOT and LLVM (Low Level Virtual Machine) to get better performance, I got an error saying that the Android NDK was missing on the machine.

But after some research I found out that the Android NDK actually was installed on the machine; I just had to point out the path to it. You can do this by creating a build variable named "AndroidNdkDirectory" with "C:\Microsoft\AndroidNDK64\android-ndk-r15c" as its value.
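If you define your build in YAML instead of in the web designer, the same variable could be declared like this sketch (I used the designer myself, so treat the exact syntax as an assumption):

```yaml
variables:
  AndroidNdkDirectory: 'C:\Microsoft\AndroidNDK64\android-ndk-r15c'
```
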

Store user data in a secure way

In many apps you want to store user data locally on the device; it could for example be passwords, credit card numbers, etc. Even if the storage is sandboxed to your app, you don't want to store the data in clear text, you want to store it encrypted.

I have used Xamarin.Auth in many apps because it has an AccountStore class that can be used to store user data encrypted. But it only supports iOS and Android, and since I needed UWP support in an app, I decided to create my own library. I also felt that I didn't want to install a big library when I just wanted one little piece of it, especially when that library's main focus is not storing user data encrypted.

So I decided to create TinyAccountManager, an open source project whose source can be found on GitHub. It works with iOS, Android and UWP, and I will probably add support for Mac apps as well.

The easiest way to install it into your projects is via NuGet:

Install-Package TinyAccountManager

The first thing you need to do is to initialize the AccountManager per platform.


The only property that is required is ServiceId.

var account = new Account()
{
    ServiceId = "TinyAccountManagerSample",
    Username = "dhindrik"
};

account.Properties.Add("Password", "MySecretPassword");

await AccountManager.Current.Save(account);

Get and Exists

It's recommended that you use Exists before Get; if you use Get and there is no matching account, it will throw an exception.

Account account = null;

var exists = await AccountManager.Current.Exists("TinyAccountManagerSample");

if (exists)
    account = await AccountManager.Current.Get("TinyAccountManagerSample");

To remove an account, use the Remove method:

await AccountManager.Current.Remove("TinyAccountManagerSample");


If you want to use IoC instead of the singleton pattern, you just register the implementation for each platform with the IAccountManager interface. If you choose this way, you don't have to run Initialize on each platform.

iOS: iOSAccountManager

Android: AndroidAccountManager

UWP: UWPAccountManager
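With Autofac, for example (the container choice is mine; any IoC container works the same way), the iOS registration could look like this sketch:

```csharp
var builder = new ContainerBuilder();

// Register the platform implementation; no Initialize call is needed with this approach.
builder.RegisterType<iOSAccountManager>().As<IAccountManager>().SingleInstance();

var container = builder.Build();
```
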

You can find the complete documentation on GitHub; there is also a sample project.

Add custom tiles to map in Xamarin Forms

If we want to use other maps than the platform defaults in our apps, we need to provide tiles to the map view. To do that we need to create a custom renderer per platform.

iOS

On iOS we need to create a URL template that contains {x}, {y} and {z}. These will be replaced with values from the map engine.

protected override void OnElementChanged(ElementChangedEventArgs<View> e)
{
    if (e.NewElement != null)
    {
        var map = (MKMapView)Control;

        var urlTemplate = "https://urltomaptiles/{x}/{y}/{z}";
        var tileOverlay = new MKTileOverlay(urlTemplate);

        map.AddOverlay(tileOverlay);
        map.OverlayRenderer = RenderOverlay;
    }
}

private MKOverlayRenderer RenderOverlay(MKMapView mapView, IMKOverlay overlay)
{
    var tileOverlay = overlay as MKTileOverlay;

    if (tileOverlay != null)
        return new MKTileOverlayRenderer(tileOverlay);

    return new MKOverlayRenderer(overlay);
}

If we are getting tiles from a service that does not support the URL format with x, y and z values, we can customize the URL. To do that we need to subclass MKTileOverlay and override the URLForTilePath method. In that method we write the code that creates the URL. I recommend creating a helper class for that, so we can reuse it on the other platforms.

public class CustomTileOverlay : MKTileOverlay
{
    public override void LoadTileAtPath(MKTileOverlayPath path, MKTileOverlayLoadTileCompletionHandler result)
    {
        base.LoadTileAtPath(path, result);
    }

    public override NSUrl URLForTilePath(MKTileOverlayPath path)
    {
        //Here we write the code for creating the url.
        var url = MapHelper.CreateTileUrl((int)path.X, (int)path.Y, (int)path.Z);
        return new NSUrl(url);
    }
}

Instead of creating an MKTileOverlay, we create a CustomTileOverlay and add it to the map.

map.AddOverlay(new CustomTileOverlay());
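The MapHelper class referenced here is not shown in the post; a minimal sketch could look like the class below. The URL format is a made-up example, so replace it with whatever your tile service expects:

```csharp
public static class MapHelper
{
    // Hypothetical tile service that takes x, y and zoom as query parameters.
    public static string CreateTileUrl(int x, int y, int zoom)
    {
        return $"https://tileservice.example.com/tile?x={x}&y={y}&z={zoom}";
    }
}
```
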

Android

Besides subclassing MapRenderer, we also need to implement the IOnMapReadyCallback interface. The OnMapReady method is called when the GoogleMap object is ready, so that is where we can work with it. But first we need to request the GoogleMap object in the override of the OnElementChanged method.

public class ExtendedMapRenderer : MapRenderer, IOnMapReadyCallback
{
    protected override void OnElementChanged(ElementChangedEventArgs<Xamarin.Forms.View> e)
    {
        base.OnElementChanged(e);

        if (e.NewElement != null)
        {
            //Request the GoogleMap object; OnMapReady is called when it is ready.
            Control.GetMapAsync(this);
        }
    }

    public void OnMapReady(GoogleMap googleMap)
    {
        var options = new TileOverlayOptions();
        options.InvokeTileProvider(new CustomTileProvider());

        googleMap.AddTileOverlay(options);
    }
}

On Android we always need to create our own tile provider.

public class CustomTileProvider : UrlTileProvider
{
    public CustomTileProvider() : base(256, 256) { }

    public override URL GetTileUrl(int x, int y, int zoom)
    {
        //Here we write the code for creating the url.
        var url = MapHelper.CreateTileUrl(x, y, zoom);
        return new URL(url);
    }
}

UWP (Windows 10)
As on iOS, UWP uses a URL that contains {x}, {y} and {z}, which will be replaced by the map engine.

protected override void OnElementChanged(ElementChangedEventArgs<Map> e)
{
    base.OnElementChanged(e);

    if (e.NewElement != null)
    {
        var map = Control as MapControl;

        var dataSource = new HttpMapTileDataSource("https://urltomaptiles/{x}/{y}/{z}");
        var tileSource = new MapTileSource(dataSource);

        map.TileSources.Add(tileSource);
    }
}

If we want to modify the URL, we use the UriRequested event on HttpMapTileDataSource.

HttpMapTileDataSource dataSource = new HttpMapTileDataSource();
dataSource.UriRequested += DataSource_UriRequested;

The code for modifying the url is placed in the event handler.

private void DataSource_UriRequested(HttpMapTileDataSource sender, MapTileUriRequestedEventArgs args)
{
    var deferral = args.Request.GetDeferral();

    //Here we write the code for creating the url.
    var url = MapHelper.CreateTileUrl(args.X, args.Y, args.ZoomLevel);
    args.Request.Uri = new Uri(url);

    deferral.Complete();
}

How to solve a Xamarin.Forms Android build error after updating to Forms 2.0+

Exception while loading assemblies: System.IO.FileNotFoundException: Could not load assembly 'Microsoft.Windows.Design.Extensibility, Version=, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'. Perhaps it doesn't exist in the Mono for Android profile?

If you have updated the NuGet packages for Xamarin.Forms to 2.0+ and the Android project stops building, there is a simple solution: just delete the references that end with .Design. In my project they were Xamarin.Forms.Core.Design, Xamarin.Forms.Xaml.Design and Xamarin.Android.Support.Design. I have read about a solution that adds the assemblies Microsoft.Windows.Design.Extensibility and System.Xaml. I don't know if it works; I think deleting the .Design references is a better solution, so I never tried adding those assemblies.

Xamarin.Forms and large images in android apps

Are you getting an out of memory exception when running your Android app? A common reason is that a large image is loaded. If that is the case, take a look at this article on the Xamarin developer portal.

But how do you implement it when you're building your apps with Xamarin.Forms? In this post I will show one way to do it, with a custom view and a custom renderer.

First we create a new view that inherits from the standard Image view; I will name it LargeImage. Since we don't want the default behavior, we need to create our own source property of type string, which I name ImageSource. Because we only want to change the behavior for Android, and not for Windows and iOS, we set the base Source property in the property changed handler of the ImageSource property when the code is not running on Android.

public class LargeImage : Image
{
    public static readonly BindableProperty ImageSourceProperty =
        BindableProperty.Create("ImageSource", typeof(string), typeof(LargeImage), default(string), propertyChanged: (bindable, oldValue, newValue) =>
        {
            if (Device.OS != TargetPlatform.Android)
            {
                var image = (LargeImage)bindable;
                var baseImage = (Image)bindable;
                baseImage.Source = image.ImageSource;
            }
        });

    public string ImageSource
    {
        get { return GetValue(ImageSourceProperty) as string; }
        set { SetValue(ImageSourceProperty, value); }
    }
}

The next step is to create a renderer for our new view. Since we want the default Image behavior except for the handling of the source, the renderer will inherit from ImageRenderer.

This renderer will only work for images in the Drawable folder, so if you have other types of image sources you need to modify the code.

In the renderer we need to handle the ImageSource property; we do that in the OnElementPropertyChanged method. Because we don't want to run the code before the image has a width and a height, we add an if statement that checks that width and height are greater than zero. We only want that branch to run once, so I have added a flag named _isDecoded. If ImageSource changes, the code will also run, because e.PropertyName will be "ImageSource".

[assembly: ExportRenderer(typeof(LargeImage), typeof(LargeImageRenderer))]
namespace SampleApp.Droid.Renderers
{
    public class LargeImageRenderer : ImageRenderer
    {
        private bool _isDecoded;

        protected override void OnElementPropertyChanged(object sender, PropertyChangedEventArgs e)
        {
            base.OnElementPropertyChanged(sender, e);

            var largeImage = (LargeImage)Element;

            if ((Element.Width > 0 && Element.Height > 0 && !_isDecoded) || (e.PropertyName == "ImageSource" && largeImage.ImageSource != null))
            {
                var options = new BitmapFactory.Options();
                options.InJustDecodeBounds = true;

                //Get the resource id for the image
                var field = typeof(Resource.Drawable).GetField(largeImage.ImageSource.Split('.').First());
                var value = (int)field.GetRawConstantValue();

                BitmapFactory.DecodeResource(Context.Resources, value, options);

                //The width and height of the element (LargeImage) will be used to decode the image
                var width = (int)Element.Width;
                var height = (int)Element.Height;

                options.InSampleSize = CalculateInSampleSize(options, width, height);
                options.InJustDecodeBounds = false;

                var bitmap = BitmapFactory.DecodeResource(Context.Resources, value, options);

                //Set the bitmap to the native control
                Control.SetImageBitmap(bitmap);

                _isDecoded = true;
            }
        }

        public int CalculateInSampleSize(BitmapFactory.Options options, int reqWidth, int reqHeight)
        {
            // Raw height and width of image
            float height = options.OutHeight;
            float width = options.OutWidth;

            double inSampleSize = 1D;

            if (height > reqHeight || width > reqWidth)
            {
                int halfHeight = (int)(height / 2);
                int halfWidth = (int)(width / 2);

                // Calculate an inSampleSize that is a power of 2 - the decoder will use a power of two anyway.
                while ((halfHeight / inSampleSize) > reqHeight && (halfWidth / inSampleSize) > reqWidth)
                {
                    inSampleSize *= 2;
                }
            }

            return (int)inSampleSize;
        }
    }
}

MvvmLight Navigation Extension

When navigating on iOS we can choose to do a modal navigation. That means we open a page on top of the other pages, one that is not included in the navigation stack. When using MvvmLight, we only have one method (NavigateTo) for opening a new page.

Since I want to use MvvmLight and open "modals", I have created an MvvmLight extension for iOS (storyboards only in this pre-version) and Android. If you're interested in the source, it is on GitHub.

Since this is a pre-release, feedback is very welcome!

Using the extension from shared code
To use it in your ViewModels you need to add the namespace to the class.

using MvvmLightNavigationExtension;
var navigation = ServiceLocator.Current.GetInstance<INavigationServiceExtension>();

We configure the NavigationServiceExtension in the same way as the NavigationService from MvvmLight, but we create a NavigationServiceExtension instead of a NavigationService, and our instance should be registered for both INavigationService and INavigationServiceExtension.


iOS:

 var nav = new MvvmLightNavigationExtension.iOS.NavigationServiceExtension();
 nav.Configure("Page1", "MainView");
 nav.Configure("Page2", "PageView");


Android:

 var nav = new MvvmLightNavigationExtension.Droid.NavigationServiceExtension();
 nav.Configure("Page1", "MainView");
 nav.Configure("Page2", "PageView");

MvvmLight and Xamarin.Android

Last week I wrote a blog post about Xamarin.iOS and MvvmLight. Now it's time for the second post about MvvmLight, this time about how to use it with Xamarin.Android.

Because I put the ViewModels in a separate project I can use the same ViewModels for both Android and iOS.

First we install the NuGet package for MvvmLight into the Android project.

Install-Package MvvmLightLibs

The ViewModel that we will use is the same as in the iOS app, and it looks like this:

public class MainViewModel : ViewModelBase
{
    private string _name;

    public string Name
    {
        get { return _name; }
        set { _name = value; }
    }

    public RelayCommand Send
    {
        get
        {
            return new RelayCommand(() =>
            {
                var nav = ServiceLocator.Current.GetInstance<INavigationService>();
                nav.NavigateTo(Views.Hello.ToString(), Name);
            });
        }
    }
}


MvvmLight has an INavigationService interface that is used for navigation, and each platform has its own implementation. For Android, we do the configuration in MainActivity. It is important to check whether navigation has already been initialized; the code should only run once.

if (!_isInitialized)
{
    var nav = new NavigationService();
    nav.Configure(Core.Views.Main.ToString(), typeof(MainActivity));
    nav.Configure(Core.Views.Hello.ToString(), typeof(HelloActivity));

    _isInitialized = true;
}

In my example I use Autofac for IoC; we could also use the IoC container that ships with the MvvmLight package. When we have created the NavigationService, we have to register it in the IoC container as an INavigationService.

var builder = new ContainerBuilder();
builder.RegisterInstance(nav).As<INavigationService>();

var container = builder.Build();
var serviceLocator = new AutofacServiceLocator(container);
ServiceLocator.SetLocatorProvider(() => serviceLocator);

To navigate we will resolve the INavigationService interface and use the NavigateTo method.

var nav = ServiceLocator.Current.GetInstance<INavigationService>();
nav.NavigateTo(Views.Hello.ToString(), "Navigation parameter");

To retrieve the parameter, we use the GetAndRemoveParameter method in the NavigationService class. Note that this is an Android-specific method, so we have to cast the INavigationService to NavigationService.

var nav = (NavigationService)ServiceLocator.Current.GetInstance<INavigationService>();
var param = nav.GetAndRemoveParameter<string>(Intent);
ViewModel = ServiceLocator.Current.GetInstance<HelloViewModel>();
ViewModel.Name = param;


When using MVVM we want to use data bindings. On Android we have to create the bindings in code, and MvvmLight will help us with that. In the Activity class, we have to add a using for the MvvmLight helpers.

using GalaSoft.MvvmLight.Helpers;

The activity also has to inherit from ActivityBase (GalaSoft.MvvmLight.Views.ActivityBase).

public class HelloActivity : ActivityBase

The MvvmLight helpers namespace contains the extension methods SetBinding and SetCommand.

The fields that we are creating bindings to need to be declared as public in the Activity.

public EditText Text { get; private set; }

protected override void OnCreate(Bundle bundle)
{
    var button = FindViewById<Button>(Resource.Id.send);
    Text = FindViewById<EditText>(Resource.Id.text); //The resource id is an assumption; the original code is truncated here.

    this.SetBinding(() => ViewModel.Name, () => Text.Text, BindingMode.TwoWay);
    button.SetCommand("Click", ViewModel.Send);
}

The SetCommand method's first argument is the name of the event that will execute the command.

I create the ViewModel in the OnCreate method using the ServiceLocator. I prefer to create it with the ServiceLocator directly instead of wrapping it in a ViewModelLocator, which is a common way to do it when using MvvmLight.

The complete code for this sample is on GitHub.

How to succeed with Xamarin.Forms

Xamarin.Forms makes it possible to write the UI for all the major mobile platforms, iOS, Android and Windows Phone, with one shared code base. Many developers think that Xamarin.Forms isn't good enough to create apps for publication, and see it more as a tool for prototyping.

If you think Xamarin.Forms is a "magic" product that will fix everything, you will probably not succeed with it. For me, Xamarin.Forms is a framework that helps me build apps for multiple platforms. If you look at Xamarin.Forms that way, you will increase your chances of succeeding with it.

Xamarin.Forms is delivered with a lot of controls that use the Xamarin.Forms framework to render native, platform-specific controls. A Xamarin.Forms control is in many ways just a description of what the control can do; from that description, native controls are rendered.

The power of Xamarin.Forms is that you can use the framework to create your own renderers if you want to render a control differently than the standard renderer does. You can also create your own controls by using custom renderers.

I guess one of the most common issues with Xamarin.Forms is ListView performance. It's not surprising: you may have 5 or 6 controls for each cell (row), and if cells are reused you might have 8 cells. That means Forms needs to create at least 40 renderer objects. And if any of the controls is a StackLayout or a RelativeLayout, whose renderers have to do a lot of calculations to decide where to place controls, you will realize that it uses a lot of memory.

If you instead create your own custom cell with its own renderer, there will be only one renderer per row; or if you write a renderer for the whole list, you will only have one renderer. Can you imagine how much memory you will save? If not, you will see it when you are scrolling in your ListView.

The biggest problem when using Xamarin.Forms is that developers don't know how Xamarin.Forms works, and they don't know much about the target platforms. If you want to create an excellent app with Xamarin.Forms, you still need knowledge about the target platforms.

Now you may want to ask why you should use Xamarin.Forms at all. The answer is that even if you have to write platform-specific code for some views, there is still much you can use of the controls delivered with Xamarin.Forms out of the box, and the powerful Xamarin.Forms framework makes it possible to write platform-specific code when what you get out of the box is not enough.

My recommendation is to do as much as possible with what you get out of the box with Forms, and not to worry about performance or whether it looks perfect from the beginning. When you have created your app and built all the business logic, then you can start to look at how to make the app perfect. Then you can start writing platform-specific code to get better performance and a better-looking app.