Full page camera in Xamarin.Forms
It has been a while since I coded CharpHat, an app that lets you snap a picture of anything and put a nice C# graduation cap on it. That app was far from perfect, but it helped me practice the use of custom page renderers.
Today I decided to revisit that project, this time trying to isolate the code needed to build the interface and functionality of the page, so that anyone looking to implement a full camera page in their app can reuse the code in their own projects. So be sure to grab the source code for this post.
Forms abstractions
Here is the source code for this section.
Let’s start by creating the Xamarin.Forms page that will serve as our point of interaction with the custom code:
Business as usual: create a class deriving from ContentPage. I have added an event handler since I want to access the picture taken by the user. Now let's throw in some methods to call whenever the user performs an action in our camera page (in this case, the user will be allowed to take a photo or cancel the action):
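A minimal sketch of what that page could look like (the member names SetPhotoResult and Cancel follow the pattern described here, but check the linked source for the exact code):

```csharp
using System;
using Xamarin.Forms;

public class CameraPage : ContentPage
{
    // Raised when the user either takes a photo or cancels.
    public event EventHandler<PhotoResultEventArgs> OnPhotoResult;

    // Called from the platform renderers once a picture has been taken.
    public void SetPhotoResult(byte[] image, int width = -1, int height = -1)
        => OnPhotoResult?.Invoke(this, new PhotoResultEventArgs(image, width, height));

    // Called from the platform renderers when the user cancels.
    public void Cancel()
        => OnPhotoResult?.Invoke(this, new PhotoResultEventArgs());
}
```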
For reference, see the properties inside the PhotoResultEventArgs class:
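A sketch of how such a class could look: a Success flag plus the image bytes and its dimensions (the exact shape may differ in the repository):

```csharp
using System;

public class PhotoResultEventArgs : EventArgs
{
    // A cancelled result carries no image.
    public PhotoResultEventArgs() => Success = false;

    // A successful result carries the photo bytes and dimensions.
    public PhotoResultEventArgs(byte[] image, int width, int height)
    {
        Success = true;
        Image = image;
        Width = width;
        Height = height;
    }

    public bool Success { get; }
    public byte[] Image { get; }
    public int Width { get; }
    public int Height { get; }
}
```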
Now, time to move on to the platform specifics.
In Xamarin.iOS
Here is the source code for this section.
To be honest, this implementation is the easiest by far. Start off by creating a class that inherits from PageRenderer, and add the ExportRenderer attribute:
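The skeleton might look like this (the namespace is a placeholder, and I'm assuming the Forms page is called CameraPage):

```csharp
using Xamarin.Forms;
using Xamarin.Forms.Platform.iOS;

[assembly: ExportRenderer(typeof(CameraPage), typeof(CameraPageRenderer))]
namespace YourApp.iOS
{
    public class CameraPageRenderer : PageRenderer
    {
        // UI construction, permissions and camera setup go here.
    }
}
```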
Now, and this is very important, you need to override the ViewDidLoad method, since it gets called as soon as our page is loaded by the iOS mechanisms. For the sake of organisation, let's split the code into several other methods:
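Roughly, the override ends up being little more than a dispatcher over the methods described below:

```csharp
public override async void ViewDidLoad()
{
    base.ViewDidLoad();

    SetupUserInterface();       // build the controls in code
    SetupEventHandlers();       // wire the buttons to the Forms page
    await AuthorizeCameraUse(); // ask for camera permission
    SetupLiveCameraStream();    // start the live preview
}
```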
SetupUserInterface
As the name states, here is where you need to build the UI. As you may have guessed, it is all done in code, but don't worry, it is very easy... as long as your UI isn't too complex; in any case, you can do whatever you need here.
For this sample the UI will consist of a couple of buttons and a surface where the live preview from the camera is going to be shown, so you need to declare them on a class-level scope:
To set the items in place you need to think as if you were working with a relative layout, meaning that you need to set the position of each item within the screen. For example, look at how the live camera preview view is positioned:
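As a sketch, the class-level declarations plus the positioning of the preview view could look like this (the button frame values are illustrative):

```csharp
using CoreGraphics;
using UIKit;

UIView liveCameraStream;
UIButton takePhotoButton;
UIButton cancelPhotoButton;

void SetupUserInterface()
{
    // The live preview fills the whole page.
    liveCameraStream = new UIView
    {
        Frame = new CGRect(0f, 0f, View.Bounds.Width, View.Bounds.Height)
    };
    View.Add(liveCameraStream);

    // Buttons are positioned relative to the screen edges.
    takePhotoButton = new UIButton
    {
        Frame = new CGRect(View.Bounds.GetMidX() - 35f, View.Bounds.Bottom - 85f, 70f, 70f)
    };
    View.Add(takePhotoButton);

    // cancelPhotoButton is created and added the same way.
}
```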
SetupEventHandlers
Now that the UI has been built, let's hook up the event handlers to each control. Luckily for this sample there are only two buttons on screen: one to take the picture and the other to cancel the whole thing.
The Element property contains a reference to the page associated with the renderer, and is our way to interact with our Forms project. As for the CapturePhoto method... we'll see it later.
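A sketch of the wiring, assuming the Forms-side members described at the beginning (Cancel and SetPhotoResult):

```csharp
void SetupEventHandlers()
{
    cancelPhotoButton.TouchUpInside += (sender, e) =>
    {
        (Element as CameraPage)?.Cancel();
    };

    takePhotoButton.TouchUpInside += async (sender, e) =>
    {
        await CapturePhoto();
    };
}
```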
AuthorizeCameraUse
Now it's time to ask the user for permission to access the camera:
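Something along these lines, using AVFoundation's authorization API:

```csharp
using System.Threading.Tasks;
using AVFoundation;

async Task AuthorizeCameraUse()
{
    var authorizationStatus = AVCaptureDevice.GetAuthorizationStatus(AVMediaType.Video);
    if (authorizationStatus != AVAuthorizationStatus.Authorized)
    {
        // Shows the system permission prompt the first time it runs.
        await AVCaptureDevice.RequestAccessForMediaTypeAsync(AVMediaType.Video);
    }
}
```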
But wait a minute: before executing the code above, make sure you have added the key Privacy - Camera Usage Description to the Info.plist in your project.
SetupLiveCameraStream
Now the “heavy” stuff.
Start by declaring, at class-level scope, an AVCaptureSession, an AVCaptureDeviceInput and an AVCaptureStillImageOutput, as they will help us access the camera, display the live feed and capture the photo.
Then, inside the SetupLiveCameraStream method, initialize the capture session, create a preview layer with the same size as our liveCameraStream view, and add it as a sublayer of it:
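A sketch of the declarations and the first part of the method:

```csharp
using AVFoundation;

AVCaptureSession captureSession;
AVCaptureDeviceInput captureDeviceInput;
AVCaptureStillImageOutput stillImageOutput;

void SetupLiveCameraStream()
{
    captureSession = new AVCaptureSession();

    // The preview layer mirrors the camera feed inside liveCameraStream.
    var videoPreviewLayer = new AVCaptureVideoPreviewLayer(captureSession)
    {
        Frame = liveCameraStream.Bounds
    };
    liveCameraStream.Layer.AddSublayer(videoPreviewLayer);

    // ... device, input and output are set up in the following steps
}
```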
Next, "create" a capture device (you can configure it to work according to your needs), and then from it create an input source for the capture session:
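For instance (ConfigureCameraForDevice is a hypothetical helper where you would tweak focus, exposure and the like):

```csharp
var captureDevice = AVCaptureDevice.DefaultDeviceWithMediaType(AVMediaType.Video);
ConfigureCameraForDevice(captureDevice); // hypothetical helper: focus, exposure, etc.
captureDeviceInput = AVCaptureDeviceInput.FromDevice(captureDevice);
```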
We have an input (the device's camera); now we need an output, which is going to be a JPEG photograph:
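A minimal version; here I leave the output settings at their defaults and do the JPEG conversion later, when the photo is captured:

```csharp
using Foundation;

stillImageOutput = new AVCaptureStillImageOutput
{
    // Default settings; the still image is converted to JPEG at capture time.
    OutputSettings = new NSDictionary()
};
```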
Finalize by setting the input and output of the capture session and starting it:
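Assuming the fields from the previous steps:

```csharp
captureSession.AddInput(captureDeviceInput);
captureSession.AddOutput(stillImageOutput);
captureSession.StartRunning(); // the live preview starts showing up
```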
CapturePhoto
At last, the icing on the cake: the code to capture the photo. The code is pretty simple: take the output and capture a still image from it; since we only need the bytes, we get an NSData instance containing the taken photo.
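A sketch of the method, with SetPhotoResult being the Forms-side callback described at the beginning:

```csharp
using System.Threading.Tasks;
using AVFoundation;
using Foundation;

async Task CapturePhoto()
{
    var videoConnection = stillImageOutput.ConnectionFromMediaType(AVMediaType.Video);
    var sampleBuffer = await stillImageOutput.CaptureStillImageTaskAsync(videoConnection);

    // Convert the still image to JPEG and hand the bytes back to the Forms page.
    NSData jpegData = AVCaptureStillImageOutput.JpegStillToNSData(sampleBuffer);
    (Element as CameraPage)?.SetPhotoResult(jpegData.ToArray());
}
```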
In Xamarin.Android
Here is the source code for this section.
This implementation isn't as clean as the iOS one, mainly because Android puts a lot of emphasis on the use of listeners rather than event handlers. However, that is not a problem for us.
As with the iOS implementation, start by creating a new class, make it derive from PageRenderer, and also make it implement the TextureView.ISurfaceTextureListener interface. Don't forget the ExportRenderer attribute:
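The skeleton could look like this (the namespace is a placeholder):

```csharp
using Android.Views;
using Xamarin.Forms;
using Xamarin.Forms.Platform.Android;

[assembly: ExportRenderer(typeof(CameraPage), typeof(CameraPageRenderer))]
namespace YourApp.Droid
{
    public class CameraPageRenderer : PageRenderer, TextureView.ISurfaceTextureListener
    {
        // UI, listeners and camera management go here.
    }
}
```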
Then, override the OnElementChanged method (if you have created custom renderers before, this method may be familiar to you); it is going to be called every time a CameraPage is shown on screen:
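Typically this override checks the event arguments and runs the setup only once, something like:

```csharp
protected override void OnElementChanged(ElementChangedEventArgs<Page> e)
{
    base.OnElementChanged(e);

    // Bail out if there is nothing to set up, or it was already done.
    if (e.OldElement != null || Element == null)
        return;

    SetupUserInterface();
    SetupEventHandlers();
}
```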
SetupUserInterface
In this method we are supposed to create the camera page itself. You can do it by creating an axml file and calling all the Android inflating stuff... or, like in this sample, you can create it in code.
For this sample, we'll need a RelativeLayout to work as a container, a TextureView to display the live feed from the camera, and a Button (a PaintCodeButton actually) to snap the photograph. Declare them all at class-level scope:
Now, proceed to create them and add them to the screen. For example, see how we can create the container layout and add the TextureView to it:
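A sketch of the declarations and the container setup (PaintCodeButton is the custom button from the sample's source):

```csharp
using Android.Widget;
using Android.Views;

RelativeLayout mainLayout;
TextureView liveView;
PaintCodeButton capturePhotoButton;

void SetupUserInterface()
{
    mainLayout = new RelativeLayout(Context);

    liveView = new TextureView(Context)
    {
        LayoutParameters = new RelativeLayout.LayoutParams(
            ViewGroup.LayoutParams.MatchParent,
            ViewGroup.LayoutParams.MatchParent)
    };
    mainLayout.AddView(liveView);

    // capturePhotoButton is created and positioned similarly, then:
    AddView(mainLayout);
}
```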
Before continuing, there is another method (OnLayout) we need to override to give our main layout its size (and accommodate the UI accordingly):
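A common way to do it is to measure and lay out the container with the size the renderer receives:

```csharp
protected override void OnLayout(bool changed, int l, int t, int r, int b)
{
    base.OnLayout(changed, l, t, r, b);

    // Give the container exactly the renderer's size.
    var msw = MeasureSpec.MakeMeasureSpec(r - l, MeasureSpecMode.Exactly);
    var msh = MeasureSpec.MakeMeasureSpec(b - t, MeasureSpecMode.Exactly);

    mainLayout.Measure(msw, msh);
    mainLayout.Layout(0, 0, r - l, b - t);
}
```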
SetupEventHandlers
As I said, Android relies mostly on event listeners rather than handlers, so the code for this method is pretty simple. We need to set an event handler for the "shutter" button and assign the listener that will be aware of the SurfaceTexture status (remember that our page renderer implements an interface?):
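Roughly:

```csharp
void SetupEventHandlers()
{
    capturePhotoButton.Click += async (sender, e) =>
    {
        var bytes = await TakePhoto();
        (Element as CameraPage)?.SetPhotoResult(bytes);
    };

    // The renderer itself reacts to the SurfaceTexture lifecycle.
    liveView.SurfaceTextureListener = this;
}
```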
And one more thing: let's override the default behavior of the "back" button, so that it acts as a cancel button for the camera:
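One way to do it from the renderer is to intercept the hardware key (this sketch assumes the renderer's view receives key events; alternatively, you can override OnBackButtonPressed in the Forms page itself):

```csharp
using Android.Views;

public override bool OnKeyDown(Keycode keyCode, KeyEvent e)
{
    if (keyCode == Keycode.Back)
    {
        (Element as CameraPage)?.Cancel();
        return true; // swallow the event; the Forms page handles the dismissal
    }
    return base.OnKeyDown(keyCode, e);
}
```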
TextureView.ISurfaceTextureListener implementation
Now it is time to implement the core of our page. Start by writing the code for the OnSurfaceTextureAvailable method, where we will prepare the output for the camera. But first we'll need a camera, right? At class-level scope, declare a Camera:
Inside the method, open the camera (by default it will try to open the device's back camera) and get its parameters. We need them to select the right preview size, because we want things to look great in our app:
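A sketch of the field and the beginning of the method, using the classic Android.Hardware.Camera API that this sample targets:

```csharp
using Android.Graphics;

Android.Hardware.Camera camera;

public void OnSurfaceTextureAvailable(SurfaceTexture surface, int width, int height)
{
    // Opens the first back-facing camera by default.
    camera = Android.Hardware.Camera.Open();
    var parameters = camera.GetParameters();

    // ... preview size selection comes next
}
```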
Once we have the parameters at hand, we can get the available preview sizes and choose the one that best fits our preview surface. In this case I'm using a simple LINQ expression to get the best preview size based on aspect ratio:
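Something like this, where width and height are the surface dimensions passed to OnSurfaceTextureAvailable (the landscape-to-portrait ratio flip is an assumption about the device orientation):

```csharp
using System;
using System.Linq;

// Pick the supported preview size whose aspect ratio is closest to the surface's.
var aspect = (decimal)height / width;
var previewSize = parameters.SupportedPreviewSizes
    .OrderBy(s => Math.Abs((decimal)s.Width / s.Height - aspect))
    .First();

parameters.SetPreviewSize(previewSize.Width, previewSize.Height);
camera.SetParameters(parameters);
```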
Finish by setting our surface as the preview texture; at this point the only thing left to do is to start the camera:
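That is, still inside the same method:

```csharp
camera.SetPreviewTexture(surface);
StartCamera();
```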
The other method we need to write code into is OnSurfaceTextureDestroyed, in order to stop the camera; just write the following inside and that will be all:
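Something like:

```csharp
public bool OnSurfaceTextureDestroyed(SurfaceTexture surface)
{
    StopCamera();
    return true; // true = we take care of releasing the surface
}
```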
StartCamera and StopCamera
These two methods are quite simple too. For StartCamera we only need to rotate the preview to make it look right on the screen (in this case I'm setting it to be viewed vertically), and then, finally, start the camera:
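For instance:

```csharp
void StartCamera()
{
    camera.SetDisplayOrientation(90); // portrait preview
    camera.StartPreview();
}
```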
The StopCamera method stops the preview and releases the camera so that other apps can access it:
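That is:

```csharp
void StopCamera()
{
    camera.StopPreview();
    camera.Release(); // let other apps use the camera again
}
```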
TakePhoto
In order to get a photo, the only thing we need to do is get an sitll image from the live feed presented in the TextureView
, here is the code to do so and then return the image in bytes:
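A sketch of the method (the JPEG quality value is just an example):

```csharp
using System.IO;
using System.Threading.Tasks;
using Android.Graphics;

async Task<byte[]> TakePhoto()
{
    camera.StopPreview();

    // Grab the frame currently shown by the TextureView.
    Bitmap image = liveView.Bitmap;

    byte[] imageBytes;
    using (var imageStream = new MemoryStream())
    {
        // 90 is an arbitrary JPEG quality value.
        await image.CompressAsync(Bitmap.CompressFormat.Jpeg, 90, imageStream);
        image.Recycle();
        imageBytes = imageStream.ToArray();
    }

    camera.StartPreview();
    return imageBytes;
}
```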
And that's it! After all that code, you can now make use of this camera page. Keep reading for a sample usage:
Usage in Forms
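From the shared project, using the page could look like this (modal navigation is just one option):

```csharp
var cameraPage = new CameraPage();

cameraPage.OnPhotoResult += async (sender, result) =>
{
    await Navigation.PopModalAsync();
    if (!result.Success)
        return;
    // result.Image contains the photo bytes, ready to display or save.
};

await Navigation.PushModalAsync(cameraPage);
```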
If you download the source code and run it, you will see something like this:
Acknowledgements
The code for this post was entirely based on the code from CharpHat, which in turn was based on the Moments app by Pierce Boggan.