
Face detection and live filters


    Live video filters are a popular trend, fueled by Facebook (through its purchase of Msqrd) and Snapchat incorporating them into their apps. These filters apply images or animations to your face using face tracking software. The technology has been around for a while, but it's becoming more common thanks to the powerful CPUs in modern mobile phones. Google provides face tracking capabilities through the Mobile Vision API, part of the Google Play Services library. I'm going to use that API to build a basic live filter app. The end result will look something like this:


    The bounding box wraps around the detected face, and the sunglasses (just a PNG image) are the filter I chose, drawn over the eyes. You could use any PNG image you want (with an alpha channel for the background); you'll just have to adjust the layout according to where the image should be displayed. As you move your head, the box and sunglasses are redrawn in the correct spot. This app is hardly perfect, and you may notice that the image is a bit jittery (especially when you blink), but it's a fun project to mess around with. This app is built on top of Google's FaceTracker sample for the library, so build that first.
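    One detail worth checking before you start: the draw code later on relies on eye landmarks, which the detector only reports when landmark detection is enabled. Here's a minimal sketch of what the FaceDetector setup in the sample's createCameraSource() method might look like with landmarks turned on (the builder methods are from the Mobile Vision API; GraphicFaceTrackerFactory is the factory class from the FaceTracker sample):


FaceDetector detector = new FaceDetector.Builder(context)
        .setTrackingEnabled(true)                    // keep face IDs stable across frames
        .setLandmarkType(FaceDetector.ALL_LANDMARKS) // required for Landmark.LEFT_EYE later
        .setMode(FaceDetector.FAST_MODE)             // favor speed for live video
        .build();

detector.setProcessor(
        new MultiProcessor.Builder<>(new GraphicFaceTrackerFactory()).build());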

    Now, in the Graphic abstract class (inside the GraphicOverlay class), add a getter for the mOverlay field:


public GraphicOverlay getOverlay() {
    return mOverlay;
}


    This will allow us to obtain a reference to the context through the parent GraphicOverlay object within a Graphic class. Next, in the FaceGraphic class, add two instance fields:


private Bitmap bitmap;
private Bitmap sunglasses;


    The bitmap field holds a reference to the original Bitmap we decode, and the sunglasses field is a scaled version of that Bitmap. When the Graphic class is updated with new Face information, we have to scale the image to match the size of the face and move it to the correct position. Keeping two Bitmap objects means every resize starts from the original image, which preserves image quality. Next, at the bottom of the FaceGraphic constructor, we instantiate the Bitmap objects:


// Decode the original image once; the overlay gives us access to the Context.
bitmap = BitmapFactory.decodeResource(getOverlay().getContext().getResources(), R.drawable.sunglasses);
// Start with an unscaled copy; updateFace() will replace it with a scaled version.
sunglasses = bitmap;


    Then we update the size of the sunglasses Bitmap whenever the face changes. In the updateFace method of the FaceGraphic class, before the postInvalidate() call, place the following code:


// Scale the original bitmap so its width matches the face width,
// computing the height from the image's own aspect ratio.
sunglasses = Bitmap.createScaledBitmap(bitmap, (int) scaleX(face.getWidth()),
        (int) scaleY((bitmap.getHeight() * face.getWidth()) / bitmap.getWidth()), false);
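    The math here just preserves the image's aspect ratio: the width is scaled to the detected face's width, and the height is computed as bitmap.getHeight() * face.getWidth() / bitmap.getWidth(), so the sunglasses grow and shrink with the face without being stretched.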


   
    Finally, we draw the image to the screen. Replace the draw method in the FaceGraphic class with the following code:


@Override
public void draw(Canvas canvas) {
    Face face = mFace;
    if (face == null) {
        return;
    }

    // Translate the face's center point from frame coordinates to view coordinates.
    float x = translateX(face.getPosition().x + face.getWidth() / 2);
    float y = translateY(face.getPosition().y + face.getHeight() / 2);

    // Draws a bounding box around the face.
    float xOffset = scaleX(face.getWidth() / 2.0f);
    float yOffset = scaleY(face.getHeight() / 2.0f);
    float left = x - xOffset;
    float top = y - yOffset;
    float right = x + xOffset;
    float bottom = y + yOffset;
    canvas.drawRect(left, top, right, bottom, mBoxPaint);

    // Find the left eye landmark to vertically position the sunglasses;
    // fall back to a spot near the top of the bounding box if no eye was detected.
    float eyeY = top + sunglasses.getHeight() / 2;
    for (Landmark l : face.getLandmarks()) {
        if (l.getType() == Landmark.LEFT_EYE) {
            // Landmark positions are in frame coordinates, so translate them like
            // the rest, then center the image vertically on the eye line.
            eyeY = translateY(l.getPosition().y) - sunglasses.getHeight() / 2;
            break;
        }
    }
    canvas.drawBitmap(sunglasses, left, eyeY, new Paint());
}
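    One small design note: allocating a new Paint inside draw() creates garbage on every frame. A minimal improvement (assuming you add a field to FaceGraphic, initialized once) would look like this:


// In FaceGraphic, created once instead of per frame; FILTER_BITMAP_FLAG
// smooths the scaled image when it's drawn.
private final Paint mBitmapPaint = new Paint(Paint.FILTER_BITMAP_FLAG);

// Then, at the end of draw():
canvas.drawBitmap(sunglasses, left, eyeY, mBitmapPaint);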


    Now run the code and you'll see a box drawn around your head and the sunglasses image over your eyes, even as you move. Note: I didn't provide a sunglasses image, but you can easily find one with a quick search. Also, if the camera isn't the front-facing camera, change the setFacing() call at the bottom of the createCameraSource() method in FaceTrackerActivity to CameraSource.CAMERA_FACING_FRONT:


mCameraSource = new CameraSource.Builder(context, detector)
        .setRequestedPreviewSize(640, 480)
        .setFacing(CameraSource.CAMERA_FACING_FRONT)
        .setRequestedFps(30.0f)
        .build();
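    Keep in mind that setRequestedPreviewSize() and setRequestedFps() are only requests; the CameraSource picks the closest values the hardware actually supports. The camera permission handling from the original sample still applies when you switch to the front camera.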


    And you're finished. Go ahead and mess around by adding and changing images.
