
MediaPipe: Compiling the Android hair_segmentation AAR

Background

elementary OS 5.1.7 Hera (Ubuntu 18.04.4 LTS)

Official GitHub: https://github.com/google/mediapipe

Build Environment Setup

  • Download the source:
git clone https://github.com/google/mediapipe.git
  • Enable a proxy if needed:
export http_proxy=ip:port
export https_proxy=ip:port
  • Install Bazel (using the installer script for your version and OS):
chmod +x bazel-version-installer-os.sh
./bazel-version-installer-os.sh --user

After installation completes, run:

source /usr/local/lib/bazel/bin/bazel-complete.bash

Check the version:

bazel version
  • Install OpenCV and FFmpeg:
sudo apt-get install libopencv-core-dev libopencv-highgui-dev \
libopencv-calib3d-dev libopencv-features2d-dev \
libopencv-imgproc-dev libopencv-video-dev
  • Install the GUI graphics libraries:
sudo apt-get install mesa-common-dev libegl1-mesa-dev libgles2-mesa-dev
  • Install NumPy. In my testing, the MediaPipe scripts use Python 3, so install pip3 first and then NumPy:
sudo apt install python3-pip
pip3 install numpy
  • At this point, the basic environment setup is complete.
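Before moving on, a quick sanity check of the toolchain can save a failed build later. A minimal sketch (the tool list is my assumption of what the following steps need):

```shell
# Report whether a required command is available on PATH.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "OK: $1"
  else
    echo "MISSING: $1"
  fi
}

# Tools used by the steps in this guide.
for t in git bazel python3 pip3; do
  check_tool "$t"
done
```

Any MISSING line means the corresponding install step above needs revisiting.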

Editing the Build Configuration

This section follows the official documentation: https://google.github.io/mediapipe/getting_started/android_archive_library.html

First, create a directory under the apps tree (the name aar_example can be changed):

mkdir mediapipe/examples/android/src/java/com/google/mediapipe/apps/aar_example/

Create the BUILD configuration file:

vi mediapipe/examples/android/src/java/com/google/mediapipe/apps/aar_example/BUILD

Enter the following:

# This line does not need to be changed
load("//mediapipe/java/com/google/mediapipe:mediapipe_aar.bzl", "mediapipe_aar")

mediapipe_aar(
    # Name of the generated AAR
    name = "hair_segmentation_aar",
    calculators = ["//mediapipe/graphs/hair_segmentation:mobile_calculators"],
)

The calculators label has two parts. hair_segmentation is one of the graph types MediaPipe supports; all supported types can be seen under mediapipe/graphs in the repository. mobile_calculators indicates the target platform, which here is mobile, i.e. Android.
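The same pattern applies to any other graph under mediapipe/graphs. For example, a BUILD file for a face detection AAR would look like this (the name is illustrative):

```
load("//mediapipe/java/com/google/mediapipe:mediapipe_aar.bzl", "mediapipe_aar")

mediapipe_aar(
    name = "face_detection_aar",
    calculators = ["//mediapipe/graphs/face_detection:mobile_calculators"],
)
```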

Configuring the Android SDK and NDK

The NDK needs to be complete; for the SDK, a single platform plus the platform tools are enough.

Set the environment variables:

export ANDROID_HOME=/path/to/SDK
export ANDROID_NDK_HOME=/path/to/NDK

In a terminal-only environment, you can download commandlinetools from the official site and then install the platform from the command line:

'/path/to/commandlinetools/sdkmanager' --no_https \
"platforms;android-28" "platform-tools" \
"build-tools;28.0.3" --sdk_root=/path/to/SDK/

Adjust API level 28 as needed.

Building the AAR

With the preparation above complete, just run the build (it takes a long time, so be patient):

bazel build -c opt --host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
--fat_apk_cpu=arm64-v8a,armeabi-v7a \
//mediapipe/examples/android/src/java/com/google/mediapipe/apps/aar_example:"hair_segmentation_aar"

aar_example: the directory containing the BUILD file

hair_segmentation_aar: the AAR name set in the BUILD file

If the build fails near the end because a socket suddenly disconnects, just rerun the build; Bazel resumes from its cache instead of starting from scratch. The final output is written to:

mediapipe/bazel-bin/mediapipe/examples/android/src/java/com/google/mediapipe/apps/aar_example/hair_segmentation_aar.aar

If the resulting AAR is too large, add the flag --linkopt="-s" (roughly 220-230 MB without it, about 8 MB with it):

bazel build -c opt --linkopt="-s" \
--host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
--fat_apk_cpu=arm64-v8a,armeabi-v7a //mediapipe/examples/android/src/java/com/google/mediapipe/apps/aar_example:"hair_segmentation_aar"
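To get the built AAR into an Android project, it can simply be copied into the app module's libs directory. A small sketch with a reusable helper (the paths in the usage comment are placeholders for your own checkout and project):

```shell
# Copy a built AAR into an app module's libs/ directory, creating it if needed.
copy_aar() {
  local aar="$1" app_dir="$2"
  mkdir -p "$app_dir/libs"
  cp "$aar" "$app_dir/libs/" && echo "copied $(basename "$aar")"
}

# Example (placeholder paths):
# copy_aar bazel-bin/.../hair_segmentation_aar.aar /path/to/your/app
```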

Using the AAR

Besides importing the AAR above, the following files are also required:

  • Build the binary graph:
bazel build -c opt mediapipe/graphs/hair_segmentation:mobile_gpu_binary_graph
cp bazel-bin/mediapipe/graphs/hair_segmentation/mobile_gpu.binarypb \
/path/to/your/app/src/main/assets/
  • Copy the hair_segmentation.tflite model:
cp mediapipe/models/hair_segmentation.tflite \
/path/to/your/app/src/main/assets/
  • Download the OpenCV Android SDK from the official site, then copy the native libraries:
cp -R ~/Downloads/OpenCV-android-sdk/sdk/native/libs/arm* \
/path/to/your/app/src/main/jniLibs/
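After the copy steps above, the app tree should contain the graph, the model, and the OpenCV shared libraries. A hypothetical check helper (the file names follow what this guide copies; libopencv_java3.so is assumed from the OpenCV 3 Android SDK, matching the library the Activity loads):

```shell
# Report which of the expected files are missing under an app module directory;
# return non-zero if any are absent.
verify_app_assets() {
  app_dir="$1"; rc=0
  for f in \
      src/main/assets/mobile_gpu.binarypb \
      src/main/assets/hair_segmentation.tflite \
      src/main/jniLibs/arm64-v8a/libopencv_java3.so; do
    if [ -f "$app_dir/$f" ]; then
      echo "found:   $f"
    else
      echo "missing: $f"; rc=1
    fi
  done
  return $rc
}
```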
  • Then the additional dependencies, in the module's Gradle file:
implementation name: 'hair_segmentation_aar', ext: 'aar'

// MediaPipe deps
implementation 'com.google.flogger:flogger:0.3.1'
implementation 'com.google.flogger:flogger-system-backend:0.3.1'
implementation 'com.google.code.findbugs:jsr305:3.0.2'
implementation 'com.google.guava:guava:27.0.1-android'
implementation 'com.google.protobuf:protobuf-java:3.11.4'
// CameraX core library
def camerax_version = "1.0.0-beta09"
implementation "androidx.camera:camera-core:$camerax_version"
implementation "androidx.camera:camera-camera2:$camerax_version"
implementation "androidx.camera:camera-lifecycle:$camerax_version"
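Note that for the `implementation name: 'hair_segmentation_aar', ext: 'aar'` line to resolve, Gradle needs a flatDir repository pointing at wherever the AAR was placed. A minimal fragment, assuming the AAR was copied into the module's libs directory:

```gradle
repositories {
    flatDir {
        // Directory holding hair_segmentation_aar.aar (an assumption; match your copy step).
        dirs 'libs'
    }
}
```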
  • Next, the XML layout:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent">
    <FrameLayout
        android:id="@+id/preview_display_layout"
        android:layout_width="match_parent"
        android:layout_height="match_parent">
        <TextView
            android:id="@+id/no_camera_access_view"
            android:layout_height="match_parent"
            android:layout_width="match_parent"
            android:gravity="center"
            android:text="@string/no_camera_access" />
    </FrameLayout>
</androidx.constraintlayout.widget.ConstraintLayout>
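The layout above references @string/no_camera_access, so that resource must exist, for example in res/values/strings.xml (the wording here is my own):

```xml
<resources>
    <string name="no_camera_access">This app requires camera access.</string>
</resources>
```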
  • Activity:
package com.lckiss.madiepipehairsegment

import android.graphics.SurfaceTexture
import android.os.Bundle
import android.util.Log
import android.util.Size
import android.view.SurfaceHolder
import android.view.SurfaceView
import android.view.View
import android.view.ViewGroup
import androidx.appcompat.app.AppCompatActivity
import com.google.mediapipe.components.CameraHelper.CameraFacing
import com.google.mediapipe.components.CameraXPreviewHelper
import com.google.mediapipe.components.ExternalTextureConverter
import com.google.mediapipe.components.FrameProcessor
import com.google.mediapipe.components.PermissionHelper
import com.google.mediapipe.framework.AndroidAssetUtil
import com.google.mediapipe.glutil.EglManager

/** Main activity of MediaPipe example apps. */
class MainActivity : AppCompatActivity() {
    companion object {
        private const val TAG = "MainActivity"
        private const val BINARY_GRAPH_NAME = "mobile_gpu.binarypb"
        private const val INPUT_VIDEO_STREAM_NAME = "input_video"
        private const val OUTPUT_VIDEO_STREAM_NAME = "output_video"
        private val CAMERA_FACING = CameraFacing.FRONT

        // Flips the camera-preview frames vertically before sending them into FrameProcessor to be
        // processed in a MediaPipe graph, and flips the processed frames back when they are displayed.
        // This is needed because OpenGL represents images assuming the image origin is at the bottom-left
        // corner, whereas MediaPipe in general assumes the image origin is at top-left.
        private const val FLIP_FRAMES_VERTICALLY = true

        init {
            // Load all native libraries needed by the app.
            System.loadLibrary("mediapipe_jni")
            System.loadLibrary("opencv_java3")
        }
    }

    // {@link SurfaceTexture} where the camera-preview frames can be accessed.
    private var previewFrameTexture: SurfaceTexture? = null

    // {@link SurfaceView} that displays the camera-preview frames processed by a MediaPipe graph.
    private lateinit var previewDisplayView: SurfaceView

    // Creates and manages an {@link EGLContext}.
    private lateinit  var eglManager: EglManager

    // Sends camera-preview frames into a MediaPipe graph for processing, and displays the processed
    // frames onto a {@link Surface}.
    private lateinit var processor: FrameProcessor

    // Converts the GL_TEXTURE_EXTERNAL_OES texture from Android camera into a regular texture to be
    // consumed by {@link FrameProcessor} and the underlying MediaPipe graph.
    private lateinit var converter: ExternalTextureConverter

    // Handles camera access via the {@link CameraX} Jetpack support library.
    private lateinit var cameraHelper: CameraXPreviewHelper

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        previewDisplayView = SurfaceView(this)
        setupPreviewDisplayView()
        // Initialize asset manager so that MediaPipe native libraries can access the app assets, e.g.,
        // binary graphs.
        AndroidAssetUtil.initializeNativeAssetManager(this)
        eglManager = EglManager(null)
        processor = FrameProcessor(
                this,
                eglManager.nativeContext,
                BINARY_GRAPH_NAME,
                INPUT_VIDEO_STREAM_NAME,
                OUTPUT_VIDEO_STREAM_NAME)
        processor.videoSurfaceOutput.setFlipY(FLIP_FRAMES_VERTICALLY)
        PermissionHelper.checkAndRequestCameraPermissions(this)
    }

    override fun onResume() {
        super.onResume()
        converter = ExternalTextureConverter(eglManager.context)
        converter.setFlipY(FLIP_FRAMES_VERTICALLY)
        converter.setConsumer(processor)
        if (PermissionHelper.cameraPermissionsGranted(this)) {
            startCamera()
        }
    }

    override fun onPause() {
        super.onPause()
        converter.close()
    }

    override fun onRequestPermissionsResult(
            requestCode: Int, permissions: Array<String>, grantResults: IntArray) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults)
        PermissionHelper.onRequestPermissionsResult(requestCode, permissions, grantResults)
    }

    private fun setupPreviewDisplayView() {
        previewDisplayView.visibility = View.GONE
        val viewGroup = findViewById<ViewGroup>(R.id.preview_display_layout)
        viewGroup.addView(previewDisplayView)


        previewDisplayView
                .holder
                .addCallback(
                        object : SurfaceHolder.Callback {
                            override fun surfaceCreated(holder: SurfaceHolder) {
                                processor.videoSurfaceOutput.setSurface(holder.surface)
                            }

                            override fun surfaceChanged(holder: SurfaceHolder, format: Int, width: Int, height: Int) {
                                // (Re-)Compute the ideal size of the camera-preview display (the area that the
                                // camera-preview frames get rendered onto, potentially with scaling and rotation)
                                // based on the size of the SurfaceView that contains the display.
                                // val viewSize = Size(width, height)
                                // val displaySize = cameraHelper.computeDisplaySizeFromViewSize(viewSize)

                                // Connect the converter to the camera-preview frames as its input (via
                                // previewFrameTexture), and configure the output width and height as the computed
                                // display size.

                                viewGroup.post {
                                    val viewSize = Size(viewGroup.width, viewGroup.height)
                                    converter.setSurfaceTextureAndAttachToGLContext(
                                            previewFrameTexture, viewSize.width, viewSize.height)
                                }
                            }

                            override fun surfaceDestroyed(holder: SurfaceHolder) {
                                processor.videoSurfaceOutput.setSurface(null)
                            }
                        })
    }

    private fun startCamera() {
        cameraHelper = CameraXPreviewHelper()
        cameraHelper.setOnCameraStartedListener { surfaceTexture: SurfaceTexture? ->
            Log.d(TAG, "startCamera: ")
            previewFrameTexture = surfaceTexture
            // Make the display view visible to start showing the preview. This triggers the
            // SurfaceHolder.Callback added to (the holder of) previewDisplayView.
            previewDisplayView.visibility = View.VISIBLE
        }
        cameraHelper.startCamera(this, CAMERA_FACING,  /*surfaceTexture=*/null)
    }
}
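One thing the Activity relies on but the article does not show: the camera permission must be declared in AndroidManifest.xml. A minimal fragment (the uses-feature line is my assumption for a camera-based GPU graph):

```xml
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" android:required="true" />
```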

Conclusion

After a series of experiments, and like most articles online, this post only achieves simple hair detection and color replacement on a video stream. Since I could not find a relevant demo, how to obtain the segmented region or customize the replacement color remains unknown. It may be possible by modifying the C++ layer, but I am not familiar with that.

As for current hair segmentation technology: position detection is available in Huawei's ML SDK, Huawei devices additionally have the Huawei HiAI Engine (see Huawei's official site), and Alibaba's DAMO Academy has also released a version that can be tried out. As for a free option, I did not find one.

The result? It really is just this: no more features and no fewer.

[Figure: hair_segmentation demo]

References

MediaPipe Android Archive

Building MediaPipe Examples

Why my aar file so large more than 100 MB ?

Mediapipe 安裝教學 (MediaPipe installation tutorial, in Chinese)

