TITLE: MediaPipe Face Solution Configuration Options DESCRIPTION: This section details the configurable parameters for the MediaPipe Face Solution, influencing its behavior for face detection and landmark tracking. These options allow users to optimize the solution for video streams, static images, control the number of detected faces, refine landmark accuracy, and set detection confidence thresholds. SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_mesh.md#_snippet_0 LANGUAGE: APIDOC CODE: ``` Configuration Options: static_image_mode: boolean Description: If set to 'false', the solution treats input images as a video stream, optimizing for latency by tracking faces after initial detection. It will try to detect faces in the first input images, and upon successful detection, further localizes the face landmarks. In subsequent images, it tracks those landmarks without invoking another detection until it loses track of any faces. If 'true', face detection runs on every input image, ideal for processing a batch of static, possibly unrelated images. Default: false max_num_faces: integer Description: Specifies the maximum number of faces the solution should detect and track simultaneously. Default: 1 refine_landmarks: boolean Description: Determines whether to apply further refinement to landmark coordinates around the eyes and lips, and to output additional landmarks for the irises by applying the Attention Mesh Model. Default: false min_detection_confidence: float (range [0.0, 1.0]) Description: The minimum confidence value from the face detection model for a detection to be considered successful. Default: 0.5 ``` ---------------------------------------- TITLE: MediaPipe Face Geometry Calculators DESCRIPTION: Provides processing units for generating virtual environments, extracting 3D face transforms from landmark data, and rendering face effects within the MediaPipe framework. SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/mediapipe/modules/face_geometry/README.md#_snippet_1 LANGUAGE: APIDOC CODE: ``` FaceGeometryEnvGeneratorCalculator: Generates an environment that describes a virtual scene. FaceGeometryPipelineCalculator: Extracts face 3D transform for multiple faces from a vector of landmark lists. FaceGeometryEffectRendererCalculator: Renders a face effect. ``` ---------------------------------------- TITLE: MediaPipe Face Geometry Protocol Buffers DESCRIPTION: Defines the data structures used for representing environmental parameters, metadata for face geometry pipelines, 3D face transform data, and generic 3D mesh surfaces within MediaPipe's face geometry module. SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/mediapipe/modules/face_geometry/README.md#_snippet_0 LANGUAGE: APIDOC CODE: ``` face_geometry.Environment: Describes an environment; includes the camera frame origin point location as well as virtual camera parameters. face_geometry.GeometryPipelineMetadata: Describes metadata needed to estimate face 3D transform based on the face landmark module result. face_geometry.FaceGeometry: Describes 3D transform data for a single face; includes a face mesh surface and a face pose in a given environment. face_geometry.Mesh3d: Describes a 3D mesh triangular surface. ``` ---------------------------------------- TITLE: MediaPipe Face Mesh Android Solution API Configuration Options DESCRIPTION: Details the configurable options for the MediaPipe Face Mesh Android solution. 
These options include static image mode, maximum number of faces, landmark refinement, and the choice to run inference on GPU or CPU. SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_mesh.md#_snippet_7 LANGUAGE: APIDOC CODE: ``` staticImageMode: boolean - Description: Whether to treat the input as a static image or a video stream. maxNumFaces: number - Description: Maximum number of faces to detect. refineLandmarks: boolean - Description: Whether to refine face landmarks. runOnGpu: boolean - Description: Run the pipeline and the model inference on GPU or CPU. ``` ---------------------------------------- TITLE: MediaPipe Canonical Face Model Assets DESCRIPTION: References to various formats of the canonical face model used in MediaPipe for face geometry. These assets, including FBX, OBJ, and a UV visualization image, are crucial for understanding and working with 3D face transformations within the MediaPipe framework. SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/__wiki__/MediaPipe-Face-Mesh.md#_snippet_3 LANGUAGE: APIDOC CODE: ``` FBX: mediapipe/modules/face_geometry/data/canonical_face_model.fbx OBJ: mediapipe/modules/face_geometry/data/canonical_face_model.obj UV visualization: mediapipe/modules/face_geometry/data/canonical_face_model_uv_visualization.png ``` ---------------------------------------- TITLE: MediaPipe Face Detection Solution APIs DESCRIPTION: Defines the configuration options and output structure for the MediaPipe Face Detection solution, including model selection, confidence thresholds, and the format of detected faces. These APIs allow customization of detection behavior and describe the structure of the detection results. SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_detection.md#_snippet_0 LANGUAGE: APIDOC CODE: ``` Configuration Options: model_selection: integer (0 or 1) - Description: Selects the detection model. Use 0 for short-range (within 2 meters), 1 for full-range (within 5 meters). - Default: 0 - Note: Not available for JavaScript (use "model" instead). model: string ("short" or "full") - Description: Specifies the detection model. "short" for short-range, "full" for full-range. - Default: "" (empty string) - Note: Valid only for JavaScript solution. selfie_mode: boolean - Description: Indicates whether to flip images/video frames horizontally. - Default: false - Note: Valid only for JavaScript solution. min_detection_confidence: float ([0.0, 1.0]) - Description: Minimum confidence value from the face detection model for a detection to be considered successful. - Default: 0.5 Output Structure: detections: Collection of detected faces - Description: Each face is represented as a detection proto message containing a bounding box and 6 key points. - Bounding Box: - xmin: float ([0.0, 1.0]) - normalized by image width - width: float ([0.0, 1.0]) - normalized by image width - ymin: float ([0.0, 1.0]) - normalized by image height - height: float ([0.0, 1.0]) - normalized by image height - Key Points (6 total: right eye, left eye, nose tip, mouth center, right ear tragion, left ear tragion): - x: float ([0.0, 1.0]) - normalized by image width - y: float ([0.0, 1.0]) - normalized by image height ``` ---------------------------------------- TITLE: Process Static Images with MediaPipe Face Mesh in Python DESCRIPTION: Demonstrates how to use the MediaPipe Face Mesh solution in Python to detect and track face landmarks in static images. 
It initializes the FaceMesh model with specific configurations, processes images, and draws the detected landmarks, including tesselation, contours, and irises. SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_mesh.md#_snippet_2 LANGUAGE: python CODE: ``` import cv2 import mediapipe as mp mp_drawing = mp.solutions.drawing_utils mp_drawing_styles = mp.solutions.drawing_styles mp_face_mesh = mp.solutions.face_mesh # For static images: IMAGE_FILES = [] drawing_spec = mp_drawing.DrawingSpec(thickness=1, circle_radius=1) with mp_face_mesh.FaceMesh( static_image_mode=True, max_num_faces=1, refine_landmarks=True, min_detection_confidence=0.5) as face_mesh: for idx, file in enumerate(IMAGE_FILES): image = cv2.imread(file) # Convert the BGR image to RGB before processing. results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB)) # Print and draw face mesh landmarks on the image. if not results.multi_face_landmarks: continue annotated_image = image.copy() for face_landmarks in results.multi_face_landmarks: print('face_landmarks:', face_landmarks) mp_drawing.draw_landmarks( image=annotated_image, landmark_list=face_landmarks, connections=mp_face_mesh.FACEMESH_TESSELATION, landmark_drawing_spec=None, connection_drawing_spec=mp_drawing_styles .get_default_face_mesh_tesselation_style()) mp_drawing.draw_landmarks( image=annotated_image, landmark_list=face_landmarks, connections=mp_face_mesh.FACEMESH_CONTOURS, landmark_drawing_spec=None, connection_drawing_spec=mp_drawing_styles .get_default_face_mesh_contours_style()) mp_drawing.draw_landmarks( image=annotated_image, landmark_list=face_landmarks, connections=mp_face_mesh.FACEMESH_IRISES, landmark_drawing_spec=None, connection_drawing_spec=mp_drawing_styles .get_default_face_mesh_iris_connections_style()) cv2.imwrite('/tmp/annotated_image' + str(idx) + '.png', annotated_image) ``` ---------------------------------------- TITLE: Real-time Face Mesh Detection from Webcam with MediaPipe DESCRIPTION: This Python snippet initializes MediaPipe Face Mesh and processes a live webcam feed to detect faces and draw their mesh annotations (tesselation, contours, irises) in real-time. It uses OpenCV for camera capture and display, handling frame reading, color conversion, and rendering. The output is a flipped image for a selfie-view display. SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_mesh.md#_snippet_3 LANGUAGE: python CODE: ``` drawing_spec = mp_drawing.DrawingSpec(thickness=1, circle_radius=1) cap = cv2.VideoCapture(0) with mp_face_mesh.FaceMesh( max_num_faces=1, refine_landmarks=True, min_detection_confidence=0.5, min_tracking_confidence=0.5) as face_mesh: while cap.isOpened(): success, image = cap.read() if not success: print("Ignoring empty camera frame.") # If loading a video, use 'break' instead of 'continue'. continue # To improve performance, optionally mark the image as not writeable to # pass by reference. image.flags.writeable = False image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) results = face_mesh.process(image) # Draw the face mesh annotations on the image. 
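# Re-enable writes on the frame and convert it back from RGB to BGR so the
# drawing utilities and cv2.imshow below render with the colors OpenCV expects.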
image.flags.writeable = True image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) if results.multi_face_landmarks: for face_landmarks in results.multi_face_landmarks: mp_drawing.draw_landmarks( image=image, landmark_list=face_landmarks, connections=mp_face_mesh.FACEMESH_TESSELATION, landmark_drawing_spec=None, connection_drawing_spec=mp_drawing_styles .get_default_face_mesh_tesselation_style()) mp_drawing.draw_landmarks( image=image, landmark_list=face_landmarks, connections=mp_face_mesh.FACEMESH_CONTOURS, landmark_drawing_spec=None, connection_drawing_spec=mp_drawing_styles .get_default_face_mesh_contours_style()) mp_drawing.draw_landmarks( image=image, landmark_list=face_landmarks, connections=mp_face_mesh.FACEMESH_IRISES, landmark_drawing_spec=None, connection_drawing_spec=mp_drawing_styles .get_default_face_mesh_iris_connections_style()) # Flip the image horizontally for a selfie-view display. cv2.imshow('MediaPipe Face Mesh', cv2.flip(image, 1)) if cv2.waitKey(5) & 0xFF == 27: break cap.release() ``` ---------------------------------------- TITLE: Implement Real-time Face Detection in JavaScript with MediaPipe DESCRIPTION: This JavaScript code demonstrates how to initialize and use MediaPipe Face Detection in a web browser. It sets up a video stream from the webcam, processes frames for face detection, and draws the results (bounding boxes and landmarks) onto a canvas element. It utilizes MediaPipe's `FaceDetection` and `Camera` classes. SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_detection.md#_snippet_5 LANGUAGE: javascript CODE: ``` ``` ---------------------------------------- TITLE: MediaPipe Face Geometry Subgraphs DESCRIPTION: Defines reusable graph configurations for extracting 3D face transforms from detection or landmark data, facilitating complex pipeline construction within MediaPipe. Note that `FaceGeometry` is deprecated in favor of `FaceGeometryFromLandmarks`. SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/mediapipe/modules/face_geometry/README.md#_snippet_2 LANGUAGE: APIDOC CODE: ``` FaceGeometryFromDetection: Extracts 3D transform from face detection for multiple faces. FaceGeometryFromLandmarks: Extracts 3D transform from face landmarks for multiple faces. FaceGeometry: Extracts 3D transform from face landmarks for multiple faces. Deprecated, please use `FaceGeometryFromLandmarks` in the new code. ``` ---------------------------------------- TITLE: MediaPipe Face Mesh Desktop CPU Configuration DESCRIPTION: Specifies the MediaPipe graph and Bazel build target for running the Face Mesh solution on desktop CPUs. The graph defines the processing pipeline, and the build target compiles and runs the application. The maximum number of faces can be adjusted in the graph file by modifying the `ConstantSidePacketCalculator` option. SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_mesh.md#_snippet_11 LANGUAGE: MediaPipe Graph Path CODE: ``` mediapipe/graphs/face_mesh/face_mesh_desktop_live.pbtxt ``` LANGUAGE: Bazel Build Target CODE: ``` mediapipe/examples/desktop/face_mesh:face_mesh_cpu ``` ---------------------------------------- TITLE: Import MediaPipe Face Detection Modules in Python DESCRIPTION: This snippet demonstrates the necessary import statements for using the MediaPipe Face Detection and drawing utility modules in a Python application. It initializes the `mp_face_detection` and `mp_drawing` objects, which are essential for setting up and visualizing face detection tasks. 
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_detection.md#_snippet_1 LANGUAGE: Python CODE: ``` import cv2 import mediapipe as mp mp_face_detection = mp.solutions.face_detection mp_drawing = mp.solutions.drawing_utils ``` ---------------------------------------- TITLE: Configure MediaPipe Face Mesh for Desktop CPU DESCRIPTION: This configuration details the graph and build target for running MediaPipe Face Mesh on a desktop CPU. It specifies the `.pbtxt` graph definition and the Bazel build target for compilation and execution. SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/__wiki__/MediaPipe-Face-Mesh.md#_snippet_0 LANGUAGE: APIDOC CODE: ``` Graph: mediapipe/graphs/face_mesh/face_mesh_desktop_live.pbtxt Target: mediapipe/examples/desktop/face_mesh:face_mesh_cpu ``` ---------------------------------------- TITLE: Perform Face Detection on Static Images using MediaPipe in Python DESCRIPTION: This Python snippet demonstrates how to use MediaPipe's Face Detection module to detect faces in a list of static image files. It reads images using OpenCV, processes them with MediaPipe, draws bounding boxes and key points, and saves the annotated images. Dependencies include MediaPipe and OpenCV. SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_detection.md#_snippet_2 LANGUAGE: python CODE: ``` # For static images: IMAGE_FILES = [] with mp_face_detection.FaceDetection( model_selection=1, min_detection_confidence=0.5) as face_detection: for idx, file in enumerate(IMAGE_FILES): image = cv2.imread(file) # Convert the BGR image to RGB and process it with MediaPipe Face Detection. results = face_detection.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB)) # Draw face detections of each face. if not results.detections: continue annotated_image = image.copy() for detection in results.detections: print('Nose tip:') print(mp_face_detection.get_key_point( detection, mp_face_detection.FaceKeyPoint.NOSE_TIP)) mp_drawing.draw_detection(annotated_image, detection) cv2.imwrite('/tmp/annotated_image' + str(idx) + '.png', annotated_image) ``` ---------------------------------------- TITLE: MediaPipe Face Mesh JavaScript Solution API Configuration Options DESCRIPTION: Details the configurable options for the MediaPipe Face Mesh JavaScript solution. These options control aspects like the maximum number of faces to detect, whether to refine landmarks, and confidence thresholds for detection and tracking. SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_mesh.md#_snippet_6 LANGUAGE: APIDOC CODE: ``` maxNumFaces: number - Description: Maximum number of faces to detect. refineLandmarks: boolean - Description: Whether to refine face landmarks. minDetectionConfidence: number (0.0 - 1.0) - Description: Minimum confidence score for face detection to be considered successful. minTrackingConfidence: number (0.0 - 1.0) - Description: Minimum confidence score for face tracking to be considered successful. ``` ---------------------------------------- TITLE: HTML Setup for MediaPipe Face Detection Web Application DESCRIPTION: This HTML snippet provides the basic structure for a web application utilizing MediaPipe Face Detection. It includes necessary script imports for MediaPipe utility libraries (camera, control, drawing) and the core Face Detection module, along with video and canvas elements for input and output display. 
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_detection.md#_snippet_4 LANGUAGE: html CODE: ```
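<!-- The snippet body was not captured here; below is a minimal hedged sketch assembled from the
     description above. It assumes the @mediapipe utility packages published on the jsdelivr CDN
     and the .input_video / .output_canvas class names used by the JavaScript example. -->
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <script src="https://cdn.jsdelivr.net/npm/@mediapipe/camera_utils/camera_utils.js" crossorigin="anonymous"></script>
    <script src="https://cdn.jsdelivr.net/npm/@mediapipe/control_utils/control_utils.js" crossorigin="anonymous"></script>
    <script src="https://cdn.jsdelivr.net/npm/@mediapipe/drawing_utils/drawing_utils.js" crossorigin="anonymous"></script>
    <script src="https://cdn.jsdelivr.net/npm/@mediapipe/face_detection/face_detection.js" crossorigin="anonymous"></script>
  </head>
  <body>
    <div class="container">
      <!-- Webcam input element and the canvas on which detections are drawn. -->
      <video class="input_video"></video>
      <canvas class="output_canvas" width="1280px" height="720px"></canvas>
    </div>
  </body>
</html>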
``` ---------------------------------------- TITLE: Initialize and Use MediaPipe Face Detector in JavaScript DESCRIPTION: This snippet demonstrates how to initialize the MediaPipe Face Detector task from a model path and use it to detect faces within an HTML image element. It requires the MediaPipe Tasks Vision WASM module and a pre-trained face detection model. SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/mediapipe/tasks/web/vision/README.md#_snippet_0 LANGUAGE: JavaScript CODE: ``` const vision = await FilesetResolver.forVisionTasks( "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm" ); const faceDetector = await FaceDetector.createFromModelPath(vision, "https://storage.googleapis.com/mediapipe-models/face_detector/blaze_face_short_range/float16/1/blaze_face_short_range.tflite" ); const image = document.getElementById("image") as HTMLImageElement; const detections = faceDetector.detect(image); ``` ---------------------------------------- TITLE: MediaPipe Face Effect Mobile Configuration DESCRIPTION: Specifies the MediaPipe graph and build targets for the Face Effect application on mobile platforms (Android and iOS). This example is optimized for single-face detection to enhance user experience. SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_mesh.md#_snippet_13 LANGUAGE: MediaPipe Graph Path CODE: ``` mediapipe/graphs/face_effect/face_effect_gpu.pbtxt ``` LANGUAGE: Android Build Target CODE: ``` mediapipe/examples/android/src/java/com/google/mediapipe/apps/faceeffect ``` LANGUAGE: iOS Build Target CODE: ``` mediapipe/examples/ios/faceeffect ``` ---------------------------------------- TITLE: Configure MediaPipe Face Mesh for Desktop GPU DESCRIPTION: This configuration details the graph and build target for running MediaPipe Face Mesh on a desktop GPU. It uses a GPU-specific graph definition and a Bazel build target. The maximum number of faces to detect/process can be adjusted by modifying the `ConstantSidePacketCalculator` option within the graph file. SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/__wiki__/MediaPipe-Face-Mesh.md#_snippet_1 LANGUAGE: APIDOC CODE: ``` Graph: mediapipe/graphs/face_mesh/face_mesh_desktop_live_gpu.pbtxt Target: mediapipe/examples/desktop/face_mesh:face_mesh_gpu ``` ---------------------------------------- TITLE: Perform Real-time Face Detection from Webcam using MediaPipe in Python DESCRIPTION: This Python snippet shows how to perform real-time face detection from a webcam feed using MediaPipe. It continuously captures frames, processes them for face detection, draws annotations, and displays the output. It requires OpenCV for camera access and display, and MediaPipe for detection. SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_detection.md#_snippet_3 LANGUAGE: python CODE: ``` # For webcam input: cap = cv2.VideoCapture(0) with mp_face_detection.FaceDetection( model_selection=0, min_detection_confidence=0.5) as face_detection: while cap.isOpened(): success, image = cap.read() if not success: print("Ignoring empty camera frame.") # If loading a video, use 'break' instead of 'continue'. continue # To improve performance, optionally mark the image as not writeable to # pass by reference. image.flags.writeable = False image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) results = face_detection.process(image) # Draw the face detection annotations on the image. 
image.flags.writeable = True image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) if results.detections: for detection in results.detections: mp_drawing.draw_detection(image, detection) # Flip the image horizontally for a selfie-view display. cv2.imshow('MediaPipe Face Detection', cv2.flip(image, 1)) if cv2.waitKey(5) & 0xFF == 27: break cap.release() ``` ---------------------------------------- TITLE: Initialize MediaPipe Face Detection and Process Image Input on Android DESCRIPTION: This Java code snippet demonstrates how to set up MediaPipe Face Detection for static image processing on Android. It includes configuring detection options, handling detection results and errors, and integrating with an `ActivityResultLauncher` to select an image from the gallery for analysis. The detected face landmarks, specifically the nose tip, are logged, and the results are visualized on a custom `ImageView`. SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_detection.md#_snippet_7 LANGUAGE: java CODE: ``` // For reading images from gallery and drawing the output in an ImageView. FaceDetectionOptions faceDetectionOptions = FaceDetectionOptions.builder() .setStaticImageMode(true) .setModelSelection(0).build(); FaceDetection faceDetection = new FaceDetection(this, faceDetectionOptions); // Connects MediaPipe Face Detection Solution to the user-defined ImageView // instance that allows users to have the custom drawing of the output landmarks // on it. See mediapipe/examples/android/solutions/facedetection/src/main/java/com/google/mediapipe/examples/facedetection/FaceDetectionResultImageView.java // as an example. FaceDetectionResultImageView imageView = new FaceDetectionResultImageView(this); faceDetection.setResultListener( faceDetectionResult -> { if (faceDetectionResult.multiFaceDetections().isEmpty()) { return; } int width = faceDetectionResult.inputBitmap().getWidth(); int height = faceDetectionResult.inputBitmap().getHeight(); RelativeKeypoint noseTip = faceDetectionResult .multiFaceDetections() .get(0) .getLocationData() .getRelativeKeypoints(FaceKeypoint.NOSE_TIP); Log.i( TAG, String.format( "MediaPipe Face Detection nose tip coordinates (pixel values): x=%f, y=%f", noseTip.getX() * width, noseTip.getY() * height)); // Request canvas drawing. imageView.setFaceDetectionResult(faceDetectionResult); runOnUiThread(() -> imageView.update()); }); faceDetection.setErrorListener( (message, e) -> Log.e(TAG, "MediaPipe Face Detection error:" + message)); // ActivityResultLauncher to get an image from the gallery as Bitmap. ActivityResultLauncher imageGetter = registerForActivityResult( new ActivityResultContracts.StartActivityForResult(), result -> { Intent resultIntent = result.getData(); if (resultIntent != null && result.getResultCode() == RESULT_OK) { Bitmap bitmap = null; try { bitmap = MediaStore.Images.Media.getBitmap( this.getContentResolver(), resultIntent.getData()); // Please also rotate the Bitmap based on its orientation. } catch (IOException e) { Log.e(TAG, "Bitmap reading error:" + e); } if (bitmap != null) { faceDetection.send(bitmap); } } }); Intent pickImageIntent = new Intent(Intent.ACTION_PICK); pickImageIntent.setDataAndType(MediaStore.Images.Media.INTERNAL_CONTENT_URI, "image/*"); imageGetter.launch(pickImageIntent); ``` ---------------------------------------- TITLE: MediaPipe Face Mesh Desktop GPU Configuration DESCRIPTION: Specifies the MediaPipe graph and Bazel build target for running the Face Mesh solution on desktop GPUs. 
The graph defines the processing pipeline, and the build target compiles and runs the application. The maximum number of faces can be adjusted in the graph file by modifying the `ConstantSidePacketCalculator` option. SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_mesh.md#_snippet_12 LANGUAGE: MediaPipe Graph Path CODE: ``` mediapipe/graphs/face_mesh/face_mesh_desktop_live_gpu.pbtxt ``` LANGUAGE: Bazel Build Target CODE: ``` mediapipe/examples/desktop/face_mesh:face_mesh_gpu ``` ---------------------------------------- TITLE: Configure MediaPipe Face Detection for Android Camera Input DESCRIPTION: This Java code snippet demonstrates the complete setup for integrating MediaPipe Face Detection with an Android camera. It initializes the `FaceDetection` solution, configures `CameraInput` to feed frames, and sets up `SolutionGlSurfaceView` for real-time OpenGL rendering of detection results, including logging specific keypoints like the nose tip. SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_detection.md#_snippet_6 LANGUAGE: java CODE: ``` // For camera input and result rendering with OpenGL. FaceDetectionOptions faceDetectionOptions = FaceDetectionOptions.builder() .setStaticImageMode(false) .setModelSelection(0).build(); FaceDetection faceDetection = new FaceDetection(this, faceDetectionOptions); faceDetection.setErrorListener( (message, e) -> Log.e(TAG, "MediaPipe Face Detection error:" + message)); // Initializes a new CameraInput instance and connects it to MediaPipe Face Detection Solution. CameraInput cameraInput = new CameraInput(this); cameraInput.setNewFrameListener( textureFrame -> faceDetection.send(textureFrame)); // Initializes a new GlSurfaceView with a ResultGlRenderer instance // that provides the interfaces to run user-defined OpenGL rendering code. // See mediapipe/examples/android/solutions/facedetection/src/main/java/com/google/mediapipe/examples/facedetection/FaceDetectionResultGlRenderer.java // as an example. SolutionGlSurfaceView glSurfaceView = new SolutionGlSurfaceView<>( this, faceDetection.getGlContext(), faceDetection.getGlMajorVersion()); glSurfaceView.setSolutionResultRenderer(new FaceDetectionResultGlRenderer()); glSurfaceView.setRenderInputImage(true); faceDetection.setResultListener( faceDetectionResult -> { if (faceDetectionResult.multiFaceDetections().isEmpty()) { return; } RelativeKeypoint noseTip = faceDetectionResult .multiFaceDetections() .get(0) .getLocationData() .getRelativeKeypoints(FaceKeypoint.NOSE_TIP); Log.i( TAG, String.format( "MediaPipe Face Detection nose tip normalized coordinates (value range: [0, 1]): x=%f, y=%f", noseTip.getX(), noseTip.getY())); // Request GL rendering. glSurfaceView.setRenderData(faceDetectionResult); glSurfaceView.requestRender(); }); // The runnable to start camera after the GLSurfaceView is attached. glSurfaceView.post( () -> cameraInput.start( this, faceDetection.getGlContext(), CameraInput.CameraFacing.FRONT, glSurfaceView.getWidth(), glSurfaceView.getHeight())); ``` ---------------------------------------- TITLE: Initialize and Use MediaPipe Face Stylizer in JavaScript DESCRIPTION: This snippet illustrates the process of initializing the MediaPipe Face Stylizer task from a model path and applying face stylization to an HTML image. It depends on the MediaPipe Tasks Vision WASM module and a pre-trained face stylization model. 
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/mediapipe/tasks/web/vision/README.md#_snippet_2 LANGUAGE: JavaScript CODE: ``` const vision = await FilesetResolver.forVisionTasks( "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm" ); const faceStylizer = await FaceStylizer.createFromModelPath(vision, "https://storage.googleapis.com/mediapipe-models/face_stylizer/blaze_face_stylizer/float32/1/blaze_face_stylizer.task" ); const image = document.getElementById("image") as HTMLImageElement; const stylizedImage = faceStylizer.stylize(image); ``` ---------------------------------------- TITLE: JavaScript MediaPipe Face Mesh Web Camera Integration DESCRIPTION: This JavaScript code demonstrates how to integrate MediaPipe Face Mesh with a web camera. It initializes FaceMesh, sets configuration options like maxNumFaces and refineLandmarks, processes video frames, and draws face landmarks on a canvas using MediaPipe's drawing utilities. SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_mesh.md#_snippet_5 LANGUAGE: JavaScript CODE: ``` ``` ---------------------------------------- TITLE: MediaPipe Face Mesh Solution API Reference DESCRIPTION: Comprehensive API documentation for the MediaPipe Face Mesh solution, detailing its configurable parameters and the structure of its output data. This includes parameters like `min_tracking_confidence` and the `multi_face_landmarks` output format, along with other supported configuration options. SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_mesh.md#_snippet_1 LANGUAGE: APIDOC CODE: ``` MediaPipe Face Mesh Solution API: Configuration Options: - min_tracking_confidence (float): Description: Minimum confidence value [0.0, 1.0] from the landmark-tracking model for face landmarks to be considered tracked successfully. If confidence is below this, face detection is re-invoked on the next input image. Setting a higher value increases robustness but may increase latency. Ignored if static_image_mode is true. Default: 0.5 - static_image_mode (bool): (Refer to general documentation for details) - max_num_faces (int): (Refer to general documentation for details) - refine_landmarks (bool): (Refer to general documentation for details) - min_detection_confidence (float): (Refer to general documentation for details) Output: - multi_face_landmarks (Collection of FaceLandmarks): Description: Collection of detected/tracked faces. Each face is represented as a list of 468 face landmarks. Each landmark is composed of x, y, and z coordinates. x, y: Normalized to [0.0, 1.0] by the image width and height respectively. z: Represents the landmark depth with the depth at the center of the head being the origin. Smaller values indicate closer proximity to the camera. The magnitude of z uses roughly the same scale as x. ``` ---------------------------------------- TITLE: Configure MediaPipe Face Mesh for Android Video Input DESCRIPTION: This Java code demonstrates how to initialize and configure MediaPipe Face Mesh for real-time video processing on Android. It sets up options like GPU usage and maximum faces, integrates with a VideoInput source, renders results using a SolutionGlSurfaceView, and includes logic for selecting a video from the device gallery. SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_mesh.md#_snippet_10 LANGUAGE: java CODE: ``` // For video input and result rendering with OpenGL. 
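// Streaming configuration for video frames: non-static (tracking) mode, iris-refined
// landmarks, a single face, and GPU inference for both the pipeline and the model.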
FaceMeshOptions faceMeshOptions = FaceMeshOptions.builder() .setStaticImageMode(false) .setRefineLandmarks(true) .setMaxNumFaces(1) .setRunOnGpu(true).build(); FaceMesh faceMesh = new FaceMesh(this, faceMeshOptions); faceMesh.setErrorListener( (message, e) -> Log.e(TAG, "MediaPipe Face Mesh error:" + message)); // Initializes a new VideoInput instance and connects it to MediaPipe Face Mesh Solution. VideoInput videoInput = new VideoInput(this); videoInput.setNewFrameListener( textureFrame -> faceMesh.send(textureFrame)); // Initializes a new GlSurfaceView with a ResultGlRenderer instance // that provides the interfaces to run user-defined OpenGL rendering code. // See mediapipe/examples/android/solutions/facemesh/src/main/java/com/google/mediapipe/examples/facemesh/FaceMeshResultGlRenderer.java // as an example. SolutionGlSurfaceView glSurfaceView = new SolutionGlSurfaceView<>( this, faceMesh.getGlContext(), faceMesh.getGlMajorVersion()); glSurfaceView.setSolutionResultRenderer(new FaceMeshResultGlRenderer()); glSurfaceView.setRenderInputImage(true); faceMesh.setResultListener( faceMeshResult -> { NormalizedLandmark noseLandmark = faceMeshResult.multiFaceLandmarks().get(0).getLandmarkList().get(1); Log.i( TAG, String.format( "MediaPipe Face Mesh nose normalized coordinates (value range: [0, 1]): x=%f, y=%f", noseLandmark.getX(), noseLandmark.getY())); // Request GL rendering. glSurfaceView.setRenderData(faceMeshResult); glSurfaceView.requestRender(); }); ActivityResultLauncher videoGetter = registerForActivityResult( new ActivityResultContracts.StartActivityForResult(), result -> { Intent resultIntent = result.getData(); if (resultIntent != null) { if (result.getResultCode() == RESULT_OK) { glSurfaceView.post( () -> videoInput.start( this, resultIntent.getData(), faceMesh.getGlContext(), glSurfaceView.getWidth(), glSurfaceView.getHeight())); } } }); Intent pickVideoIntent = new Intent(Intent.ACTION_PICK); pickVideoIntent.setDataAndType(MediaStore.Video.Media.INTERNAL_CONTENT_URI, "video/*"); videoGetter.launch(pickVideoIntent); ``` ---------------------------------------- TITLE: MediaPipe Face Detection Subgraph API Reference DESCRIPTION: Details the different MediaPipe subgraphs designed for face detection, specifying their effective range and the hardware (CPU or GPU) used for processing and inference. Each subgraph is optimized for specific use cases. SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/mediapipe/modules/face_detection/README.md#_snippet_0 LANGUAGE: APIDOC CODE: ``` FaceDetectionFullRangeCpu: - Description: Detects faces. Works best for faces within 5 meters from the camera. - Processing: CPU input, and inference is executed on CPU. - Reference: https://github.com/google-ai-edge/mediapipe/tree/master/mediapipe/modules/face_detection/face_detection_full_range_cpu.pbtxt FaceDetectionFullRangeGpu: - Description: Detects faces. Works best for faces within 5 meters from the camera. - Processing: GPU input, and inference is executed on GPU. - Reference: https://github.com/google-ai-edge/mediapipe/tree/master/mediapipe/modules/face_detection/face_detection_full_range_gpu.pbtxt FaceDetectionShortRangeCpu: - Description: Detects faces. Works best for faces within 2 meters from the camera. - Processing: CPU input, and inference is executed on CPU. - Reference: https://github.com/google-ai-edge/mediapipe/tree/master/mediapipe/modules/face_detection/face_detection_short_range_cpu.pbtxt FaceDetectionShortRangeGpu: - Description: Detects faces.
Works best for faces within 2 meters from the camera. - Processing: GPU input, and inference is executed on GPU. - Reference: https://github.com/google-ai-edge/mediapipe/tree/master/mediapipe/modules/face_detection/face_detection_short_range_gpu.pbtxt ``` ---------------------------------------- TITLE: Configure MediaPipe Face Effect for Mobile GPU DESCRIPTION: This configuration outlines the graph and build targets for the real-time mobile face effect application using MediaPipe. It includes the GPU graph definition and specific build targets for Android and iOS platforms, designed for single-face detection to enhance user experience. SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/__wiki__/MediaPipe-Face-Mesh.md#_snippet_2 LANGUAGE: APIDOC CODE: ``` Graph: mediapipe/graphs/face_effect/face_effect_gpu.pbtxt Android target: mediapipe/examples/android/src/java/com/google/mediapipe/apps/faceeffect iOS target: mediapipe/examples/ios/faceeffect ``` ---------------------------------------- TITLE: Initialize MediaPipe Face Detection for Android Video Input DESCRIPTION: This Java code demonstrates the setup of MediaPipe's Face Detection solution for processing video input on Android. It initializes the FaceDetection model, configures a VideoInput to feed frames, sets up a GlSurfaceView for OpenGL rendering of results, and includes an ActivityResultLauncher to pick video files from the device. SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_detection.md#_snippet_8 LANGUAGE: java CODE: ``` // For video input and result rendering with OpenGL. FaceDetectionOptions faceDetectionOptions = FaceDetectionOptions.builder() .setStaticImageMode(false) .setModelSelection(0).build(); FaceDetection faceDetection = new FaceDetection(this, faceDetectionOptions); faceDetection.setErrorListener( (message, e) -> Log.e(TAG, "MediaPipe Face Detection error:" + message)); // Initializes a new VideoInput instance and connects it to MediaPipe Face Detection Solution. VideoInput videoInput = new VideoInput(this); videoInput.setNewFrameListener( textureFrame -> faceDetection.send(textureFrame)); // Initializes a new GlSurfaceView with a ResultGlRenderer instance // that provides the interfaces to run user-defined OpenGL rendering code. // See mediapipe/examples/android/solutions/facedetection/src/main/java/com/google/mediapipe/examples/facedetection/FaceDetectionResultGlRenderer.java // as an example. SolutionGlSurfaceView glSurfaceView = new SolutionGlSurfaceView<>( this, faceDetection.getGlContext(), faceDetection.getGlMajorVersion()); glSurfaceView.setSolutionResultRenderer(new FaceDetectionResultGlRenderer()); glSurfaceView.setRenderInputImage(true); faceDetection.setResultListener( faceDetectionResult -> { if (faceDetectionResult.multiFaceDetections().isEmpty()) { return; } RelativeKeypoint noseTip = faceDetectionResult .multiFaceDetections() .get(0) .getLocationData() .getRelativeKeypoints(FaceKeypoint.NOSE_TIP); Log.i( TAG, String.format( "MediaPipe Face Detection nose tip normalized coordinates (value range: [0, 1]): x=%f, y=%f", noseTip.getX(), noseTip.getY())); // Request GL rendering. 
glSurfaceView.setRenderData(faceDetectionResult); glSurfaceView.requestRender(); }); ActivityResultLauncher videoGetter = registerForActivityResult( new ActivityResultContracts.StartActivityForResult(), result -> { Intent resultIntent = result.getData(); if (resultIntent != null) { if (result.getResultCode() == RESULT_OK) { glSurfaceView.post( () -> videoInput.start( this, resultIntent.getData(), faceDetection.getGlContext(), glSurfaceView.getWidth(), glSurfaceView.getHeight())); } } }); Intent pickVideoIntent = new Intent(Intent.ACTION_PICK); pickVideoIntent.setDataAndType(MediaStore.Video.Media.INTERNAL_CONTENT_URI, "video/*"); videoGetter.launch(pickVideoIntent); ``` ---------------------------------------- TITLE: Process Static Images with MediaPipe Face Mesh on Android DESCRIPTION: This snippet illustrates how to use MediaPipe Face Mesh to process static images from the Android gallery. It configures FaceMesh for static image mode, connects it to a custom ImageView for drawing results, and uses ActivityResultLauncher to pick an image, sending the bitmap to FaceMesh for processing and displaying landmark coordinates. SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_mesh.md#_snippet_9 LANGUAGE: java CODE: ``` // For reading images from gallery and drawing the output in an ImageView. FaceMeshOptions faceMeshOptions = FaceMeshOptions.builder() .setStaticImageMode(true) .setRefineLandmarks(true) .setMaxNumFaces(1) .setRunOnGpu(true).build(); FaceMesh faceMesh = new FaceMesh(this, faceMeshOptions); // Connects MediaPipe Face Mesh Solution to the user-defined ImageView instance // that allows users to have the custom drawing of the output landmarks on it. // See mediapipe/examples/android/solutions/facemesh/src/main/java/com/google/mediapipe/examples/facemesh/FaceMeshResultImageView.java // as an example. FaceMeshResultImageView imageView = new FaceMeshResultImageView(this); faceMesh.setResultListener( faceMeshResult -> { int width = faceMeshResult.inputBitmap().getWidth(); int height = faceMeshResult.inputBitmap().getHeight(); NormalizedLandmark noseLandmark = faceMeshResult.multiFaceLandmarks().get(0).getLandmarkList().get(1); Log.i( TAG, String.format( "MediaPipe Face Mesh nose coordinates (pixel values): x=%f, y=%f", noseLandmark.getX() * width, noseLandmark.getY() * height)); // Request canvas drawing. imageView.setFaceMeshResult(faceMeshResult); runOnUiThread(() -> imageView.update()); }); faceMesh.setErrorListener( (message, e) -> Log.e(TAG, "MediaPipe Face Mesh error:" + message)); // ActivityResultLauncher to get an image from the gallery as Bitmap. ActivityResultLauncher imageGetter = registerForActivityResult( new ActivityResultContracts.StartActivityForResult(), result -> { Intent resultIntent = result.getData(); if (resultIntent != null && result.getResultCode() == RESULT_OK) { Bitmap bitmap = null; try { bitmap = MediaStore.Images.Media.getBitmap( this.getContentResolver(), resultIntent.getData()); // Please also rotate the Bitmap based on its orientation.
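// (Orientation is typically available from the image's EXIF metadata; an upright Bitmap
// gives the best detection results and keeps the drawn landmarks aligned with the display.)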
} catch (IOException e) { Log.e(TAG, "Bitmap reading error:" + e); } if (bitmap != null) { faceMesh.send(bitmap); } } }); Intent pickImageIntent = new Intent(Intent.ACTION_PICK); pickImageIntent.setDataAndType(MediaStore.Images.Media.INTERNAL_CONTENT_URI, "image/*"); imageGetter.launch(pickImageIntent); ``` ---------------------------------------- TITLE: HTML Structure for MediaPipe Face Mesh Web Application DESCRIPTION: This HTML snippet sets up the basic page structure for a MediaPipe Face Mesh web application. It includes necessary MediaPipe library imports (camera_utils, control_utils, drawing_utils, face_mesh) from CDN and defines video and canvas elements for input and output display. SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_mesh.md#_snippet_4 LANGUAGE: HTML CODE: ```
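<!-- The snippet body was not captured; a minimal hedged sketch based on the description above,
     assuming the same jsdelivr CDN packages and element class names as the Face Detection page. -->
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <script src="https://cdn.jsdelivr.net/npm/@mediapipe/camera_utils/camera_utils.js" crossorigin="anonymous"></script>
    <script src="https://cdn.jsdelivr.net/npm/@mediapipe/control_utils/control_utils.js" crossorigin="anonymous"></script>
    <script src="https://cdn.jsdelivr.net/npm/@mediapipe/drawing_utils/drawing_utils.js" crossorigin="anonymous"></script>
    <script src="https://cdn.jsdelivr.net/npm/@mediapipe/face_mesh/face_mesh.js" crossorigin="anonymous"></script>
  </head>
  <body>
    <div class="container">
      <!-- Webcam input element and the canvas on which the face mesh is drawn. -->
      <video class="input_video"></video>
      <canvas class="output_canvas" width="1280px" height="720px"></canvas>
    </div>
  </body>
</html>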
```
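---------------------------------------- TITLE: Hedged Sketch: MediaPipe Face Mesh Webcam Integration in JavaScript DESCRIPTION: The code bodies for "Implement Real-time Face Detection in JavaScript with MediaPipe" and "JavaScript MediaPipe Face Mesh Web Camera Integration" above were captured empty. The sketch below reconstructs the Face Mesh variant from those descriptions; it assumes the FaceMesh, Camera, drawConnectors, and FACEMESH_* globals exported by the CDN scripts referenced in the HTML snippets, so exact names and option values may differ from the official example. The Face Detection variant is analogous, using a FaceDetection instance and iterating results.detections. LANGUAGE: JavaScript CODE: ```
const videoElement = document.getElementsByClassName('input_video')[0];
const canvasElement = document.getElementsByClassName('output_canvas')[0];
const canvasCtx = canvasElement.getContext('2d');

// Draw the camera frame plus mesh annotations for every detected face.
function onResults(results) {
  canvasCtx.save();
  canvasCtx.clearRect(0, 0, canvasElement.width, canvasElement.height);
  canvasCtx.drawImage(results.image, 0, 0, canvasElement.width, canvasElement.height);
  if (results.multiFaceLandmarks) {
    for (const landmarks of results.multiFaceLandmarks) {
      drawConnectors(canvasCtx, landmarks, FACEMESH_TESSELATION,
                     {color: '#C0C0C070', lineWidth: 1});
      drawConnectors(canvasCtx, landmarks, FACEMESH_FACE_OVAL,
                     {color: '#E0E0E0', lineWidth: 2});
      drawConnectors(canvasCtx, landmarks, FACEMESH_LIPS,
                     {color: '#E0E0E0', lineWidth: 2});
    }
  }
  canvasCtx.restore();
}

// locateFile resolves the solution's WASM and model assets relative to the CDN package.
const faceMesh = new FaceMesh({locateFile: (file) =>
    `https://cdn.jsdelivr.net/npm/@mediapipe/face_mesh/${file}`});
faceMesh.setOptions({
  maxNumFaces: 1,
  refineLandmarks: true,
  minDetectionConfidence: 0.5,
  minTrackingConfidence: 0.5
});
faceMesh.onResults(onResults);

// Pump webcam frames from the <video> element into the solution.
const camera = new Camera(videoElement, {
  onFrame: async () => {
    await faceMesh.send({image: videoElement});
  },
  width: 1280,
  height: 720
});
camera.start();
```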