Camera-API2分析
一. android6.0源码分析之Camera API2.0简介
前面几篇主要分析的是Android Camera API1.0的架构以及初始化流程,而google从android5.0(Lollipop)开始对Camera的架构进行了调整,为了适应HAL3,新增实现了CameraDeviceClient,而Camera API1.0已经被deprecated(即在更新的版本里可能不再支持此API)。
1、Camera API2.0的架构图
Camera API2.0下的Camera架构与API1.0有所区别,下面将给出Camera API2.0以及Camera HAL3.2+下的Camera的总体架构图:
由图可知,Java层要想与C++层的CameraService进行通信,都是通过Java层的IPC Binder机制进行的,主要通过ICameraService以及ICameraDeviceUser两个接口来实现,其会在Java层维护一个CameraDeviceImpl即Camera设备的代理,而CameraService以及CameraDeviceImpl的初始化会在此文的第二、第三节进行分析。Java层对Camera的具体操作流程大致为:Java层通过Device代理发送一个CaptureRequest,C++层进行相应的处理,再调用相应的回调来通知Java层处理结果,并将相应的Capture数据保存在Surface Buffer里,这样Java层在回调函数中就可以对数据进行相应的处理。对于具体操作流程的分析,请参考文章开始时Camera2相关文章的链接。
2、Java层的CameraService的实现和应用
从Camera API2开始,Camera的实现方式有所不同:最主要的区别是不再通过JNI调用本地代码来获得本地CameraService并实现其C/S模式的通信,而是直接在Java层通过IPC Binder机制获取Java层的CameraService代理对象,从而直接在Java层与本地的CameraService以及Camera Device进行通信。
相应的代码及目录:
:frameworks/base/core/java/android/hardware
:frameworks/av/services/camera/libcameraservice
:frameworks/base/core/java/android/hardware/camera2
获取CameraService的核心代码如下:
//
private void connectCameraServiceLocked(){
    if(mCameraService != null) return;
    //获取Binder
    IBinder cameraServiceBinder = ServiceManager.getService(CAMERA_SERVICE_BINDER_NAME);
    if(cameraServiceBinder == null){
        return;
    }
    try{
        cameraServiceBinder.linkToDeath(this,/*flags*/ 0);
    }catch(RemoteException e){
        return;
    }
    ICameraService cameraServiceRaw = ICameraService.Stub.asInterface(cameraServiceBinder);
    //根据cameraServiceRaw创建CameraService实例(装饰器会把错误码翻译成异常)
    ICameraService cameraService = CameraServiceBinderDecorator.newInstance(cameraServiceRaw);
    ...
    try{
        //添加监听
        cameraService.addListener(this);
        //赋值给全局变量mCameraService
        mCameraService = cameraService;
    }catch(CameraRuntimeException e){
        ...
    }
}
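上面代码中,CameraServiceBinderDecorator.newInstance本质上是用装饰器把"返回错误码"的接口包装成"抛异常"的接口,调用方不必逐个检查返回值。下面用一个与Android无关的最小Java示例示意这一模式(其中的接口名、错误码均为本文虚构的假设,并非真实框架API):

```java
// 一个以错误码作为返回值的"远程"接口(示例假设)
interface RawService {
    int connect(String clientName); // 返回0表示成功,负数表示错误码
}

// 装饰器:集中做"错误码 -> 异常"的翻译
class ThrowingServiceDecorator implements RawService {
    private final RawService mRaw;

    ThrowingServiceDecorator(RawService raw) {
        mRaw = raw;
    }

    @Override
    public int connect(String clientName) {
        int code = mRaw.connect(clientName);
        if (code < 0) {
            // 与CameraServiceBinderDecorator的思路相同
            throw new IllegalStateException("connect failed, code=" + code);
        }
        return code;
    }
}

public class DecoratorDemo {
    public static RawService wrap(RawService raw) {
        return new ThrowingServiceDecorator(raw);
    }

    public static void main(String[] args) {
        RawService ok = wrap(name -> 0);
        System.out.println(ok.connect("camera-client")); // 成功路径,输出0
        RawService bad = wrap(name -> -13);
        try {
            bad.connect("camera-client");
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

这样上层代码只需要catch一种异常,而不必关心底层Binder调用具体的错误码取值。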
由代码可知,这里通过Java层的Binder从ServiceManager里获取了一个Java层的CameraService代理。在打开Camera的流程中,会通过此CameraService(其服务端为Native的CameraService)与Camera通信,而其中与设备相关的通信通过ICameraDeviceUser来实现,接下来分析ICameraDeviceUser的实现。
3、ICameraDeviceUser的通信实现
Java层与C++ CameraService层之间的通信,通过封装一个ICameraDeviceUser来实现:在Java层使用AIDL技术实现Client端,即在Java层维护一个CameraDevice代理。这样的好处是响应更快,因为不需要每次都进入Native层来完成通信,通过Java层的IPC Binder机制即可。即API2.0通过AIDL定义接口ICameraDeviceUser,在Java层维护一个Camera proxy,之后的通信都通过此代理CameraDeviceImpl来完成。
相关代码及目录:
:frameworks/base/core/java/android/hardware/camera2
:frameworks/av/camera/camera2
:frameworks/base/core/java/android/hardware/camera2/impl
获取Camera Device的Java层代理的核心代码如下:
//
private CameraDevice openCameraDeviceUserAsync(...){
    //初始化Camera Java层代理对象
    CameraDevice device = null;
    try{
        synchronized(mLock){
            //初始化ICameraDeviceUser
            ICameraDeviceUser cameraUser = null;
            //初始化具体的CameraDevice代理
            android.hardware.camera2.impl.CameraDeviceImpl deviceImpl =
                    new android.hardware.camera2.impl.CameraDeviceImpl(
                            cameraId, callback, handler, characteristics);
            BinderHolder holder = new BinderHolder();
            ICameraDeviceCallbacks callbacks = deviceImpl.getCallbacks();
            ...
            try{
                //如果支持HAL3.2+的devices
                if(supportsCamera2ApiLocked(cameraId)){
                    //获取CameraService
                    ICameraService cameraService = CameraManagerGlobal.get().getCameraService();
                    ...
                    //连接设备
                    cameraService.connectDevice(callbacks, id,
                            mContext.getOpPackageName(), USE_CALLING_UID, holder);
                    //通过Binder获得打开的Camera设备返回的Camera代理
                    cameraUser = ICameraDeviceUser.Stub.asInterface(holder.getBinder());
                }else{//否则使用legacy API
                    cameraUser = CameraDeviceUserShim.connectBinderShim(callbacks, id);
                }
            }catch(...){
                ...
            }
            //包装代理对象
            deviceImpl.setRemoteDevice(cameraUser);
            device = deviceImpl;
        }
    }catch(...){
        ...
    }
    //返回Camera代理
    return device;
}
由代码可知,首先获取CameraService,然后通过它来开启Camera。开启成功后,C++层会返回一个Camera device代理对象,此处即为ICameraDeviceUser,Java层对其进行封装,得到一个CameraDeviceImpl对象。此后,只要需要对Camera进行操作,都会调用CameraDeviceImpl对象的相关方法,并通过ICameraDeviceUser以及Java层的IPC Binder与本地的Camera device进行通信。至此,Camera API2.0的框架就分析结束了;具体的操作,如Camera的初始化、preview、capture等流程的分析,请看文章开始时所列出的分析链接。
二. android6.0源码分析之Camera2 HAL分析
在上一篇文章对Camera API2.0的框架进行了简单的介绍,其中Camera HAL屏蔽了底层的实现细节,并且为上层提供了相应的接口,具体的HAL的原理,个人觉得老罗的文章Android硬件抽象层(HAL)概要介绍和学习计划分析的很详细,这里不做分析,本文将只分析Camera
HAL的初始化等相关流程。
1、Camera HAL的初始化
Camera HAL的初始加载是在Native的CameraService初始化流程中的,而CameraService初始化是在Main_的main方法开始的:
//Main_
int main(int argc __unused, char** argv){
    ...
    sp<ProcessState> proc(ProcessState::self());
    //获取ServiceManager
    sp<IServiceManager> sm = defaultServiceManager();
    ALOGI("ServiceManager: %p", sm.get());
    AudioFlinger::instantiate();
    //初始化media服务
    MediaPlayerService::instantiate();
    //初始化资源管理服务
    ResourceManagerService::instantiate();
    //初始化Camera服务
    CameraService::instantiate();
    //初始化音频策略服务
    AudioPolicyService::instantiate();
    SoundTriggerHwService::instantiate();
    //初始化Radio服务
    RadioService::instantiate();
    registerExtensions();
    //开始线程池
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
}
其中,CameraService继承自BinderService,instantiate也是在BinderService中定义的,此方法就是调用publish方法,所以来看publish方法:
// BinderService.h
static status_t publish(bool allowIsolated = false) {
    sp<IServiceManager> sm(defaultServiceManager());
    //将服务添加到ServiceManager
    return sm->addService(String16(SERVICE::getServiceName()),
            new SERVICE(), allowIsolated);
}
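ServiceManager的addService/getService本质上维护的是一张"服务名字到服务句柄"的表:服务进程注册,客户端按同一个名字查找。下面用纯Java勾勒这一注册-查找模式(类名为本文假设,并非Android的真实实现):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// 极简的"ServiceManager":按名字注册与查找服务对象
public class MiniServiceManager {
    private static final Map<String, Object> SERVICES = new ConcurrentHashMap<>();

    // 对应 sm->addService(name, service)
    public static void addService(String name, Object service) {
        SERVICES.put(name, service);
    }

    // 对应 getService:查不到时返回null,
    // 调用方需判空(正如connectCameraServiceLocked对Binder判空)
    public static Object getService(String name) {
        return SERVICES.get(name);
    }
}
```

CameraService::instantiate()走的就是这条路:构造服务对象并以固定名字注册,Java层再凭同一个名字取回Binder句柄。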
这里,将会把CameraService服务加入到ServiceManager进行管理。
而在前面的文章android6.0源码分析之Camera API2.0简介中,需要通过Java层的IPC Binder来获取此CameraService对象,在此过程中会初始化CameraService的sp(强指针)对象。对于sp,此处不做过多分析,具体可以查看《深入理解Android 卷Ⅰ》第五章的相关内容。当CameraService第一次被sp强引用时,会调用其onFirstRef方法:
//
void CameraService::onFirstRef()
{
BnCameraService::onFirstRef();
...
camera_module_t *rawModule;
//根据CAMERA_HARDWARE_MODULE_ID(字符串camera)来获取camera_module_t对象
int err = hw_get_module(CAMERA_HARDWARE_MODULE_ID,
(const hw_module_t **)&rawModule);
//创建CameraModule对象
mModule = new CameraModule(rawModule);
//模块初始化
err = mModule->init();
...
//通过Module获取Camera的数量
mNumberOfCameras = mModule->getNumberOfCameras();
mNumberOfNormalCameras = mNumberOfCameras;
//初始化闪光灯
mFlashlight = new CameraFlashlight(*mModule, *this);
status_t res = mFlashlight->findFlashUnits();
int latestStrangeCameraId = INT_MAX;
for (int i = 0; i < mNumberOfCameras; i++) {
//初始化CameraID
String8 cameraId = String8::format("%d", i);
struct camera_info info;
bool haveInfo = true;
//获取Camera信息
status_t rc = mModule->getCameraInfo(i, &info);
...
//如果Module版本高于2.4,读取资源开销以及冲突的设备参数
if (mModule->getModuleApiVersion() >= CAMERA_MODULE_API_VERSION_2_4 && haveInfo) {
    cost = info.resource_cost;
    conflicting_devices = info.conflicting_devices;
    conflicting_devices_length = info.conflicting_devices_length;
}
//将冲突设备加入冲突set集中
std::set<String8> conflicting;
for (size_t i = 0; i < conflicting_devices_length; i++) {
    conflicting.emplace(String8(conflicting_devices[i]));
}
...
}
//如果Module的API大于2.1,则设置回调
if (mModule->getModuleApiVersion() >= CAMERA_MODULE_API_VERSION_2_1) {
mModule->setCallbacks(this);
}
//若大于2.2,则设置供应商的Tag
if (mModule->getModuleApiVersion() >= CAMERA_MODULE_API_VERSION_2_2) {
setUpVendorTags();
}
//将此服务注册到CameraDeviceFactory
CameraDeviceFactory::registerService(this);
CameraService::pingCameraServiceProxy();
}
onFirstRef方法中,首先会通过HAL框架的hw_get_module来获取CameraModule对象,然后会对其进行相应的初始化,并会进行一些参数的设置,如camera的数量,闪光灯的初始化,以及回调函数的设置等,到这里,Camera2 HAL的模块就初始化结束了,下面给出初始化时序图:
2、Camera HAL的open流程分析
通过阅读android6.0源码发现,它提供了高通的Camera实现,并且提供了高通的Camera库,也实现了高通Camera HAL的相应接口。对于高通的Camera,它在后台会有一个守护进程daemon,daemon是介于应用和驱动之间翻译ioctl的中间层(委托处理)。本节将以Camera中的open流程为例,来分析Camera HAL的工作过程。在应用对硬件发出open请求后,会通过Camera HAL来发起open请求,而Camera HAL的open入口定义如下:
//
camera_module_t HAL_MODULE_INFO_SYM = {
//它里面包含模块的公共方法信息
common: camera_common,
get_number_of_cameras:
qcamera::QCamera2Factory::get_number_of_cameras,
get_camera_info: qcamera::QCamera2Factory::get_camera_info,
set_callbacks: qcamera::QCamera2Factory::set_callbacks,
get_vendor_tag_ops:
qcamera::QCamera3VendorTags::get_vendor_tag_ops,
open_legacy: qcamera::QCamera2Factory::open_legacy,
set_torch_mode: NULL,
init : NULL,
reserved: {0}
};
static hw_module_t camera_common = {
tag: HARDWARE_MODULE_TAG,
module_api_version: CAMERA_MODULE_API_VERSION_2_3,
hal_api_version: HARDWARE_HAL_API_VERSION,
id: CAMERA_HARDWARE_MODULE_ID,
name: "QCamera Module",
author: "Qualcomm Innovation Center Inc",
//它的方法数组里绑定了open接口
methods: &qcamera::QCamera2Factory::mModuleMethods,
dso: NULL,
reserved: {0}
};
struct hw_module_methods_t QCamera2Factory::mModuleMethods = {
//open方法的绑定
open: QCamera2Factory::camera_device_open,
};
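可以看到,HAL通过hw_module_methods_t里的函数指针把open入口绑定到QCamera2Factory::camera_device_open,框架侧只认方法表而不关心具体实现。在Java里可以用函数式接口近似地示意这种"结构体 + 函数指针"的绑定(以下名称均为示例假设):

```java
// 模拟 hw_module_methods_t:只有一个open入口
interface ModuleMethods {
    int open(String deviceId);
}

// 模拟模块描述结构体:公共信息 + 方法表
class HwModule {
    final String id;
    final ModuleMethods methods;

    HwModule(String id, ModuleMethods methods) {
        this.id = id;
        this.methods = methods;
    }
}

public class ModuleTableDemo {
    // 模拟 QCamera2Factory::camera_device_open 这样的静态入口
    static int cameraDeviceOpen(String deviceId) {
        return deviceId.isEmpty() ? -1 : 0;
    }

    // 模拟 HAL_MODULE_INFO_SYM:在"结构体"里完成绑定
    static final HwModule CAMERA_MODULE =
            new HwModule("camera", ModuleTableDemo::cameraDeviceOpen);

    public static void main(String[] args) {
        // 调用方通过方法表间接调用,而不直接引用实现
        System.out.println(CAMERA_MODULE.methods.open("0"));
    }
}
```

这也解释了hw_get_module为什么只需按ID找到模块符号,就能拿到完整的操作入口。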
Camera HAL层的open入口其实就是camera_device_open方法:
//
int QCamera2Factory::camera_device_open(const struct hw_module_t
*module, const char *id,
struct hw_device_t **hw_device){
...
return gQCamera2Factory->cameraDeviceOpen(atoi(id), hw_device);
}
它调用了cameraDeviceOpen方法,而其中的hw_device就是最后要返回给应用层的CameraDeviceImpl在Camera HAL层的对象,继续分析cameraDeviceOpen方法:
//
int QCamera2Factory::cameraDeviceOpen(int camera_id, struct
hw_device_t **hw_device){
...
//Camera2采用的Camera HAL版本为HAL3.0
if ( mHalDescriptors[camera_id].device_version ==
CAMERA_DEVICE_API_VERSION_3_0 ) {
//初始化QCamera3HardwareInterface对象,这里构造函数里将会进行configure_streams以及
//process_capture_result等的绑定
QCamera3HardwareInterface *hw = new
QCamera3HardwareInterface(
mHalDescriptors[camera_id].cameraId, mCallbacks);
//通过QCamera3HardwareInterface来打开Camera
rc = hw->openCamera(hw_device);
...
} else if (mHalDescriptors[camera_id].device_version ==
CAMERA_DEVICE_API_VERSION_1_0) {
//HAL device版本为1.0,使用老的QCamera2HardwareInterface
QCamera2HardwareInterface *hw = new
QCamera2HardwareInterface((uint32_t)camera_id);
rc = hw->openCamera(hw_device);
...
} else {
...
}
return rc;
}
此方法有两个关键点:一个是QCamera3HardwareInterface对象的创建,它是用户空间与内核空间进行交互的接口;另一个是调用它的openCamera方法来打开Camera,下面将分别进行分析。
2.1 QCamera3HardwareInterface构造函数分析
在它的构造函数里面有一个关键的初始化,即 mCameraDevice.ops = &mCameraOps,它会定义Device操作的接口:
//
camera3_device_ops_t QCamera3HardwareInterface::mCameraOps = {
initialize:
QCamera3HardwareInterface::initialize,
//配置流数据的相关处理
configure_streams:
QCamera3HardwareInterface::configure_streams,
register_stream_buffers: NULL,
construct_default_request_settings:
QCamera3HardwareInterface::construct_default_request_settings,
//处理结果的接口
process_capture_request:
QCamera3HardwareInterface::process_capture_request,
get_metadata_vendor_tag_ops: NULL,
dump:
QCamera3HardwareInterface::dump,
flush:
QCamera3HardwareInterface::flush,
reserved: {0},
};
其中,会在configure_streams中配置好流的处理handle:
//
int QCamera3HardwareInterface::configure_streams(const struct
camera3_device *device,
camera3_stream_configuration_t *stream_list){
//获得QCamera3HardwareInterface对象
QCamera3HardwareInterface *hw =
        reinterpret_cast<QCamera3HardwareInterface *>(device->priv);
...
//调用它的configureStreams进行配置
int rc = hw->configureStreams(stream_list);
..
return rc;
}
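configure_streams是一个静态入口:它先从device里找回(reinterpret_cast)对象指针,再转调成员方法。这种"C回调 + 不透明指针找回对象"的模式可以用Java粗略示意(以下类型与方法名均为演示假设):

```java
// 模拟 camera3_device:携带一个不透明的 priv 引用
class DeviceStruct {
    Object priv; // HAL在创建设备时把接口对象塞进来
}

class HardwareInterface {
    int configuredStreams = 0;

    int configureStreams(int numStreams) {
        configuredStreams = numStreams;
        return 0;
    }
}

public class TrampolineDemo {
    // 模拟静态的 configure_streams 入口:先从priv取回对象,再转调成员方法
    static int configure_streams(DeviceStruct device, int numStreams) {
        HardwareInterface hw = (HardwareInterface) device.priv;
        return hw.configureStreams(numStreams);
    }

    public static void main(String[] args) {
        HardwareInterface hw = new HardwareInterface();
        DeviceStruct dev = new DeviceStruct();
        dev.priv = hw; // 对应构造 QCamera3HardwareInterface 时的绑定
        System.out.println(configure_streams(dev, 3)); // 0
        System.out.println(hw.configuredStreams);       // 3
    }
}
```

mCameraOps表里其余的静态入口(process_capture_request、flush等)也都遵循同样的转调套路。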
继续追踪configureStreams方法:
//
int
QCamera3HardwareInterface::configureStreams(camera3_stream_configuration_t *streamList){
...
//初始化Camera版本
al_version = CAM_HAL_V3;
...
//开始配置stream
...
//初始化相关Channel为NULL
if (mMetadataChannel) {
delete mMetadataChannel;
mMetadataChannel = NULL;
}
if (mSupportChannel) {
delete mSupportChannel;
mSupportChannel = NULL;
}
if (mAnalysisChannel) {
delete mAnalysisChannel;
mAnalysisChannel = NULL;
}
//创建Metadata Channel,并对其进行初始化
mMetadataChannel = new
QCamera3MetadataChannel(mCameraHandle->camera_handle,
mCameraHandle->ops,
captureResultCb,&gCamCapability[mCameraId]->padding_info,
CAM_QCOM_FEATURE_NONE, this);
...
//初始化
rc = mMetadataChannel->initialize(IS_TYPE_NONE);
...
//如果h/w support可用,则创建分析stream的Channel
if (gCamCapability[mCameraId]->hw_analysis_supported) {
mAnalysisChannel = new
QCamera3SupportChannel(mCameraHandle->camera_handle,
mCameraHandle->ops,&gCamCapability[mCameraId]->padding_info,
CAM_QCOM_FEATURE_PP_SUPERSET_HAL3,CAM_STREAM_TYPE_ANALYSIS,
&gCamCapability[mCameraId]->analysis_recommended_res,this);
...
}
bool isRawStreamRequested = false;
//清空stream配置信息
memset(&mStreamConfigInfo, 0, sizeof(cam_stream_size_info_t));
//为requested stream创建相应的channel
...
//初始化Camera相关的channel对象
for (size_t i = 0; i < streamList->num_streams; i++) {
    camera3_stream_t *newStream = streamList->streams[i];
    uint32_t stream_usage = newStream->usage;
    mStreamConfigInfo.stream_sizes[mStreamConfigInfo.num_streams].width =
            (int32_t)newStream->width;
    mStreamConfigInfo.stream_sizes[mStreamConfigInfo.num_streams].height =
            (int32_t)newStream->height;
    if ((newStream->stream_type == CAMERA3_STREAM_BIDIRECTIONAL ||
            newStream->usage & GRALLOC_USAGE_HW_CAMERA_ZSL) &&
            newStream->format == HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED &&
            jpegStream){
        mStreamConfigInfo.type[mStreamConfigInfo.num_streams] =
                CAM_STREAM_TYPE_SNAPSHOT;
        mStreamConfigInfo.postprocess_mask[mStreamConfigInfo.num_streams] =
                CAM_QCOM_FEATURE_NONE;
} else if(newStream->stream_type == CAMERA3_STREAM_INPUT) {
} else {
switch (newStream->format) {
//为非zsl streams查找他们的format
...
}
}
if (newStream->priv == NULL) {
//为新的stream构造Channel
switch (newStream->stream_type) {//分类型构造
case CAMERA3_STREAM_INPUT:
newStream->usage |= GRALLOC_USAGE_HW_CAMERA_READ;
newStream->usage |=
GRALLOC_USAGE_HW_CAMERA_WRITE;//WR for inplace algo's
break;
case CAMERA3_STREAM_BIDIRECTIONAL:
...
break;
case CAMERA3_STREAM_OUTPUT:
...
break;
default:
break;
}
//根据前面得到的stream的参数类型以及format分别对各类型的channel进行构造
if (newStream->stream_type == CAMERA3_STREAM_OUTPUT ||
newStream->stream_type ==
CAMERA3_STREAM_BIDIRECTIONAL) {
QCamera3Channel *channel = NULL;
switch (newStream->format) {
case HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED:
/* use higher number of buffers for HFR mode */
...
//创建Regular Channel
channel = new QCamera3RegularChannel(mCameraHandle->camera_handle,
        mCameraHandle->ops, captureResultCb,
        &gCamCapability[mCameraId]->padding_info, this, newStream,
        (cam_stream_type_t)mStreamConfigInfo.type[mStreamConfigInfo.num_streams],
        mStreamConfigInfo.postprocess_mask[mStreamConfigInfo.num_streams],
        mMetadataChannel, numBuffers);
...
newStream->max_buffers = channel->getNumBuffers();
newStream->priv = channel;
break;
case HAL_PIXEL_FORMAT_YCbCr_420_888:
//创建YUV Channel
...
break;
case HAL_PIXEL_FORMAT_RAW_OPAQUE:
case HAL_PIXEL_FORMAT_RAW16:
case HAL_PIXEL_FORMAT_RAW10:
//创建Raw Channel
...
break;
case HAL_PIXEL_FORMAT_BLOB:
//创建QCamera3PicChannel
...
break;
default:
break;
}
} else if (newStream->stream_type == CAMERA3_STREAM_INPUT)
{
newStream->max_buffers =
MAX_INFLIGHT_REPROCESS_REQUESTS;
} else {
}
for (List<stream_info_t*>::iterator it = mStreamInfo.begin();
        it != mStreamInfo.end(); it++) {
    if ((*it)->stream == newStream) {
        (*it)->channel = (QCamera3Channel*) newStream->priv;
        break;
    }
}
} else {
}
if (newStream->stream_type != CAMERA3_STREAM_INPUT)
    mStreamConfigInfo.num_streams++;
}
}
if (isZsl) {
if (mPictureChannel) {
mPictureChannel->overrideYuvSize(zslStream->width,
zslStream->height);
}
} else if (mPictureChannel && m_bIs4KVideo) {
mPictureChannel->overrideYuvSize(videoWidth, videoHeight);
}
//RAW DUMP channel
if (mEnableRawDump && isRawStreamRequested == false){
cam_dimension_t rawDumpSize;
rawDumpSize = getMaxRawSize(mCameraId);
mRawDumpChannel = new
QCamera3RawDumpChannel(mCameraHandle->camera_handle,
mCameraHandle->ops,rawDumpSize,&gCamCapability[mCameraId]->padding_info,
this, CAM_QCOM_FEATURE_NONE);
...
}
//进行相关Channel的配置
...
/* Initialize mPendingRequestsList and mPendingBuffersMap */
for (List<PendingRequestInfo>::iterator i = mPendingRequestsList.begin();
        i != mPendingRequestsList.end();) {
    clearInputBuffer(i->input_buffer);
    i = mPendingRequestsList.erase(i);
}
mPendingFrameDropList.clear();
// Initialize/Reset the pending buffers list
mPendingBuffersMap.num_buffers = 0;
mPendingBuffersMap.mPendingBufferList.clear();
mPendingReprocessResultList.clear();
return rc;
}
此方法内容比较多,只抽取其中核心的代码进行说明。它首先会根据HAL的版本对stream进行相应的配置初始化,然后根据stream类型为stream_list中的各个stream创建相应的Channel,主要有QCamera3MetadataChannel、QCamera3SupportChannel等,再进行相应的配置。其中QCamera3MetadataChannel在后面处理capture request的时候会用到,这里不做分析;而CameraMetadata则是Java层和CameraService之间传递的元数据,见android6.0源码分析之Camera API2.0简介中的Camera2架构图。至此,QCamera3HardwareInterface构造结束,与本文相关的主要就是stream的配置。
2.2 openCamera分析
本节主要分析Module是如何打开Camera的,openCamera的代码如下:
//
int QCamera3HardwareInterface::openCamera(struct hw_device_t **hw_device){
    int rc = 0;
    //如果Camera已经被打开,则此次返回的设备为NULL,并且打开结果为PERMISSION_DENIED
    if (mCameraOpened) {
        *hw_device = NULL;
        return PERMISSION_DENIED;
    }
    //调用无参的openCamera方法来打开
    rc = openCamera();
    //打开结果处理
    if (rc == 0) {
        //获取打开成功的hw_device_t对象
        *hw_device = &mCameraDevice.common;
    } else {
        *hw_device = NULL;
    }
    return rc;
}
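openCamera先检查mCameraOpened,已打开则直接返回PERMISSION_DENIED,避免重复打开同一设备。这一防重入检查可以抽象成如下Java小例(类名与错误码取值仅为演示假设):

```java
public class OpenGuardDemo {
    static final int OK = 0;
    static final int PERMISSION_DENIED = -13; // 示例取值,仅为演示

    private boolean mOpened = false;

    // 对应 QCamera3HardwareInterface::openCamera 的防重入检查:
    // 已打开则拒绝,否则置位并返回成功
    public int open() {
        if (mOpened) {
            return PERMISSION_DENIED;
        }
        mOpened = true;
        return OK;
    }

    public static void main(String[] args) {
        OpenGuardDemo cam = new OpenGuardDemo();
        System.out.println(cam.open()); // 第一次打开:0
        System.out.println(cam.open()); // 重复打开:-13
    }
}
```

真实实现里这个标志还配合锁与引用计数使用,这里只保留判断骨架。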
它调用了openCamera()方法来打开Camera:
//
int QCamera3HardwareInterface::openCamera()
{
...
//打开camera,获取mCameraHandle
mCameraHandle = camera_open((uint8_t)mCameraId);
...
mCameraOpened = true;
//注册mm-camera-interface里的事件处理,其中camEvtHandle为事件处理Handle
rc = mCameraHandle->ops->register_event_notify(mCameraHandle->camera_handle,
        camEvtHandle, (void *)this);
return NO_ERROR;
}
它调用camera_open方法来打开Camera,并且向CameraHandle注册了Camera事件处理的Handle,即camEvtHandle。首先分析camera_open方法,这里就进入高通的Camera实现了,Mm_camera_interface.c是高通提供的相关操作的接口,接下来分析其中的camera_open方法:
//Mm_camera_interface.c
mm_camera_vtbl_t * camera_open(uint8_t camera_idx)
{
int32_t rc = 0;
mm_camera_obj_t* cam_obj = NULL;
/* opened already 如果已经打开 */
if(NULL != g_cam_ctrl.cam_obj[camera_idx]) {
    /* Add reference */
    g_cam_ctrl.cam_obj[camera_idx]->ref_count++;
    pthread_mutex_unlock(&g_intf_lock);
    return &g_cam_ctrl.cam_obj[camera_idx]->vtbl;
}
cam_obj = (mm_camera_obj_t *)malloc(sizeof(mm_camera_obj_t));
...
/* initialize camera obj */
memset(cam_obj, 0, sizeof(mm_camera_obj_t));
cam_obj->ctrl_fd = -1;
cam_obj->ds_fd = -1;
cam_obj->ref_count++;
cam_obj->my_hdl = mm_camera_util_generate_handler(camera_idx);
cam_obj->vtbl.camera_handle = cam_obj->my_hdl; /* set handler */
//mm_camera_ops里绑定了相关的操作接口
cam_obj->vtbl.ops = &mm_camera_ops;
pthread_mutex_init(&cam_obj->cam_lock, NULL);
pthread_mutex_lock(&cam_obj->cam_lock);
pthread_mutex_unlock(&g_intf_lock);
//调用mm_camera_open方法来打开camera
rc = mm_camera_open(cam_obj);
pthread_mutex_lock(&g_intf_lock);
...
//结果处理,并返回
...
}
由代码可知,这里将会初始化一个mm_camera_obj_t对象,其中ds_fd为domain socket的fd,而mm_camera_ops则绑定了相关的操作接口,最后调用mm_camera_open来打开Camera。首先来看看mm_camera_ops绑定了哪些方法:
//Mm_camera_interface.c
static mm_camera_ops_t mm_camera_ops = {
.query_capability = mm_camera_intf_query_capability,
//注册事件通知的方法
.register_event_notify = mm_camera_intf_register_event_notify,
.close_camera = mm_camera_intf_close,
.set_parms = mm_camera_intf_set_parms,
.get_parms = mm_camera_intf_get_parms,
.do_auto_focus = mm_camera_intf_do_auto_focus,
.cancel_auto_focus = mm_camera_intf_cancel_auto_focus,
.prepare_snapshot = mm_camera_intf_prepare_snapshot,
.start_zsl_snapshot = mm_camera_intf_start_zsl_snapshot,
.stop_zsl_snapshot = mm_camera_intf_stop_zsl_snapshot,
.map_buf = mm_camera_intf_map_buf,
.unmap_buf = mm_camera_intf_unmap_buf,
.add_channel = mm_camera_intf_add_channel,
.delete_channel = mm_camera_intf_del_channel,
.get_bundle_info = mm_camera_intf_get_bundle_info,
.add_stream = mm_camera_intf_add_stream,
.link_stream = mm_camera_intf_link_stream,
.delete_stream = mm_camera_intf_del_stream,
//配置stream的方法
.config_stream = mm_camera_intf_config_stream,
.qbuf = mm_camera_intf_qbuf,
.get_queued_buf_count = mm_camera_intf_get_queued_buf_count,
.map_stream_buf = mm_camera_intf_map_stream_buf,
.unmap_stream_buf = mm_camera_intf_unmap_stream_buf,
.set_stream_parms = mm_camera_intf_set_stream_parms,
.get_stream_parms = mm_camera_intf_get_stream_parms,
.start_channel = mm_camera_intf_start_channel,
.stop_channel = mm_camera_intf_stop_channel,
.request_super_buf = mm_camera_intf_request_super_buf,
.cancel_super_buf_request =
mm_camera_intf_cancel_super_buf_request,
.flush_super_buf_queue = mm_camera_intf_flush_super_buf_queue,
.configure_notify_mode = mm_camera_intf_configure_notify_mode,
//处理capture的方法
.process_advanced_capture =
mm_camera_intf_process_advanced_capture
};
接着分析mm_camera_open方法:
//Mm_camera.c
int32_t mm_camera_open(mm_camera_obj_t *my_obj){
...
do{
n_try--;
//根据设备名字,打开相应的设备驱动fd
my_obj->ctrl_fd = open(dev_name, O_RDWR | O_NONBLOCK);
if((my_obj->ctrl_fd >= 0) || (errno != EIO) || (n_try <= 0 ))
{
break;
}
usleep(sleep_msec * 1000U);
}while (n_try > 0);
...
//打开domain socket
n_try = MM_CAMERA_DEV_OPEN_TRIES;
do {
n_try--;
my_obj->ds_fd = mm_camera_socket_create(cam_idx,
MM_CAMERA_SOCK_TYPE_UDP);
usleep(sleep_msec * 1000U);
} while (n_try > 0);
...
//初始化锁
pthread_mutex_init(&my_obj->msg_lock, NULL);
pthread_mutex_init(&my_obj->cb_lock, NULL);
pthread_mutex_init(&my_obj->evt_lock, NULL);
pthread_cond_init(&my_obj->evt_cond, NULL);
//开启线程,它的线程体在mm_camera_dispatch_app_event方法中
mm_camera_cmd_thread_launch(&my_obj->evt_thread,
mm_camera_dispatch_app_event,
(void *)my_obj);
mm_camera_poll_thread_launch(&my_obj->evt_poll_thread,
MM_CAMERA_POLL_TYPE_EVT);
mm_camera_evt_sub(my_obj, TRUE);
return rc;
...
}
由代码可知,它会打开Camera的设备文件,然后开启dispatch_app_event线程,线程方法体mm_camera_dispatch_app_event的代码如下:
//Mm_camera.c
static void mm_camera_dispatch_app_event(mm_camera_cmdcb_t
*cmd_cb,void* user_data){
mm_camera_cmd_thread_name("mm_cam_event");
int i;
mm_camera_event_t *event = &cmd_cb->u.evt;
mm_camera_obj_t * my_obj = (mm_camera_obj_t *)user_data;
if (NULL != my_obj) {
pthread_mutex_lock(&my_obj->cb_lock);
for(i = 0; i < MM_CAMERA_EVT_ENTRY_MAX; i++) {
if(my_obj->evt.evt[i].evt_cb) {
//调用注册好的evt_cb(此处即camEvtHandle)
my_obj->evt.evt[i].evt_cb(
my_obj->my_hdl,
event,
my_obj->evt.evt[i].user_data);
}
}
pthread_mutex_unlock(&my_obj->cb_lock);
}
}
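mm_camera_dispatch_app_event所做的,就是遍历注册表,逐个调用evt_cb并带回注册时保存的user_data。下面用Java示意这一"注册回调 + 携带user_data分发"的过程(接口与类名为本文假设):

```java
import java.util.ArrayList;
import java.util.List;

// 对应 evt_cb 的签名:事件 + 注册时保存的user_data
interface EventCallback {
    void onEvent(String event, Object userData);
}

public class EventDispatchDemo {
    // 一条注册记录:回调 + user_data,对应 my_obj->evt.evt[i]
    static class Entry {
        final EventCallback cb;
        final Object userData;
        Entry(EventCallback cb, Object userData) {
            this.cb = cb;
            this.userData = userData;
        }
    }

    private final List<Entry> mEntries = new ArrayList<>();

    // 对应 register_event_notify
    public void registerEventNotify(EventCallback cb, Object userData) {
        mEntries.add(new Entry(cb, userData));
    }

    // 对应 dispatch:遍历所有注册项并回调
    public void dispatch(String event) {
        for (Entry e : mEntries) {
            e.cb.onEvent(event, e.userData);
        }
    }

    public static void main(String[] args) {
        EventDispatchDemo d = new EventDispatchDemo();
        StringBuilder log = new StringBuilder();
        d.registerEventNotify(
                (evt, user) -> log.append(user).append(':').append(evt), "hw");
        d.dispatch("DAEMON_PULL_REQ");
        System.out.println(log); // hw:DAEMON_PULL_REQ
    }
}
```

user_data的作用与camera_open里传入的`(void *)this`一致:分发时能找回当初注册回调的那个对象。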
最后会调用mm-camera-interface中注册好的事件处理evt_cb,它就是在前面注册好的camEvtHandle:
//
void QCamera3HardwareInterface::camEvtHandle(uint32_t
/*camera_handle*/,mm_camera_event_t *evt,
void *user_data){
//获取QCamera3HardwareInterface接口指针
QCamera3HardwareInterface *obj = (QCamera3HardwareInterface
*)user_data;
if (obj && evt) {
switch(evt->server_event_type) {
case CAM_EVENT_TYPE_DAEMON_DIED:
camera3_notify_msg_t notify_msg;
memset(&notify_msg, 0, sizeof(camera3_notify_msg_t));
notify_msg.type = CAMERA3_MSG_ERROR;
notify_msg.message.error.error_code = CAMERA3_MSG_ERROR_DEVICE;
notify_msg.message.error.error_stream = NULL;
notify_msg.message.error.frame_number = 0;
obj->mCallbackOps->notify(obj->mCallbackOps, &notify_msg);
break;
case CAM_EVENT_TYPE_DAEMON_PULL_REQ:
pthread_mutex_lock(&obj->mMutex);
obj->mWokenUpByDaemon = true;
//开启process_capture_request
obj->unblockRequestIfNecessary();
pthread_mutex_unlock(&obj->mMutex);
break;
default:
break;
}
} else {
}
}
由代码可知,它会调用QCamera3HardwareInterface的unblockRequestIfNecessary来发起结果处理请求:
//
void QCamera3HardwareInterface::unblockRequestIfNecessary()
{
// Unblock process_capture_request
//开启process_capture_request
pthread_cond_signal(&mRequestCond);
}
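unblockRequestIfNecessary只是对mRequestCond做一次signal,把阻塞在条件变量上的process_capture_request唤醒。用Java的wait/notify可以粗略复现这种"请求线程等待、事件线程唤醒"的结构(示例为通用并发写法,并非相机代码):

```java
public class CondSignalDemo {
    private final Object mLock = new Object();
    private boolean mWokenUp = false;

    // 请求线程:对应 process_capture_request 里阻塞等待的部分
    public void waitForDaemon() throws InterruptedException {
        synchronized (mLock) {
            while (!mWokenUp) { // 用循环防止虚假唤醒
                mLock.wait();
            }
            mWokenUp = false;
        }
    }

    // 事件线程:对应 unblockRequestIfNecessary 里的 pthread_cond_signal
    public void unblockRequestIfNecessary() {
        synchronized (mLock) {
            mWokenUp = true;
            mLock.notifyAll();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CondSignalDemo demo = new CondSignalDemo();
        Thread requestThread = new Thread(() -> {
            try {
                demo.waitForDaemon();
                System.out.println("request unblocked");
            } catch (InterruptedException ignored) {
            }
        });
        requestThread.start();
        Thread.sleep(50); // 让请求线程先进入等待(仅为演示)
        demo.unblockRequestIfNecessary();
        requestThread.join();
    }
}
```

注意mWokenUp标志的作用与QCamera3HardwareInterface里的mWokenUpByDaemon类似:即使signal先于wait发生,等待方也不会永久阻塞。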
在初始化QCamera3HardwareInterface对象的时候,就绑定了处理Metadata的回调captureResultCb方法,它主要是对数据源进行相应的处理,而具体的capture请求结果还是由process_capture_request来处理。这里调用unblockRequestIfNecessary来触发process_capture_request继续执行:在Camera框架中,发起请求时会启动一个RequestThread线程,其threadLoop方法会不停地调用process_capture_request来进行请求的处理,而HAL最后会回调Camera3Device中的processCaptureResult方法来进行结果处理:
//
void Camera3Device::processCaptureResult(const
camera3_capture_result *result) {
...
{
...
if (mUsePartialResult && result->result != NULL) {
if (mDeviceVersion >= CAMERA_DEVICE_API_VERSION_3_2) {
...
if (isPartialResult) {
request.collectedPartialResult.append(result->result);
}
} else {
camera_metadata_ro_entry_t partialResultEntry;
res = find_camera_metadata_ro_entry(result->result,
ANDROID_QUIRKS_PARTIAL_RESULT,
&partialResultEntry);
if (res != NAME_NOT_FOUND
&& partialResultEntry.count > 0
&& partialResultEntry.data.u8[0]
== ANDROID_QUIRKS_PARTIAL_RESULT_PARTIAL) {
isPartialResult = true;
request.collectedPartialResult.append(
result->result);
request.collectedPartialResult.erase(
ANDROID_QUIRKS_PARTIAL_RESULT);
}
}
if (isPartialResult) {
// Fire off a 3A-only result if possible
if (!request.partialResult.haveSent3A) {
//处理3A结果
request.partialResult.haveSent3A =
processPartial3AResult(frameNumber,
request.collectedPartialResult,
request.resultExtras);
}
}
}
...
//查找camera元数据入口
camera_metadata_ro_entry_t entry;
res = find_camera_metadata_ro_entry(result->result,
ANDROID_SENSOR_TIMESTAMP, &entry);
if (shutterTimestamp == 0) {
request.pendingOutputBuffers.appendArray(result->output_buffers,
result->num_output_buffers);
} else {
//返回处理的output buffer
returnOutputBuffers(result->output_buffers,
result->num_output_buffers, shutterTimestamp);
}
if (result->result != NULL && !isPartialResult) {
if (shutterTimestamp == 0) {
request.pendingMetadata = result->result;
request.collectedPartialResult =
collectedPartialResult;
} else {
CameraMetadata metadata;
metadata = result->result;
//发送Capture结果,即调用通知回调
sendCaptureResult(metadata, request.resultExtras,
collectedPartialResult, frameNumber,
hasInputBufferInRequest,
request.aeTriggerCancelOverride);
}
}
removeInFlightRequestIfReadyLocked(idx);
} // scope for mInFlightLock
if (result->input_buffer != NULL) {
if (hasInputBufferInRequest) {
Camera3Stream *stream =
Camera3Stream::cast(result->input_buffer->stream);
//返回处理的input buffer
res =
stream->returnInputBuffer(*(result->input_buffer));
} else {}
}
}
分析returnOutputBuffers方法(input buffer的returnInputBuffer方法流程类似):
//
void Camera3Device::returnOutputBuffers(const
camera3_stream_buffer_t *outputBuffers, size_t
numBuffers, nsecs_t timestamp) {
for (size_t i = 0; i < numBuffers; i++)
{
Camera3Stream *stream =
Camera3Stream::cast(outputBuffers[i].stream);
status_t res = stream->returnBuffer(outputBuffers[i],
timestamp);
...
}
}
方法里调用了returnBuffer方法:
//
status_t Camera3Stream::returnBuffer(const camera3_stream_buffer
&buffer, nsecs_t timestamp) {
//返回buffer
status_t res = returnBufferLocked(buffer, timestamp);
if (res == OK) {
fireBufferListenersLocked(buffer, /*acquired*/false,
/*output*/true);
mOutputBufferReturnedSignal.signal();
}
return res;
}
再继续看returnBufferLocked,它调用了returnAnyBufferLocked方法,而
returnAnyBufferLocked方法又调用了returnBufferCheckedLocked方法,现在分析
returnBufferCheckedLocked:
//
status_t Camera3OutputStream::returnBufferCheckedLocked(const
camera3_stream_buffer &buffer,
nsecs_t timestamp, bool output, /*out*/sp<Fence> *releaseFenceOut) {
...
// Fence management - always honor release fence from HAL
sp<Fence> releaseFence = new Fence(buffer.release_fence);
int anwReleaseFence = releaseFence->dup();
if (buffer.status == CAMERA3_BUFFER_STATUS_ERROR) {
// Cancel buffer
res = currentConsumer->cancelBuffer(currentConsumer.get(),
container_of(buffer.buffer, ANativeWindowBuffer,
handle),
anwReleaseFence);
...
} else {
...
res = currentConsumer->queueBuffer(currentConsumer.get(),
container_of(buffer.buffer, ANativeWindowBuffer,
handle),
anwReleaseFence);
...
}
...
return res;
}
由代码可知,如果Buffer没有出现状态错误,它会调用currentConsumer的queueBuffer方法,而具体的Consumer则是在应用层初始化Camera时进行绑定的,典型的Consumer有SurfaceTexture、ImageReader等。而在Native层中,它会调用BufferQueueProducer的queueBuffer方法:
//
status_t BufferQueueProducer::queueBuffer(int slot,
const QueueBufferInput &input, QueueBufferOutput *output) {
...
//初始化Frame可用/被替换的监听器
sp<IConsumerListener> frameAvailableListener;
sp<IConsumerListener> frameReplacedListener;
int callbackTicket = 0;
BufferItem item;
{ // Autolock scope
...
const sp<GraphicBuffer>&
graphicBuffer(mSlots[slot].mGraphicBuffer);
Rect bufferRect(graphicBuffer->getWidth(),
graphicBuffer->getHeight());
Rect croppedRect;
crop.intersect(bufferRect, &croppedRect);
...
//如果队列为空,直接入队
if (mCore->mQueue.empty()) {
mCore->mQueue.push_back(item);
frameAvailableListener = mCore->mConsumerListener;
} else {
//否则,对队首Buffer进行处理,并选取相应的监听
BufferQueueCore::Fifo::iterator
front(mCore->mQueue.begin());
if (front->mIsDroppable) {
if (mCore->stillTracking(front)) {
mSlots[front->mSlot].mBufferState =
BufferSlot::FREE;
mCore->mFreeBuffers.push_front(front->mSlot);
}
*front = item;
frameReplacedListener = mCore->mConsumerListener;
} else {
mCore->mQueue.push_back(item);
frameAvailableListener = mCore->mConsumerListener;
}
}
mCore->mBufferHasBeenQueued = true;
mCore->mDequeueCondition.broadcast();
output->inflate(mCore->mDefaultWidth,
mCore->mDefaultHeight, mCore->mTransformHint,
static_cast<uint32_t>(mCore->mQueue.size()));
// Take a ticket for the callback functions
callbackTicket = mNextCallbackTicket++;
mCore->validateConsistencyLocked();
} // Autolock scope
...
{
...
if (frameAvailableListener != NULL) {
//回调SurfaceTexture中定义好的监听IConsumerListener的onFrameAvailable方法来对数据进行处理
frameAvailableListener->onFrameAvailable(item);
} else if (frameReplacedListener != NULL) {
frameReplacedListener->onFrameReplaced(item);
}
++mCurrentCallbackTicket;
mCallbackCondition.broadcast();
}
return NO_ERROR;
}
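queueBuffer的核心分支是:队列为空则入队并通知onFrameAvailable;队首可丢弃(mIsDroppable)则原地替换并通知onFrameReplaced;否则追加。下面的Java小例只保留这一决策骨架(类名与字段为演示假设,不涉及真实的BufferQueue实现):

```java
import java.util.ArrayDeque;

public class FrameQueueDemo {
    static class Item {
        final int frame;
        final boolean droppable;
        Item(int frame, boolean droppable) {
            this.frame = frame;
            this.droppable = droppable;
        }
    }

    private final ArrayDeque<Item> mQueue = new ArrayDeque<>();

    // 返回"available"或"replaced",对应最终回调到哪一个listener
    public String queueBuffer(Item item) {
        if (mQueue.isEmpty()) {
            mQueue.addLast(item);
            return "available";
        }
        Item front = mQueue.peekFirst();
        if (front.droppable) {
            // 对应 *front = item:丢掉旧帧,原地替换
            mQueue.pollFirst();
            mQueue.addFirst(item);
            return "replaced";
        }
        mQueue.addLast(item);
        return "available";
    }

    public int size() {
        return mQueue.size();
    }

    public static void main(String[] args) {
        FrameQueueDemo q = new FrameQueueDemo();
        System.out.println(q.queueBuffer(new Item(1, true)));  // available
        System.out.println(q.queueBuffer(new Item(2, false))); // replaced
        System.out.println(q.size());                          // 1
    }
}
```

"替换而非追加"正是预览这类只关心最新帧的场景不会无限积压Buffer的原因。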
由代码可知,它最后会调用Consumer的回调FrameAvailableListener的onFrameAvailable方法,到这里, 就比较清晰为什么我们在写Camera应用,为其初始化Surface时,我们需要重写FrameAvailableListener了,因为在此方法里 面,会进行结果的处理,至此,Camera
HAL的Open流程就分析结束了。下面给出流程的时序图:
三. android6.0源码分析之Camera API2.0下的初始化流程分析
在文章android源码分析之Camera API2.0简介中,对Camera API2.0的框架以及代码做了简单介绍。本文将基于android6.0源码,分析Camera API2.0下Camera2内置应用对Camera的初始化流程,主要涉及Camera HAL3.0、Java层的IPC Binder、Native层CameraService的C/S服务架构等关键点。
1、Camera2初始化的应用层流程分析
Camera2的初始化流程与Camera1.0有所区别,本文将就Camera2的内置应用来分析Camera2.0的初始化过程。Camera2.0首先启动的是CameraActivity,它继承自QuickActivity。在代码中你会发现没有重写onCreate等生命周期方法,因为此处采用的是模板方法的设计模式:QuickActivity的onCreate方法会调用onCreateTasks等钩子方法,所以要看onCreate的逻辑只须看onCreateTasks方法即可:
//
@Override
public void onCreateTasks(Bundle state) {
Profile profile = mProfiler.create("onCreateTasks").start();
...
mOnCreateTime = System.currentTimeMillis();
mAppContext = getApplicationContext();
mMainHandler = new MainHandler(this, getMainLooper());
...
try {
//初始化OneCameraOpener对象
①mOneCameraOpener = OneCameraModule.provideOneCameraOpener(
mFeatureConfig, mAppContext, mActiveCameraDeviceTracker,
ResolutionUtil.getDisplayMetrics(this));
mOneCameraManager = OneCameraModule.provideOneCameraManager();
} catch (OneCameraException e) {...}
...
//建立模块信息
②ModulesInfo.setupModules(mAppContext, mModuleManager, mFeatureConfig);
...
//进行初始化
③mCurrentModule.init(this, isSecureCamera(), isCaptureIntent());
...
}
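前文提到的模板方法模式(父类QuickActivity固定onCreate流程,子类只填onCreateTasks这类钩子)可以用如下最小Java示例勾勒(类名为演示假设):

```java
// 模拟QuickActivity:onCreate固定流程,把可变部分留给钩子方法
abstract class QuickActivityDemo {
    final StringBuilder trace = new StringBuilder();

    public final void onCreate() {
        trace.append("base-setup;");   // 父类统一做的事(计时、日志等)
        onCreateTasks();               // 子类的真正逻辑
        trace.append("base-finish;");
    }

    protected abstract void onCreateTasks();
}

// 模拟CameraActivity:只实现钩子,不碰生命周期入口
public class TemplateDemo extends QuickActivityDemo {
    @Override
    protected void onCreateTasks() {
        trace.append("child-tasks;");
    }

    public static void main(String[] args) {
        TemplateDemo a = new TemplateDemo();
        a.onCreate();
        System.out.println(a.trace); // base-setup;child-tasks;base-finish;
    }
}
```

这也是为什么在CameraActivity中找不到onCreate却能看到onCreateTasks的原因。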
如代码所示,重要的有以上三点,先看第一点:
//
public static OneCameraOpener provideOneCameraOpener(OneCameraFeatureConfig
featureConfig, Context context, ActiveCameraDeviceTracker
activeCameraDeviceTracker, DisplayMetrics displayMetrics)
throws OneCameraException {
//创建OneCameraOpener对象
Optional<OneCameraOpener> manager = Camera2OneCameraOpenerImpl.create(
featureConfig, context, activeCameraDeviceTracker, displayMetrics);
if (!manager.isPresent()) {
manager = LegacyOneCameraOpenerImpl.create();
}
...
return manager.get();
}
它调用Camera2OneCameraOpenerImpl的create方法来获得一个OneCameraOpener对象,以供CameraActivity之后的操作使用,继续看create方法:
//
public static Optional<OneCameraOpener> create(OneCameraFeatureConfig
featureConfig, Context context, ActiveCameraDeviceTracker
activeCameraDeviceTracker, DisplayMetrics displayMetrics) {
...
CameraManager cameraManager;
try {
cameraManager = AndroidServices.instance().provideCameraManager();
} catch (IllegalStateException ex) {...}
//新建一个Camera2OneCameraOpenerImpl对象
OneCameraOpener oneCameraOpener = new Camera2OneCameraOpenerImpl(
featureConfig, context, cameraManager,
activeCameraDeviceTracker, displayMetrics);
return Optional.of(oneCameraOpener);
}
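create返回的是Optional,provideOneCameraOpener在取值前先判isPresent,不可用时再走降级分支。用标准库java.util.Optional可以示意这种写法(方法名与返回值均为演示假设):

```java
import java.util.Optional;

public class OptionalCreateDemo {
    // 模拟 Camera2OneCameraOpenerImpl.create:条件不满足时返回empty
    static Optional<String> create(boolean hasCamera2) {
        if (!hasCamera2) {
            return Optional.empty();
        }
        return Optional.of("camera2-opener");
    }

    // 模拟 provideOneCameraOpener:先尝试主实现,失败则降级
    static String provideOpener(boolean hasCamera2) {
        Optional<String> opener = create(hasCamera2);
        if (!opener.isPresent()) {
            return "legacy-opener"; // 对应legacy实现一类的回退
        }
        return opener.get();
    }

    public static void main(String[] args) {
        System.out.println(provideOpener(true));  // camera2-opener
        System.out.println(provideOpener(false)); // legacy-opener
    }
}
```

相比直接返回null,Optional把"可能创建失败"显式写进了方法签名,调用方无法忽略这种情况。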
很明显,它首先获取一个cameraManger对象,然后根据这个cameraManager对象来新创建了一个 Camera2OneCameraOpenerImpl对象,所以第一步主要是为了获取一个OneCameraOpener对象,它的实现为 Camera2OneCameraOpenerImpl类。
继续看第二步,ModulesInfo.setupModules:
//
public static void setupModules(Context context, ModuleManager moduleManager,
OneCameraFeatureConfig config) {
Resources res = context.getResources();
int photoModuleId = context.getResources().getInteger(
R.integer.camera_mode_photo);
//注册Photo模块
registerPhotoModule(moduleManager, photoModuleId,
SettingsScopeNamespaces.PHOTO, config.isUsingCaptureModule());
//将Photo模块设置为默认的模块
moduleManager.setDefaultModuleIndex(photoModuleId);
//注册Video模块
registerVideoModule(moduleManager, res.getInteger(
R.integer.camera_mode_video), SettingsScopeNamespaces.VIDEO);
if (PhotoSphereHelper.hasLightCycleCapture(context)) {//若支持Light Cycle拍摄
//注册广角镜头模块
registerWideAngleModule(moduleManager, res.getInteger(
R.integer.camera_mode_panorama),
SettingsScopeNamespaces.PANORAMA);
//注册光球(Photo Sphere)模块
registerPhotoSphereModule(moduleManager, res.getInteger(
R.integer.camera_mode_photosphere),
SettingsScopeNamespaces.PANORAMA);
}
//若支持重聚焦
if (RefocusHelper.hasRefocusCapture(context)) {
//注册重聚焦模块
registerRefocusModule(moduleManager, res.getInteger(
R.integer.camera_mode_refocus),
SettingsScopeNamespaces.REFOCUS);
}
//如果Gcam作为单独的模块
if (GcamHelper.hasGcamAsSeparateModule(config)) {
//注册Gcam模块
registerGcamModule(moduleManager, res.getInteger(
R.integer.camera_mode_gcam), SettingsScopeNamespaces.PHOTO,
config.getHdrPlusSupportLevel());
}
int imageCaptureIntentModuleId = res.getInteger(
R.integer.camera_mode_capture_intent);
registerCaptureIntentModule(moduleManager,
imageCaptureIntentModuleId, SettingsScopeNamespaces.PHOTO,
config.isUsingCaptureModule());
}
代码根据配置信息,进行一系列模块的注册,其中PhotoModule和VideoModule总会被注册,而其他module则根据配置决定,因为打开Camera应用,既可以拍照片也可以拍视频。此处只分析PhotoModule的注册:
//
private static void registerPhotoModule(ModuleManager moduleManager, final
int moduleId, final String namespace,
final boolean enableCaptureModule) {
//向ModuleManager注册PhotoModule模块
moduleManager.registerModule(new ModuleManager.ModuleAgent() {
@Override
public int getModuleId() {
return moduleId;
}
@Override
public boolean requestAppForCamera() {
return !enableCaptureModule;
}
@Override
public String getScopeNamespace() {
return namespace;
}
@Override
public ModuleController createModule(AppController app, Intent
intent) {
Log.v(TAG, "EnableCaptureModule = " + enableCaptureModule);
//创建ModuleController
return enableCaptureModule ? new CaptureModule(app)
: new PhotoModule(app);
}
});
}
由代码可知,注册时提供的Agent最终会创建一个CaptureModule(或PhotoModule)实例,而CaptureModule实现了ModuleController接口,即创建了一个Capture模式下的ModuleController对象;而ModuleManager的具体实现则为ModuleManagerImpl。
至此,前两步已经获得了OneCameraOpener,并新建、注册了ModuleController,接下来分析第三步,即CaptureModule的init(this, isSecureCamera(), isCaptureIntent()):
//
public void init(CameraActivity activity, boolean isSecureCamera, boolean
isCaptureIntent) {
...
HandlerThread thread = new HandlerThread("CaptureModule.mCameraHandler");
thread.start();
mCameraHandler = new Handler(thread.getLooper());
//获取第一步中创建的OneCameraOpener对象
mOneCameraOpener = mAppController.getCameraOpener();
try {
//获取前面创建的OneCameraManager对象
mOneCameraManager = OneCameraModule.provideOneCameraManager();
} catch (OneCameraException e) {
Log.e(TAG, "Unable to provide a OneCameraManager. ", e);
}
...
//新建CaptureModule的UI
mUI = new CaptureModuleUI(activity, mAppController.
getModuleLayoutRoot(), mUIListener);
//设置预览状态的监听
mAppController.setPreviewStatusListener(mPreviewStatusListener);
synchronized (mSurfaceTextureLock) {
//获取SurfaceTexture
mPreviewSurfaceTexture = mAppController.getCameraAppUI()
.getSurfaceTexture();
}
}
首先获取前面创建的OneCameraOpener对象以及OneCameraManager对象,然后再设置预览状态监听,这里主要分析预览状态的监听:
//
private final PreviewStatusListener mPreviewStatusListener = new
PreviewStatusListener() {
...
@Override
public void onSurfaceTextureAvailable(SurfaceTexture surface,
int width, int height) {
updatePreviewTransform(width, height, true);
synchronized (mSurfaceTextureLock) {
mPreviewSurfaceTexture = surface;
}
//打开Camera
reopenCamera();
}
@Override
public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
Log.d(TAG, "onSurfaceTextureDestroyed");
synchronized (mSurfaceTextureLock) {
mPreviewSurfaceTexture = null;
}
//Close the camera
closeCamera();
return true;
}
@Override
public void onSurfaceTextureSizeChanged(SurfaceTexture surface,
int width, int height) {
//Update the preview buffer size
updatePreviewBufferSize();
}
...
};
As the code shows, once the SurfaceTexture becomes available, reopenCamera() is called to open the camera, so we continue with reopenCamera():
//
private void reopenCamera() {
if (mPaused) {
return;
}
AsyncTask.THREAD_POOL_EXECUTOR.execute(new Runnable() {
@Override
public void run() {
closeCamera();
if (!mAppController.isPaused()) {
//Open the camera and start the preview
openCameraAndStartPreview();
}
}
});
}
It runs the start-up as an asynchronous thread-pool task: it first closes any open camera and then, provided the AppController is not paused, opens the camera and starts the preview, so we continue with openCameraAndStartPreview():
//
private void openCameraAndStartPreview() {
...
if (mOneCameraOpener == null) {
Log.e(TAG, "no available OneCameraManager, showing error dialog");
//Release the CameraOpenCloseLock
mCameraOpenCloseLock.release();
mAppController.getFatalErrorHandler().onGenericCameraAccessFailure();
("No OneCameraManager");
return;
}
// Derive objects necessary for camera creation.
MainThread mainThread = MainThread.create();
//Look up the CameraId to open
CameraId cameraId = mOneCameraManager.findFirstCameraFacing(
mCameraFacing);
...
//Open the camera
mOneCameraOpener.open(cameraId, captureSetting, mCameraHandler,
mainThread, imageRotationCalculator, mBurstController,
mSoundPlayer, new OpenCallback() {
@Override
public void onFailure() {
//Handle the failure
...
}
@Override
public void onCameraClosed() {
...
}
@Override
public void onCameraOpened(@Nonnull final OneCamera camera) {
Log.d(TAG, "onCameraOpened: " + camera);
mCamera = camera;
if (mAppController.isPaused()) {
onFailure();
return;
}
...
mainThread.execute(new Runnable() {
@Override
public void run() {
//Notify the UI that the camera state changed
mAppController.getCameraAppUI().onChangeCamera();
//Enable the shutter button
mAppController.getButtonManager().enableCameraButton();
}
});
//The camera opened successfully; start the preview
camera.startPreview(new Surface(getPreviewSurfaceTexture()),
new CaptureReadyCallback() {
@Override
public void onSetupFailed() {
...
}
@Override
public void onReadyForCapture() {
//Release the lock
mCameraOpenCloseLock.release();
mainThread.execute(new Runnable() {
@Override
public void run() {
...
onPreviewStarted();
...
onReadyStateChanged(true);
//Register the CaptureModule's capture-ready state listener
mCamera.setReadyStateChangedListener(...);
mUI.initializeZoom(mCamera.getMaxZoom());
mCamera.setFocusStateListener(...);
}
});
}
});
}
}, mAppController.getFatalErrorHandler());
("()");
}
}
It mainly calls Camera2OneCameraOpenerImpl's open method to open the camera and supplies a callback to handle the outcome: on failure it releases mCameraOpenCloseLock and pauses mAppController; on success it notifies the UI, starts the camera preview, and defines the various preview callbacks. Here we focus on the open path, so we continue:
//
@Override
public void open(
...
mActiveCameraDeviceTracker.onCameraOpening(cameraKey);
//Open the camera: this calls the framework-layer CameraManager.openCamera and enters the frameworks layer
mCameraManager.openCamera(cameraKey.getValue(),
new CameraDevice.StateCallback() {
private boolean isFirstCallback = true;
@Override
...
@Override
public void onOpened(CameraDevice device) {
//First invocation of this callback
if (isFirstCallback) {
isFirstCallback = false;
try {
CameraCharacteristics characteristics = mCameraManager
.getCameraCharacteristics(device.getId());
...
//Create the OneCamera object
OneCamera oneCamera = OneCameraCreator.create(device,
characteristics, mFeatureConfig, captureSetting,
mDisplayMetrics, mContext, mainThread,
imageRotationCalculator, burstController, soundPlayer,
fatalErrorHandler);
if (oneCamera != null) {
//If oneCamera is non-null, invoke onCameraOpened, analyzed below
openCallback.onCameraOpened(oneCamera);
} else {
...
openCallback.onFailure();
}
} catch (CameraAccessException e) {
openCallback.onFailure();
} catch (OneCameraAccessException e) {
Log.d(TAG, "Could not create OneCamera", e);
openCallback.onFailure();
}
}
}
}, handler);
...
}
With that, the application-layer part of the Camera API 2.0 initialization flow is essentially covered; the next step calls CameraManager's openCamera method to enter the framework layer and continue initializing the camera. The application-layer initialization sequence diagram follows.
2. Framework-layer analysis of the Camera2 initialization flow
From the analysis above, control now passes from the application layer into the framework layer through CameraManager's openCamera, which also receives the CameraDevice state callbacks (their concrete handling is not analyzed here). Follow openCamera():
// CameraManager.java (frameworks/base/core/java/android/hardware/camera2)
@RequiresPermission(android.Manifest.permission.CAMERA)
public void openCamera(@NonNull String cameraId, @NonNull final
CameraDevice.StateCallback callback, @Nullable Handler handler)
throws CameraAccessException {
...
openCameraDeviceUserAsync(cameraId, callback, handler);
}
This is where Camera 2.0 clearly differs from Camera 1.0. Camera 1.0 connected to the camera from an asynchronous thread through JNI, calling native_setup in android_hardware_Camera.cpp and talking to the CameraService over the C++ Binder; here the communication goes directly over the Java-layer Binder. Look at openCameraDeviceUserAsync first:
// CameraManager.java (frameworks/base/core/java/android/hardware/camera2)
private CameraDevice openCameraDeviceUserAsync(String cameraId,
CameraDevice.StateCallback callback, Handler handler)
throws CameraAccessException {
CameraCharacteristics characteristics = getCameraCharacteristics(
cameraId);
CameraDevice device = null;
try {
synchronized (mLock) {
ICameraDeviceUser cameraUser = null;
//Initialize a CameraDevice object
CameraDeviceImpl deviceImpl =
new CameraDeviceImpl(cameraId,
callback, handler, characteristics);
BinderHolder holder = new BinderHolder();
//Get the device callbacks
ICameraDeviceCallbacks callbacks = deviceImpl.getCallbacks();
int id = Integer.parseInt(cameraId);
try {
if (supportsCamera2ApiLocked(cameraId)) {
//Get the CameraService via the Java-layer Binder
ICameraService cameraService = CameraManagerGlobal.get()
.getCameraService();
...
//Connect to the camera device through the CameraService
cameraService.connectDevice(callbacks, id, mContext
.getOpPackageName(), USE_CALLING_UID, holder);
//Get the connected CameraUser object, used to talk to the CameraService
cameraUser = ICameraDeviceUser.Stub.asInterface(
holder.getBinder());
} else {
//Use the legacy API
cameraUser = CameraDeviceUserShim.connectBinderShim(
callbacks, id);
}
} catch (CameraRuntimeException e) {
...
} catch (RemoteException e) {
...
}
//Wrap it into the CameraDeviceImpl object for the application layer to use
deviceImpl.setRemoteDevice(cameraUser);
device = deviceImpl;
}
} catch (NumberFormatException e) {
...
} catch (CameraRuntimeException e) {
throw e.asChecked();
}
return device;
}
The purpose of this method is to connect through the CameraService and obtain the CameraDevice object used to communicate with the camera. The code first obtains the CameraService via the Java-layer Binder, then calls its connectDevice method to connect the CameraDevice; the service returns an ICameraDeviceUser, which is then wrapped into the Java-layer CameraDevice object, and all subsequent communication with the camera goes through the CameraDevice interface. Next, the native-side CameraDevice initialization:
// CameraService.cpp, where device is the output parameter
status_t CameraService::connectDevice(const sp<ICameraDeviceCallbacks>& cameraCb, int cameraId,
const String16& clientPackageName, int clientUid, /*out*/sp<ICameraDeviceUser>& device) {
status_t ret = NO_ERROR;
String8 id = String8::format("%d", cameraId);
sp<CameraDeviceClient> client = nullptr;
ret = connectHelper<ICameraDeviceCallbacks,CameraDeviceClient>(cameraCb, id,
CAMERA_HAL_API_VERSION_UNSPECIFIED, clientPackageName, clientUid, API_2, false, false,
/*out*/client); //client is the output object
...
device = client;
return NO_ERROR;
}
The native connectDevice simply delegates to connectHelper, so we continue with connectHelper:
//CameraService.h
template<class CALLBACK, class CLIENT>
status_t CameraService::connectHelper(const sp<CALLBACK>& cameraCb, const String8& cameraId,
int halVersion, const String16& clientPackageName, int clientUid,
apiLevel effectiveApiLevel, bool legacyMode, bool shimUpdateOnly,
/*out*/sp<CLIENT>& device) {
status_t ret = NO_ERROR;
String8 clientName8(clientPackageName);
int clientPid = getCallingPid();
...
sp<CLIENT> client = nullptr;
{
...
//Give the flashlight a chance to close the device if necessary
mFlashlight->prepareDeviceOpen(cameraId);
//Get the integer camera id
int id = cameraIdToInt(cameraId);
...
//Get the device version; here it is device 3
int deviceVersion = getDeviceVersion(id, /*out*/&facing);
sp<BasicClient> tmp = nullptr;
//Build the client object
if((ret = makeClient(this, cameraCb, clientPackageName, cameraId, facing, clientPid,
clientUid, getpid(), legacyMode, halVersion, deviceVersion, effectiveApiLevel,
/*out*/&tmp)) != NO_ERROR) {
return ret;
}
client = static_cast<CLIENT*>(tmp.get());
//Call the client's initialize to initialize the module
if ((ret = client->initialize(mModule)) != OK) {
ALOGE("%s: Could not initialize client from HAL module.", __FUNCTION__);
return ret;
}
sp<IBinder> remoteCallback = client->getRemote();
if (remoteCallback != nullptr) {
remoteCallback->linkToDeath(this);
}
} // lock is destroyed, allow further connect calls
//Assign the client to the output device
device = client;
return NO_ERROR;
}
The CameraService obtains a client matching the camera's parameters through makeClient, then calls the client's initialize to initialize it. First, makeClient:
//CameraService.cpp
status_t CameraService::makeClient(const sp<CameraService>& cameraService,
const sp<IInterface>& cameraCb, const String16& packageName, const String8& cameraId,
int facing, int clientPid, uid_t clientUid, int servicePid, bool legacyMode,
int halVersion, int deviceVersion, apiLevel effectiveApiLevel,
/*out*/sp<BasicClient>* client) {
//Convert the string camera id to an integer
int id = cameraIdToInt(cameraId);
...
if (halVersion < 0 || halVersion == deviceVersion) { //does the requested HAL version match the device version?
switch(deviceVersion) {
case CAMERA_DEVICE_API_VERSION_1_0:
if (effectiveApiLevel == API_1) { // Camera1 API route
sp<ICameraClient> tmp = static_cast<ICameraClient*>(cameraCb.get());
*client = new CameraClient(cameraService, tmp, packageName, id, facing,
clientPid, clientUid, getpid(), legacyMode);
} else { // Camera2 API route
ALOGW("Camera using old HAL version: %d", deviceVersion);
return -EOPNOTSUPP;
}
break;
case CAMERA_DEVICE_API_VERSION_2_0:
case CAMERA_DEVICE_API_VERSION_2_1:
case CAMERA_DEVICE_API_VERSION_3_0:
case CAMERA_DEVICE_API_VERSION_3_1:
case CAMERA_DEVICE_API_VERSION_3_2:
case CAMERA_DEVICE_API_VERSION_3_3:
if (effectiveApiLevel == API_1) { // Camera1 API route
sp<ICameraClient> tmp = static_cast<ICameraClient*>(cameraCb.get());
*client = new Camera2Client(cameraService, tmp, packageName, id, facing,
clientPid, clientUid, servicePid, legacyMode);
} else { // Camera2 API route
sp<ICameraDeviceCallbacks> tmp =
static_cast<ICameraDeviceCallbacks*>(cameraCb.get());
*client = new CameraDeviceClient(cameraService, tmp, packageName, id,
facing, clientPid, clientUid, servicePid);
}
break;
default:
// Should not be reachable
ALOGE("Unknown camera device HAL version: %d", deviceVersion);
return INVALID_OPERATION;
}
} else {
// A particular HAL version is requested by caller. Create CameraClient
// based on the requested HAL version.
if (deviceVersion > CAMERA_DEVICE_API_VERSION_1_0 &&
halVersion == CAMERA_DEVICE_API_VERSION_1_0) {
// Only support higher HAL version device opened as HAL1.0 device.
sp<ICameraClient> tmp = static_cast<ICameraClient*>(cameraCb.get());
*client = new CameraClient(cameraService, tmp, packageName, id, facing,
clientPid, clientUid, servicePid, legacyMode);
} else {
// Other combinations (e.g. HAL3.x open as HAL2.x) are not supported yet.
ALOGE("Invalid camera HAL version %x: HAL %x device can only be"
" opened as HAL %x device", halVersion, deviceVersion,
CAMERA_DEVICE_API_VERSION_1_0);
return INVALID_OPERATION;
}
}
return NO_ERROR;
}
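The branching in makeClient reduces to a small decision table over (deviceVersion, apiLevel). The sketch below mirrors that table in plain Java; the client class names are the real ones, but the method shape and the version constants (HAL versions are packed as 0xMMmm integers) are illustrative rather than the AOSP code:

```java
// Toy model of CameraService::makeClient's dispatch (illustrative, not AOSP code).
// HAL device versions are packed as 0xMMmm, e.g. 3.2 -> 0x302.
class MakeClientDemo {
    static final int VERSION_1_0 = 0x100;
    static final int VERSION_3_2 = 0x302;

    // Returns the client class that would be instantiated for the combination.
    static String makeClient(int deviceVersion, int apiLevel) {
        if (deviceVersion == VERSION_1_0) {
            if (apiLevel == 1) return "CameraClient"; // old HAL + API1
            throw new UnsupportedOperationException("Camera2 API on a HAL1 device");
        }
        // HAL2/HAL3 devices:
        return (apiLevel == 1) ? "Camera2Client"       // API1 emulated on a new HAL
                               : "CameraDeviceClient"; // Camera2 API route
    }
}
```

For the API2-on-HAL3 path analyzed here, the table lands on CameraDeviceClient.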
It simply creates a client object. Since this analysis targets Camera API 2.0, whose HAL version is 3.0+, the device version is likewise 3.0+, so a CameraDeviceClient is created. With makeClient done and the client returned, look at its initialization:
//
status_t CameraDeviceClient::initialize(CameraModule *module)
{
ATRACE_CALL();
status_t res;
//Initialize the CameraModule through Camera2ClientBase's initialize
res = Camera2ClientBase::initialize(module);
if (res != OK) {
return res;
}
String8 threadName;
//Create the FrameProcessor
mFrameProcessor = new FrameProcessorBase(mDevice);
threadName = String8::format("CDU-%d-FrameProc", mCameraId);
mFrameProcessor->run(threadName.string());
//Register a listener; the listener implementation lives in CameraDeviceClient itself
mFrameProcessor->registerListener(FRAME_PROCESSOR_LISTENER_MIN_ID,
FRAME_PROCESSOR_LISTENER_MAX_ID, /*listener*/this, /*sendPartials*/true);
return OK;
}
It calls Camera2ClientBase's initialize method and also creates a FrameProcessor for frame handling, whose main job is to deliver each frame's ExtraResult, i.e. the 3A-related data, back to the application. The various Processor modules of Camera 1.0, which packaged data before returning it to the application, no longer exist; in Camera 2.0 the data is consumed directly by MediaRecorder, SurfaceView, ImageReader and the like, which is generally more efficient. Continue with initialize:
//
template <typename TClientBase>
status_t Camera2ClientBase<TClientBase>::initialize(CameraModule *module) {
...
//Call the device's initialize method
res = mDevice->initialize(module);
...
res = mDevice->setNotifyCallback(this);
return OK;
}
The code simply calls the device's initialize method; the device itself was created in Camera2ClientBase's constructor:
//
template <typename TClientBase>
Camera2ClientBase<TClientBase>::Camera2ClientBase(
const sp<CameraService>& cameraService, const sp<TCamCallbacks>& remoteCallback,
const String16& clientPackageName, int cameraId, int cameraFacing, int clientPid,
uid_t clientUid, int servicePid):
TClientBase(cameraService, remoteCallback, clientPackageName, cameraId, cameraFacing,
clientPid, clientUid, servicePid), mSharedCameraCallbacks(remoteCallback),
mDeviceVersion(cameraService->getDeviceVersion(cameraId))
{
...
mInitialClientPid = clientPid;
mDevice = CameraDeviceFactory::createDevice(cameraId);
...
}
The Camera API here is 2.0 while the device API is already 3.0+; continue with CameraDeviceFactory's createDevice method:
//
sp<CameraDeviceBase> CameraDeviceFactory::createDevice(int cameraId) {
sp<CameraService> svc = sService.promote();
...
int deviceVersion = svc->getDeviceVersion(cameraId, /*facing*/NULL);
sp<CameraDeviceBase> device;
switch (deviceVersion) {
case CAMERA_DEVICE_API_VERSION_2_0:
case CAMERA_DEVICE_API_VERSION_2_1:
device = new Camera2Device(cameraId);
break;
case CAMERA_DEVICE_API_VERSION_3_0:
case CAMERA_DEVICE_API_VERSION_3_1:
case CAMERA_DEVICE_API_VERSION_3_2:
case CAMERA_DEVICE_API_VERSION_3_3:
device = new Camera3Device(cameraId);
break;
default:
ALOGE("%s: Camera %d: Unknown HAL device version %d",
__FUNCTION__, cameraId, deviceVersion);
device = NULL;
break;
}
return device;
}
Clearly a Camera3Device object is created, so the device's initialize call resolves to Camera3Device::initialize:
//
status_t Camera3Device::initialize(CameraModule *module)
{
...
camera3_device_t *device;
//Open the camera HAL-layer device
res = module->open(deviceName.string(),
reinterpret_cast<hw_device_t**>(&device));
...
//Cross-check the device version
if (device->common.version < CAMERA_DEVICE_API_VERSION_3_0) {
SET_ERR_L("Could not open camera: "
"Camera device should be at least %x, reports %x instead",
CAMERA_DEVICE_API_VERSION_3_0,
device->common.version);
device->common.close(&device->common);
return BAD_VALUE;
}
...
//Initialize the opened device by calling its initialize op
res = device->ops->initialize(device, this);
...
//Start the request queue thread
mRequestThread = new RequestThread(this, mStatusTracker, device, aeLockAvailable);
res = mRequestThread->run(String8::format("C3Dev-%d-ReqQueue", mId).string());
if (res != OK) {
SET_ERR_L("Unable to start request queue thread: %s (%d)",
strerror(-res), res);
device->common.close(&device->common);
mRequestThread.clear();
return res;
}
...
//Initialization succeeded
return OK;
}
It first opens the corresponding HAL-layer device through the HAL framework (see the separate analysis of the Camera2 HAL for that flow), then invokes the opened device's initialize op, and finally starts the RequestThread and related threads before returning success. That concludes the initialization flow under Camera API 2.0. The framework-layer initialization sequence diagram follows:
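The version cross-check inside Camera3Device::initialize works because HAL versions are packed into an integer as 0xMMmm (major byte, minor byte), so "at least 3.0" is an ordinary integer compare. A hedged sketch (the packing helper is illustrative; only the 3.0 constant mirrors the HAL header value):

```java
// Sketch of the "device must be at least HAL 3.0" gate (illustrative names).
class HalVersionGate {
    static final int CAMERA_DEVICE_API_VERSION_3_0 = 0x300;

    static int makeVersion(int major, int minor) {
        return (major << 8) | minor; // 0xMMmm packing, as in the HAL headers
    }

    // Mirrors the check done right after module->open() returns a device.
    static boolean deviceUsable(int reportedVersion) {
        return reportedVersion >= CAMERA_DEVICE_API_VERSION_3_0;
    }
}
```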
IV. Android 6.0 source analysis: the preview flow under Camera API 2.0
Based on the Android 6.0 sources, this part analyzes the preview flow under Camera API 2.0. The earlier initialization analysis covered the built-in Camera2 app's open flow in detail; during open a preview callback was defined but not examined there. The open flow automatically starts the preview by calling OneCameraImpl's startPreview, which begins capturing and drawing preview frames onto the screen; the preview truly starts once a surface is provided.
1. Application-layer analysis of the Camera2 preview flow
The preview flow starts from startPreview, so look at the startPreview method:
//
@Override
public void startPreview(Surface previewSurface, CaptureReadyCallback listener) {
mPreviewSurface = previewSurface;
//Establish the preview environment from the Surface and the CaptureReadyCallback
setupAsync(mPreviewSurface, listener);
}
The important piece here is the CaptureReadyCallback; first, the setupAsync method:
//
private void setupAsync(final Surface previewSurface, final CaptureReadyCallback listener) {
mCameraHandler.post(new Runnable() {
@Override
public void run() {
//Establish the preview environment
setup(previewSurface, listener);
}
});
}
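The post pattern above is easy to model off-device: a single-threaded executor stands in for the Handler/HandlerThread pair, and a task posted to it runs on that dedicated thread rather than on the caller's. This is a sketch with stand-in names, not the android.os classes:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Stand-in for Handler.post onto a HandlerThread (illustrative only).
class PostDemo {
    static String runOnCameraThread() throws Exception {
        ExecutorService cameraHandler = Executors.newSingleThreadExecutor(
                r -> new Thread(r, "CameraHandlerThread"));
        try {
            // "post" returns immediately; the task runs on the handler's own thread.
            Future<String> name = cameraHandler.submit(
                    () -> Thread.currentThread().getName());
            return name.get();
        } finally {
            cameraHandler.shutdown();
        }
    }
}
```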
This posts a Runnable through mCameraHandler; post() merely schedules the Runnable, whose run method then executes on the camera handler thread created in init(), so no new thread is created here. Continue with the setup method:
//
private void setup(Surface previewSurface, final CaptureReadyCallback listener) {
try {
if (mCaptureSession != null) {
mCaptureSession.abortCaptures();
mCaptureSession = null;
}
List<Surface> outputSurfaces = new ArrayList<Surface>(2);
outputSurfaces.add(previewSurface);
outputSurfaces.add(mCaptureImageReader.getSurface());
//Create the CaptureSession used to send preview requests to the camera device
mDevice.createCaptureSession(outputSurfaces, new CameraCaptureSession.StateCallback() {
@Override
public void onConfigureFailed(CameraCaptureSession session) {
//If configuration fails, report through CaptureReadyCallback.onSetupFailed
listener.onSetupFailed();
}
@Override
public void onConfigured(CameraCaptureSession session) {
mCaptureSession = session;
mAFRegions = ZERO_WEIGHT_3A_REGION;
mAERegions = ZERO_WEIGHT_3A_REGION;
mZoomValue = 1f;
mCropRegion = cropRegionForZoom(mZoomValue);
//Start the preview via repeatingPreview
boolean success = repeatingPreview(null);
if (success) {
//On success, report ready-for-capture via CaptureReadyCallback.onReadyForCapture
listener.onReadyForCapture();
} else {
//On failure, report preview setup failure via CaptureReadyCallback.onSetupFailed
listener.onSetupFailed();
}
}
@Override
public void onClosed(CameraCaptureSession session) {
super.onClosed(session);
}
}, mCameraHandler);
} catch (CameraAccessException ex) {
Log.e(TAG, "Could not set up capture session", ex);
listener.onSetupFailed();
}
}
First, the device's createCaptureSession method creates a session, passing a CameraCaptureSession.StateCallback. When the session is configured successfully, onConfigured() runs: it calls repeatingPreview to start the preview and then reports the outcome through the previously defined CaptureReadyCallback, telling the user capture can proceed. First the repeatingPreview method:
//
private boolean repeatingPreview(Object tag) {
try {
//Create a preview CaptureRequest through the CameraDevice object
CaptureRequest.Builder builder = mDevice.createCaptureRequest(
CameraDevice.TEMPLATE_PREVIEW);
//Add the preview target Surface
builder.addTarget(mPreviewSurface);
//Set the preview control mode
builder.set(CaptureRequest.CONTROL_MODE, CaptureRequest.CONTROL_MODE_AUTO);
addBaselineCaptureKeysToRequest(builder);
//Send the request through the session; mCaptureCallback handles the per-frame results
mCaptureSession.setRepeatingRequest(builder.build(), mCaptureCallback, mCameraHandler);
Log.v(TAG, String.format("Sent repeating Preview request, zoom = %.2f", mZoomValue));
return true;
} catch (CameraAccessException ex) {
Log.e(TAG, "Could not access camera setting up preview.", ex);
return false;
}
}
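repeatingPreview leans on the builder pattern: create a builder from a template, add target surfaces, set control keys, then build an immutable request. The toy classes below reproduce just that shape; they are stand-ins, not the android.hardware.camera2 types:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy CaptureRequest builder mirroring the pattern used in repeatingPreview().
class RequestDemo {
    static final String TEMPLATE_PREVIEW = "preview";

    static class Request {
        final String template;
        final List<String> targets;         // target surfaces
        final Map<String, String> settings; // control keys
        Request(String template, List<String> targets, Map<String, String> settings) {
            this.template = template;
            this.targets = Collections.unmodifiableList(new ArrayList<>(targets));
            this.settings = Collections.unmodifiableMap(new HashMap<>(settings));
        }
    }

    static class Builder {
        private final String template;
        private final List<String> targets = new ArrayList<>();
        private final Map<String, String> settings = new HashMap<>();
        Builder(String template) { this.template = template; }
        Builder addTarget(String surface) { targets.add(surface); return this; }
        Builder set(String key, String value) { settings.put(key, value); return this; }
        Request build() { return new Request(template, targets, settings); }
    }
}
```

Building an immutable request from a mutable builder is what lets one builder safely produce many repeating requests.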
It first calls CameraDeviceImpl's createCaptureRequest to build a CaptureRequest of type TEMPLATE_PREVIEW, then sends it out via CameraCaptureSessionImpl's setRepeatingRequest method:
//
@Override
public synchronized int setRepeatingRequest(CaptureRequest request, CaptureCallback callback,
Handler handler) throws CameraAccessException {
if (request == null) {
throw new IllegalArgumentException("request must not be null");
} else if (request.isReprocess()) {
throw new IllegalArgumentException("repeating reprocess requests are not supported");
}
checkNotClosed();
handler = checkHandler(handler, callback);
...
//Add this request to the pending sequence list
return addPendingSequence(mDeviceImpl.setRepeatingRequest(request,
createCaptureCallbackProxy(handler, callback), mDeviceHandler));
}
That completes the application-layer request flow for the preview; now for the result handling. If the preview starts successfully, CaptureReadyCallback's onReadyForCapture method is invoked, so look at the CaptureReadyCallback:
//
new CaptureReadyCallback() {
@Override
public void onSetupFailed() {
mCameraOpenCloseLock.release();
Log.e(TAG, "Could not set up preview.");
mainThread.execute(new Runnable() {
@Override
public void run() {
if (mCamera == null) {
Log.d(TAG, "Camera closed, aborting.");
return;
}
mCamera.close();
mCamera = null;
}
});
}
@Override
public void onReadyForCapture() {
mCameraOpenCloseLock.release();
mainThread.execute(new Runnable() {
@Override
public void run() {
Log.d(TAG, "Ready for capture.");
if (mCamera == null) {
Log.d(TAG, "Camera closed, aborting.");
return;
}
//
onPreviewStarted();
onReadyStateChanged(true);
mCamera.setReadyStateChangedListener(...);
mUI.initializeZoom(mCamera.getMaxZoom());
mCamera.setFocusStateListener(...);
}
});
}
}
As analyzed above, once the preview succeeds, onReadyForCapture runs: it mainly notifies the main thread of the state change and installs the camera's ReadyStateChangedListener, whose callback is:
//
@Override
public void onReadyStateChanged(boolean readyForCapture) {
if (readyForCapture) {
mAppController.getCameraAppUI().enableModeOptions();
}
mAppController.setShutterEnabled(readyForCapture);
}
As the code shows, when the state becomes ready-for-capture, CameraActivity's setShutterEnabled is called, enabling the shutter button; in other words, the preview has started successfully and the shutter can be pressed to take a picture. That essentially completes the application-layer preview flow; the sequence diagram of the key application-layer calls follows:
2. Native-layer analysis of the Camera2 preview flow
Tracing the native preview code took considerable effort, so corrections are welcome if anything below is inaccurate. The end of the previous section submits the request through CameraDeviceImpl's setRepeatingRequest, and as covered in the Camera API 2.0 introduction, the Camera2 framework's Java IPC goes through ICameraDeviceUser, so look at how the native ICameraDeviceUser's onTransact method handles the submitted request:
//
status_t BnCameraDeviceUser::onTransact(uint32_t code, const Parcel& data, Parcel* reply,
uint32_t flags){
switch(code) {
…
//请求提交
case SUBMIT_REQUEST: {
CHECK_INTERFACE(ICameraDeviceUser, data, reply);
// arg0 = request
sp<CaptureRequest> request;
if (data.readInt32() != 0) {
request = new CaptureRequest();
request->readFromParcel(const_cast<Parcel*>(&data));
}
// arg1 = streaming (bool)
bool repeating = data.readInt32();
// return code: requestId (int32)
reply->writeNoException();
int64_t lastFrameNumber = -1;
//Call submitRequest on the object implementing BnCameraDeviceUser and write its result into the reply Parcel
reply->writeInt32(submitRequest(request, repeating, &lastFrameNumber));
reply->writeInt32(1);
reply->writeInt64(lastFrameNumber);
return NO_ERROR;
} break;
...
}
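Stripped of Parcel marshalling, onTransact is a switch on the transaction code that forwards to the local implementation and writes the return value back. A minimal sketch of that dispatch (names and codes are illustrative, not the real Binder machinery):

```java
// Shape of BnCameraDeviceUser::onTransact, minus Parcel marshalling (illustrative).
class TransactDemo {
    static final int SUBMIT_REQUEST = 1;

    interface DeviceUser {
        int submitRequest(String request, boolean repeating);
    }

    // The stub decodes the code and forwards to the local object.
    static int onTransact(DeviceUser impl, int code, String request, boolean repeating) {
        switch (code) {
            case SUBMIT_REQUEST:
                // the return value would be written into the reply Parcel
                return impl.submitRequest(request, repeating);
            default:
                throw new IllegalArgumentException("unknown transaction code " + code);
        }
    }
}
```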
CameraDeviceClientBase inherits BnCameraDeviceUser, i.e. it is the native server-side (Bn) end of this Binder interface, so its submitRequest method gets invoked. The IPC Binder mechanics themselves are not analyzed here; refer to other material:
//
status_t CameraDeviceClient::submitRequest(sp<CaptureRequest> request, bool streaming,
/*out*/int64_t* lastFrameNumber) {
List<sp<CaptureRequest> > requestList;
requestList.push_back(request);
return submitRequestList(requestList, streaming, lastFrameNumber);
}
A simple wrapper; continue with submitRequestList:
// CameraDeviceClient.cpp
status_t CameraDeviceClient::submitRequestList(List<sp<CaptureRequest> > requests, bool streaming,
/*out*/int64_t* lastFrameNumber) {
...
//List of request metadata
List<const CameraMetadata> metadataRequestList;
...
for (List<sp<CaptureRequest> >::iterator it = requests.begin(); it != requests.end(); ++it) {
sp<CaptureRequest> request = *it;
...
//Initialize the metadata
CameraMetadata metadata(request->mMetadata);
...
//Reserve capacity for the stream ids
Vector<int32_t> outputStreamIds;
outputStreamIds.setCapacity(request->mSurfaceList.size());
//Resolve each target Surface to its stream id
for (size_t i = 0; i < request->mSurfaceList.size(); ++i) {
sp<Surface> surface = request->mSurfaceList[i];
if (surface == 0) continue;
sp<IGraphicBufferProducer> gbp = surface->getIGraphicBufferProducer();
int idx = mStreamMap.indexOfKey(IInterface::asBinder(gbp));
...
int streamId = mStreamMap.valueAt(idx);
outputStreamIds.push_back(streamId);
}
//Update the metadata
metadata.update(ANDROID_REQUEST_OUTPUT_STREAMS, &outputStreamIds[0],
outputStreamIds.size());
if (request->mIsReprocess) {
metadata.update(ANDROID_REQUEST_INPUT_STREAMS, &mInputStream.id, 1);
}
metadata.update(ANDROID_REQUEST_ID, &requestId, /*size*/1);
loopCounter++; // loopCounter starts from 1
//Push the metadata onto the list
metadataRequestList.push_back(metadata);
}
mRequestIdCounter++;
if (streaming) {
//Preview takes this path
res = mDevice->setStreamingRequestList(metadataRequestList, lastFrameNumber);
if (res != OK) {
...
} else {
mStreamingRequestList.push_back(requestId);
}
} else {
//Capture and the like take this path
res = mDevice->captureList(metadataRequestList, lastFrameNumber);
if (res != OK) {
...
}
}
if (res == OK) {
return requestId;
}
return res;
}
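The fork on `streaming` above is the whole difference between preview and capture submission: a streaming (repeating) submission replaces the current repeating set, while a one-shot submission is queued exactly once. A sketch of that control flow (container names are stand-ins, not the AOSP fields):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Sketch of submitRequestList's streaming/one-shot fork (illustrative).
class SubmitDemo {
    final List<String> repeating = new ArrayList<>(); // preview path
    final Deque<String> oneShot = new ArrayDeque<>(); // capture path

    void submit(List<String> requests, boolean streaming) {
        if (streaming) {
            // setStreamingRequestList: the new repeating set replaces the old one
            repeating.clear();
            repeating.addAll(requests);
        } else {
            // captureList: each request is consumed exactly once
            oneShot.addAll(requests);
        }
    }
}
```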
Both setStreamingRequestList and captureList call submitRequestsHelper; they differ only in the repeating argument, true for one and false for the other. The preview analyzed here goes through setStreamingRequestList, and under API 2.0 the device implementation is Camera3Device, so look at its submitRequestsHelper:
//
status_t Camera3Device::submitRequestsHelper(const List<const CameraMetadata> &requests,
bool repeating, /*out*/int64_t *lastFrameNumber) {
...
RequestList requestList;
//Creates the CaptureRequests and configures the streams (configureStreamsLocked); this
//is also where the important captureResultCb callback, analyzed later, gets registered
res = convertMetadataListToRequestListLocked(requests, /*out*/&requestList);
...
if (repeating) {
//Looks familiar? Same name as CameraDeviceImpl's setRepeatingRequests at the application layer
res = mRequestThread->setRepeatingRequests(requestList, lastFrameNumber);
} else {
//Non-repeating (repeating == false): submit the requests once via this method
res = mRequestThread->queueRequestList(requestList, lastFrameNumber);
}
...
return res;
}
As the code shows, Camera3Device created a RequestThread and submits the requests sent from the application layer via its setRepeatingRequests or queueRequestList methods; continue with setRepeatingRequests:
//
status_t Camera3Device::RequestThread::setRepeatingRequests(const RequestList &requests,
/*out*/int64_t *lastFrameNumber) {
Mutex::Autolock l(mRequestLock);
if (lastFrameNumber != NULL) {
*lastFrameNumber = mRepeatingLastFrameNumber;
}
mRepeatingRequests.clear();
//Insert the requests into the mRepeatingRequests list
mRepeatingRequests.insert(mRepeatingRequests.end(),
requests.begin(), requests.end());
unpauseForNewRequests();
mRepeatingLastFrameNumber = NO_IN_FLIGHT_REPEATING_FRAMES;
return OK;
}
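Why does a single setRepeatingRequests call produce a continuous preview? Because the request thread's dequeue policy falls back to the repeating list whenever the one-shot queue is empty, re-issuing the same request indefinitely. A sketch of that policy (not the real threaded RequestThread; names are stand-ins):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Dequeue policy of the request thread, reduced to one method (illustrative).
class RequestLoopDemo {
    final Deque<String> oneShot = new ArrayDeque<>();
    final List<String> repeating = new ArrayList<>();

    String nextRequest() {
        if (!oneShot.isEmpty()) {
            return oneShot.poll();   // queued captures take priority
        }
        if (!repeating.isEmpty()) {
            return repeating.get(0); // replay the repeating (preview) request
        }
        return null;                 // nothing to do: the thread would pause
    }
}
```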
At this point the native-layer preview flow is essentially covered; the remaining work is handed over to the camera HAL layer. The native-layer call sequence diagram follows:
3. HAL-layer analysis of the Camera2 preview flow
This section does not revisit the HAL layer's initialization and configuration; it only follows the frame-metadata handling in the preview flow (see the separate Camera2 HAL analysis for the rest). In submitRequestsHelper above, convertMetadataListToRequestListLocked creates the CaptureRequests and the stream configuration registers the captureResultCb callback; after the request is submitted, the HAL invokes this captureResultCb, so start with captureResultCb:
//
void QCamera3HardwareInterface::captureResultCb(mm_camera_super_buf_t *metadata_buf,
camera3_stream_buffer_t *buffer, uint32_t frame_number)
{
if (metadata_buf) {
if (mBatchSize) {
//Batch mode; internally this also loops over handleMetadataWithLock
handleBatchMetadata(metadata_buf, true /* free_and_bufdone_meta_buf */);
} else { /* mBatchSize = 0 */
pthread_mutex_lock(&mMutex);
//Handle the metadata
handleMetadataWithLock(metadata_buf, true /* free_and_bufdone_meta_buf */);
pthread_mutex_unlock(&mMutex);
}
} else {
pthread_mutex_lock(&mMutex);
handleBufferWithLock(buffer, frame_number);
pthread_mutex_unlock(&mMutex);
}
return;
}
Either the metadata is batch-processed in a loop or handled directly, but even the batch path ends up calling handleMetadataWithLock in a loop:
//
void QCamera3HardwareInterface::handleMetadataWithLock(mm_camera_super_buf_t *metadata_buf,
bool free_and_bufdone_meta_buf){
...
//Partial result on process_capture_result for timestamp
if (urgent_frame_number_valid) {
...
for (List<PendingRequestInfo>::iterator i = mPendingRequestsList.begin();
i != mPendingRequestsList.end(); i++) {
...
if (i->frame_number == urgent_frame_number &&
i->bUrgentReceived == 0) {
camera3_capture_result_t result;
memset(&result, 0, sizeof(camera3_capture_result_t));
i->partial_result_cnt++;
i->bUrgentReceived = 1;
//Extract the 3A data
result.result = translateCbUrgentMetadataToResultMetadata(metadata);
...
//Process the capture result
mCallbackOps->process_capture_result(mCallbackOps, &result);
//Free the camera_metadata_t
free_camera_metadata((camera_metadata_t *)result.result);
break;
}
}
}
...
for (List<PendingRequestInfo>::iterator i = mPendingRequestsList.begin();
i != mPendingRequestsList.end() && i->frame_number <= frame_number;) {
camera3_capture_result_t result;
memset(&result, 0, sizeof(camera3_capture_result_t));
...
if (i->frame_number < frame_number) {
//Zero out the notify message structure
camera3_notify_msg_t notify_msg;
memset(&notify_msg, 0, sizeof(camera3_notify_msg_t));
//Set the message type
notify_msg.type = CAMERA3_MSG_SHUTTER;
notify_msg.message.shutter.frame_number = i->frame_number;
notify_msg.message.shutter.timestamp = (uint64_t)capture_time -
(urgent_frame_number - i->frame_number) * NSEC_PER_33MSEC;
//Notify the upper layer that a CAMERA3_MSG_SHUTTER message occurred
mCallbackOps->notify(mCallbackOps, &notify_msg);
...
CameraMetadata dummyMetadata;
//Update the metadata
dummyMetadata.update(ANDROID_SENSOR_TIMESTAMP,
&i->timestamp, 1);
dummyMetadata.update(ANDROID_REQUEST_ID,
&(i->request_id), 1);
//Release the metadata into the result
result.result = dummyMetadata.release();
} else {
camera3_notify_msg_t notify_msg;
memset(¬ify_msg, 0, sizeof(camera3_notify_msg_t));
// Send shutter notify to frameworks
notify_msg.type = CAMERA3_MSG_SHUTTER;
...
//Translate the metadata coming from the HAL
result.result = translateFromHalMetadata(metadata,
i->timestamp, i->request_id, i->jpegMetadata, i->pipeline_depth,
i->capture_intent);
saveExifParams(metadata);
if (i->blob_request) {
...
if (enabled && metadata->is_tuning_params_valid) {
//Dump the metadata to a file
dumpMetadataToFile(metadata->tuning_params, mMetaFrameCount, enabled,
"Snapshot",frame_number);
}
mPictureChannel->queueReprocMetadata(metadata_buf);
} else {
// Return metadata buffer
if (free_and_bufdone_meta_buf) {
mMetadataChannel->bufDone(metadata_buf);
free(metadata_buf);
}
}
}
...
}
}
It first calls the callback's process_capture_result method to handle the capture result and then calls the callback's notify method to send a CAMERA3_MSG_SHUTTER message. The implementation behind process_capture_result is Camera3Device's processCaptureResult method, analyzed first:
//
void Camera3Device::processCaptureResult(const camera3_capture_result *result) {
...
//For HAL3.2+, if the HAL does not support partial results, it must set partial_result to 1 whenever metadata is included in the result
...
{
Mutex::Autolock l(mInFlightLock);
ssize_t idx = mInFlightMap.indexOfKey(frameNumber);
...
InFlightRequest &request = mInFlightMap.editValueAt(idx);
if (result->partial_result != 0)
request.resultExtras.partialResultCount = result->partial_result;
// Check whether the result contains only partial metadata
if (mUsePartialResult && result->result != NULL) {
if (mDeviceVersion >= CAMERA_DEVICE_API_VERSION_3_2) { //HAL version 3.2 or above
if (result->partial_result > mNumPartialResults || result->partial_result < 1) {
//Log the error
return;
}
isPartialResult = (result->partial_result < mNumPartialResults);
if (isPartialResult) {
//Append the result to the request's collected partial results
request.partialResult.collectedResult.append(result->result);
}
} else { //below 3.2
...
}
if (isPartialResult) {
// Fire off a 3A-only result if possible
if (!request.partialResult.haveSent3A) {
request.partialResult.haveSent3A = processPartial3AResult(frameNumber,
request.partialResult.collectedResult, request.resultExtras);
}
}
}
...
if (result->result != NULL && !isPartialResult) {
if (shutterTimestamp == 0) {
request.pendingMetadata = result->result;
request.collectedPartialResult = collectedPartialResult;
} else {
CameraMetadata metadata;
metadata = result->result;
//Send the capture result
sendCaptureResult(metadata, request.resultExtras, collectedPartialResult,
frameNumber, hasInputBufferInRequest, request.aeTriggerCancelOverride);
}
}
//The result has been handled; remove the in-flight request
removeInFlightRequestIfReadyLocked(idx);
} // scope for mInFlightLock
...
}
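The partial-result bookkeeping above can be condensed: each frame accumulates partial metadata until the final piece (partial_result == mNumPartialResults) arrives, and only then is a complete result assembled and forwarded. A sketch with plain maps standing in for CameraMetadata (all names here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Accumulates partial results per frame until the final partial completes it (illustrative).
class PartialResultDemo {
    final int numPartialResults;
    private final Map<Long, Map<String, String>> collected = new HashMap<>();

    PartialResultDemo(int numPartialResults) {
        this.numPartialResults = numPartialResults;
    }

    // Returns the assembled metadata once the frame is complete, else null.
    Map<String, String> onResult(long frame, int partialResult, Map<String, String> metadata) {
        Map<String, String> acc = collected.computeIfAbsent(frame, f -> new HashMap<>());
        acc.putAll(metadata);
        if (partialResult < numPartialResults) {
            return null; // still partial: keep collecting
        }
        return collected.remove(frame); // complete: hand off to sendCaptureResult
    }
}
```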
As the code shows, it handles both partial and complete metadata; in the end, if the result is non-null and the request's full data has been received, sendCaptureResult is called to send the request's result onward:
//
void Camera3Device::sendCaptureResult(CameraMetadata &pendingMetadata,
CaptureResultExtras &resultExtras, CameraMetadata &collectedPartialResult,
uint32_t frameNumber, bool reprocess,
const AeTriggerCancelOverride_t &aeTriggerCancelOverride) {
if (pendingMetadata.isEmpty()) //Nothing to send: return directly
return;
...
CaptureResult captureResult;
captureResult.mResultExtras = resultExtras;
captureResult.mMetadata = pendingMetadata;
//Update the frame number in the metadata
if (captureResult.mMetadata.update(ANDROID_REQUEST_FRAME_COUNT,
(int32_t*)&frameNumber, 1) != OK) {
SET_ERR("Failed to set frame# in metadata (%d)", frameNumber);
return;
} else {
...
}
// Append any previous partials to form a complete result
if (mUsePartialResult && !collectedPartialResult.isEmpty()) {
captureResult.mMetadata.append(collectedPartialResult);
}
//Sort the metadata
captureResult.mMetadata.sort();
// Check that there's a timestamp in the result metadata
camera_metadata_entry entry = captureResult.mMetadata.find(ANDROID_SENSOR_TIMESTAMP);
...
overrideResultForPrecaptureCancel(&captureResult.mMetadata, aeTriggerCancelOverride);
//Valid result: insert it into the result queue
List<CaptureResult>::iterator queuedResult = mResultQueue.insert(
mResultQueue.end(), CaptureResult(captureResult));
...
mResultSignal.signal();
}
Finally, it inserts the CaptureResult into the result queue and signals the result condition, so at this point the capture result has been handled successfully; the next step would be analyzing the notify path that sends the CAMERA3_MSG_SHUTTER message mentioned earlier.
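The queue-then-signal handoff at the end of sendCaptureResult pairs naturally with the waiting frame processor. In Java, a BlockingQueue expresses the same mResultQueue + mResultSignal combination in one object (a sketch, not the native implementation):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Result queue with built-in signalling (illustrative stand-in for the native pair).
class ResultQueueDemo {
    private final BlockingQueue<String> results = new LinkedBlockingQueue<>();

    // Producer side: insert the result and wake any waiter (sendCaptureResult's ending).
    void sendCaptureResult(String result) {
        results.add(result);
    }

    // Consumer side: block until a result is available.
    String waitForResult() throws InterruptedException {
        return results.take();
    }
}
```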