How do I manipulate the camera preview?
There are several tutorials out there that explain how to get a simple camera preview up and running on an Android device, but I couldn't find any example that explains how to manipulate the image before it gets rendered. What I want to do is implement custom color filters to simulate, for example, red and/or green deficiency.

Solution
I did some research on this and put together a working example. Here's what I found. Getting hold of the raw data coming off the camera is very easy: it is returned as a YUV byte array. To modify it, you need to draw it onto a surface manually, and for that you need a SurfaceView against which you can issue your own draw calls. There are a couple of flags you can set to achieve that.

To do the draw call manually you need to convert the byte array into a bitmap of some sort. At this point Bitmap and BitmapDecoder don't seem to handle YUV byte arrays very well. A bug has been filed for it, but I don't know what its status is. So people have been trying to decode the byte array into an RGB format themselves.

Decoding by hand seems to be a bit slow, and people have had varying degrees of success with it. Something like this should probably really be done with native code at the NDK level.
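For reference, the YUV format handed to the preview callback on most devices is NV21 (YCbCr_420_SP): a full-resolution Y (luma) plane followed by a half-resolution interleaved V/U plane, giving width * height * 3 / 2 bytes per frame. A minimal, Android-free sketch of that layout (the frame dimensions are chosen purely for illustration):

```java
public class Nv21Layout {
    // Size in bytes of an NV21 frame: a width*height luma (Y) plane
    // plus one interleaved V/U byte pair per 2x2 block of pixels.
    static int nv21BufferSize(int width, int height) {
        return width * height + 2 * ((width + 1) / 2) * ((height + 1) / 2);
    }

    // Offset of the V byte for pixel (x, y): the chroma plane starts
    // right after the Y plane and is subsampled 2x in both directions.
    static int chromaOffset(int width, int height, int x, int y) {
        int chromaRowStride = 2 * ((width + 1) / 2);
        return width * height + (y / 2) * chromaRowStride + (x / 2) * 2;
    }

    public static void main(String[] args) {
        System.out.println(nv21BufferSize(240, 160));     // 57600 = 240 * 160 * 3 / 2
        System.out.println(chromaOffset(240, 160, 0, 0)); // 38400, right after the Y plane
    }
}
```

This is also why the demo below can treat the first width * height bytes as a grayscale image and ignore the chroma plane entirely.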
Still, it is possible to get it working. Also, my little demo is just me spending a couple of hours hacking stuff together (I guess doing this caught my imagination a bit ;) ). So with some tweaking you could probably improve a lot on what I've managed to get working.

This little code snippet also contains a couple of other gems I found. If you only want to draw over the surface, you can override the surface's onDraw function: you can analyse the returned camera image and draw an overlay on top of it, which is much faster than trying to process every frame. Also, I set SurfaceHolder.SURFACE_TYPE_NORMAL, which is what manual drawing needs; in case you want the real camera preview to show instead, a couple of changes to the code are needed. The commented-out code:
// try { mCamera.setPreviewDisplay(holder); } catch (IOException e)
// { Log.e("Camera", "mCamera.setPreviewDisplay(holder);"); }
And:
SurfaceHolder.SURFACE_TYPE_NORMAL //SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS - for preview to work
should allow you to overlay frames on top of the actual camera preview.
Anyway, here is a working piece of code. It should give you something to start with.
Just put a line like this in one of your views:
<pathtocustomview.MySurfaceView android:id="@+id/surface_camera"
    android:layout_width="fill_parent" android:layout_height="10dip"
    android:layout_weight="1">
</pathtocustomview.MySurfaceView>
And include this class somewhere in your source:
package pathtocustomview;
import java.io.IOException;
import java.nio.Buffer;
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Rect;
import android.hardware.Camera;
import android.util.AttributeSet;
import android.util.Log;
import android.view.SurfaceHolder;
import android.view.SurfaceHolder.Callback;
import android.view.SurfaceView;
public class MySurfaceView extends SurfaceView implements Callback,Camera.PreviewCallback {
private SurfaceHolder mHolder;
private Camera mCamera;
private boolean isPreviewRunning = false;
private byte [] rgbbuffer = new byte[256 * 256];
private int [] rgbints = new int[256 * 256];
protected final Paint rectanglePaint = new Paint();
public MySurfaceView(Context context,AttributeSet attrs) {
super(context,attrs);
rectanglePaint.setARGB(100, 200, 0, 0);
rectanglePaint.setStyle(Paint.Style.FILL);
rectanglePaint.setStrokeWidth(2);
mHolder = getHolder();
mHolder.addCallback(this);
mHolder.setType(SurfaceHolder.SURFACE_TYPE_NORMAL);
}
@Override
protected void onDraw(Canvas canvas) {
canvas.drawRect(new Rect((int) (Math.random() * 100), (int) (Math.random() * 100), 200, 200), rectanglePaint);
Log.w(this.getClass().getName(), "On Draw Called");
}
public void surfaceChanged(SurfaceHolder holder,int format,int width,int height) {
}
public void surfaceCreated(SurfaceHolder holder) {
synchronized (this) {
this.setWillNotDraw(false); // This allows us to make our own draw
// calls to this canvas
mCamera = Camera.open();
Camera.Parameters p = mCamera.getParameters();
p.setPreviewSize(240,160);
mCamera.setParameters(p);
// try { mCamera.setPreviewDisplay(holder); } catch (IOException e)
// { Log.e("Camera", "mCamera.setPreviewDisplay(holder);"); }
mCamera.startPreview();
mCamera.setPreviewCallback(this);
}
}
public void surfaceDestroyed(SurfaceHolder holder) {
synchronized (this) {
try {
if (mCamera != null) {
mCamera.stopPreview();
isPreviewRunning = false;
mCamera.release();
}
} catch (Exception e) {
Log.e("Camera", e.getMessage());
}
}
}
public void onPreviewFrame(byte[] data,Camera camera) {
Log.d("Camera", "Got a camera frame");
Canvas c = null;
if(mHolder == null){
return;
}
try {
synchronized (mHolder) {
c = mHolder.lockCanvas(null);
// Do your drawing here
// So this data value you're getting back is formatted in YUV format and you can't do much
// with it until you convert it to rgb
int bwCounter=0;
int yuvsCounter=0;
for (int y=0;y<160;y++) {
System.arraycopy(data,yuvsCounter,rgbbuffer,bwCounter,240);
yuvsCounter=yuvsCounter+240;
bwCounter=bwCounter+256;
}
for(int i = 0; i < rgbints.length; i++){
rgbints[i] = (int)rgbbuffer[i];
}
//decodeYUV(rgbbuffer,data,100,100);
c.drawBitmap(rgbints, 0, 256, 0, 0, 256, 256, false, new Paint());
Log.d("SOMETHING", "Got Bitmap");
}
} finally {
// do this in a finally so that if an exception is thrown
// during the above, we don't leave the Surface in an
// inconsistent state
if (c != null) {
mHolder.unlockCanvasAndPost(c);
}
}
}
}
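As an aside, the row-copy loop in onPreviewFrame above is just repacking the 240-byte-wide Y rows into a buffer with a 256-byte row stride, since drawBitmap takes the stride as a separate argument. The idea in isolation, with sizes shrunk for illustration (this is a sketch of the technique, not part of the original answer):

```java
import java.util.Arrays;

public class StrideCopy {
    // Copy a tightly packed src image (width bytes per row) into dst,
    // which uses a larger row stride, leaving the padding untouched.
    static void repack(byte[] src, byte[] dst, int width, int height, int stride) {
        for (int y = 0; y < height; y++) {
            System.arraycopy(src, y * width, dst, y * stride, width);
        }
    }

    public static void main(String[] args) {
        byte[] src = { 1, 2, 3, 4, 5, 6 }; // a 3x2 image, tightly packed
        byte[] dst = new byte[8];          // stride 4: one pad byte per row
        repack(src, dst, 3, 2, 4);
        System.out.println(Arrays.toString(dst)); // [1, 2, 3, 0, 4, 5, 6, 0]
    }
}
```

In the answer above, width is 240, stride is 256, and height is 160, which is why the rgbbuffer is sized 256 * 256.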
I used walta's solution, but I ran into some problems with the YUV conversion, the camera frame output size, and a crash on camera release.

Finally, the following code worked for me:
import java.util.List;
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.ImageFormat;
import android.graphics.PixelFormat;
import android.graphics.PorterDuff.Mode;
import android.hardware.Camera;
import android.hardware.Camera.Parameters;
import android.hardware.Camera.Size;
import android.util.AttributeSet;
import android.util.Log;
import android.view.SurfaceHolder;
import android.view.SurfaceHolder.Callback;
import android.view.SurfaceView;

public class MySurfaceView extends SurfaceView implements Callback, Camera.PreviewCallback {
private static final String TAG = "MySurfaceView";
private int width;
private int height;
private SurfaceHolder mHolder;
private Camera mCamera;
private int[] rgbints;
private boolean isPreviewRunning = false;
private int mMultiplyColor;
public MySurfaceView(Context context, AttributeSet attrs) {
    super(context, attrs);
mHolder = getHolder();
mHolder.addCallback(this);
mMultiplyColor = getResources().getColor(R.color.multiply_color);
}
// @Override
// protected void onDraw(Canvas canvas) {
// Log.w(this.getClass().getName(), "On Draw Called");
// }
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
}
@Override
public void surfaceCreated(SurfaceHolder holder) {
synchronized (this) {
if (isPreviewRunning)
return;
this.setWillNotDraw(false); // This allows us to make our own draw calls to this canvas
mCamera = Camera.open();
isPreviewRunning = true;
Camera.Parameters p = mCamera.getParameters();
Size size = p.getPreviewSize();
width = size.width;
height = size.height;
p.setPreviewFormat(ImageFormat.NV21);
showSupportedCameraFormats(p);
mCamera.setParameters(p);
rgbints = new int[width * height];
// try { mCamera.setPreviewDisplay(holder); } catch (IOException e)
// { Log.e("Camera", "mCamera.setPreviewDisplay(holder);"); }
mCamera.startPreview();
mCamera.setPreviewCallback(this);
}
}
@Override
public void surfaceDestroyed(SurfaceHolder holder) {
synchronized (this) {
try {
if (mCamera != null) {
//mHolder.removeCallback(this);
mCamera.setPreviewCallback(null);
mCamera.stopPreview();
isPreviewRunning = false;
mCamera.release();
}
} catch (Exception e) {
Log.e("Camera", e.getMessage());
}
}
}
@Override
public void onPreviewFrame(byte[] data,Camera camera) {
// Log.d("Camera", "Got a camera frame");
if (!isPreviewRunning)
return;
Canvas canvas = null;
if (mHolder == null) {
return;
}
try {
synchronized (mHolder) {
canvas = mHolder.lockCanvas(null);
int canvasWidth = canvas.getWidth();
int canvasHeight = canvas.getHeight();
decodeYUV(rgbints, data, width, height);
// draw the decoded image, centered on canvas
canvas.drawBitmap(rgbints, 0, width, canvasWidth - ((width + canvasWidth) >> 1), canvasHeight - ((height + canvasHeight) >> 1), width, height, false, null);
// use some color filter
canvas.drawColor(mMultiplyColor,Mode.MULTIPLY);
}
} catch (Exception e){
e.printStackTrace();
} finally {
// do this in a finally so that if an exception is thrown
// during the above, we don't leave the Surface in an
// inconsistent state
if (canvas != null) {
mHolder.unlockCanvasAndPost(canvas);
}
}
}
/**
 * Decodes a YUV frame into a buffer which can be used to create a bitmap.
 * Use this for OS < FROYO, which has no native YUV decoder. Decodes the Y, U
 * and V values of the YUV 420 buffer described as YCbCr_422_SP by Android.
 *
 * @param out
 *            the outgoing array of RGB pixels
 * @param fg
 *            the incoming frame bytes
 * @param width
 *            of the source frame
 * @param height
 *            of the source frame
 * @throws NullPointerException
 * @throws IllegalArgumentException
 */
public void decodeYUV(int[] out, byte[] fg, int width, int height) throws NullPointerException, IllegalArgumentException {
int sz = width * height;
if (out == null)
    throw new NullPointerException("buffer out is null");
if (out.length < sz)
    throw new IllegalArgumentException("buffer out size " + out.length + " < minimum " + sz);
if (fg == null)
    throw new NullPointerException("buffer 'fg' is null");
if (fg.length < sz * 3 / 2)
    throw new IllegalArgumentException("buffer fg size " + fg.length + " < minimum " + sz * 3 / 2);
int i,j;
int Y,Cr = 0,Cb = 0;
for (j = 0; j < height; j++) {
int pixPtr = j * width;
final int jDiv2 = j >> 1;
for (i = 0; i < width; i++) {
Y = fg[pixPtr];
if (Y < 0)
Y += 255;
if ((i & 0x1) != 1) {
final int cOff = sz + jDiv2 * width + (i >> 1) * 2;
Cb = fg[cOff];
if (Cb < 0)
Cb += 127;
else
Cb -= 128;
Cr = fg[cOff + 1];
if (Cr < 0)
Cr += 127;
else
Cr -= 128;
}
int R = Y + Cr + (Cr >> 2) + (Cr >> 3) + (Cr >> 5);
if (R < 0)
R = 0;
else if (R > 255)
R = 255;
int G = Y - (Cb >> 2) + (Cb >> 4) + (Cb >> 5) - (Cr >> 1) + (Cr >> 3) + (Cr >> 4) + (Cr >> 5);
if (G < 0)
G = 0;
else if (G > 255)
G = 255;
int B = Y + Cb + (Cb >> 1) + (Cb >> 2) + (Cb >> 6);
if (B < 0)
B = 0;
else if (B > 255)
B = 255;
out[pixPtr++] = 0xff000000 + (B << 16) + (G << 8) + R;
}
}
}
private void showSupportedCameraFormats(Parameters p) {
List<Integer> supportedPictureFormats = p.getSupportedPreviewFormats();
Log.d(TAG, "preview format: " + cameraFormatIntToString(p.getPreviewFormat()));
for (Integer x : supportedPictureFormats) {
    Log.d(TAG, "supported format: " + cameraFormatIntToString(x.intValue()));
}
}
private String cameraFormatIntToString(int format) {
switch (format) {
case PixelFormat.JPEG:
    return "JPEG";
case PixelFormat.YCbCr_420_SP:
    return "NV21";
case PixelFormat.YCbCr_422_I:
    return "YUY2";
case PixelFormat.YCbCr_422_SP:
    return "NV16";
case PixelFormat.RGB_565:
    return "RGB_565";
default:
    return "Unknown: " + format;
}
}
}
To use it, run the following code in your Activity's onCreate:
SurfaceView surfaceView = new MySurfaceView(this,null);
RelativeLayout.LayoutParams layoutParams = new RelativeLayout.LayoutParams(RelativeLayout.LayoutParams.MATCH_PARENT,RelativeLayout.LayoutParams.MATCH_PARENT);
surfaceView.setLayoutParams(layoutParams);
mRelativeLayout.addView(surfaceView);
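If you want to sanity-check the decodeYUV arithmetic without deploying to a device, the same integer approximation can be exercised on a single pixel in plain Java. The sketch below restates the per-pixel math from the method above; note that it deliberately keeps the original's channel packing, which puts B in the high color byte:

```java
public class YuvPixel {
    // The same integer approximation used in decodeYUV above, for a
    // single pixel. y is unsigned luma 0..255; cb/cr are signed chroma
    // values already centered on 0 (0 means neutral chroma, i.e. gray).
    static int yuvToArgb(int y, int cb, int cr) {
        int r = clamp(y + cr + (cr >> 2) + (cr >> 3) + (cr >> 5));
        int g = clamp(y - (cb >> 2) + (cb >> 4) + (cb >> 5)
                        - (cr >> 1) + (cr >> 3) + (cr >> 4) + (cr >> 5));
        int b = clamp(y + cb + (cb >> 1) + (cb >> 2) + (cb >> 6));
        // Pack B into the high color byte, exactly as the original does.
        return 0xff000000 | (b << 16) | (g << 8) | r;
    }

    static int clamp(int v) {
        return v < 0 ? 0 : (v > 255 ? 255 : v);
    }

    public static void main(String[] args) {
        // Neutral chroma: every channel should come out equal to Y.
        System.out.println(Integer.toHexString(yuvToArgb(128, 0, 0))); // ff808080
    }
}
```

A neutral-chroma pixel decoding to pure gray is a quick way to confirm the sign handling and shifts survived any edits you make.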
Have you looked at GPUImage?

It originally started out as an OSX/iOS library made by Brad Larson, existing as an Objective-C wrapper around OpenGL/ES.

https://github.com/BradLarson/GPUImage

The folks at CyberAgent have made an Android port (which doesn't have complete feature parity); it's a set of Java wrappers on top of OpenGL ES. It is relatively high level, pretty easy to implement, and has a lot of the same functionality mentioned above...
https://github.com/CyberAgent/android-gpuimage
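Coming back to the original question: whichever route you take, manual drawing or a GPU filter, simulating red/green deficiency boils down to a per-pixel linear transform of the RGB channels. A toy plain-Java version follows; the mixing matrix is purely illustrative and not a clinically accurate protanopia/deuteranopia model (real simulations work in a linearized LMS color space):

```java
public class DeficiencyFilter {
    // Illustrative 3x3 mix: collapse R and G toward their average,
    // mimicking reduced red/green discrimination. Rows produce r', g', b'.
    static final double[][] MIX = {
        { 0.5, 0.5, 0.0 },
        { 0.5, 0.5, 0.0 },
        { 0.0, 0.0, 1.0 },
    };

    // Apply the matrix to one ARGB pixel, preserving alpha.
    static int apply(int argb) {
        int r = (argb >> 16) & 0xff, g = (argb >> 8) & 0xff, b = argb & 0xff;
        int[] in = { r, g, b }, out = new int[3];
        for (int i = 0; i < 3; i++) {
            double v = 0;
            for (int j = 0; j < 3; j++) v += MIX[i][j] * in[j];
            out[i] = Math.min(255, Math.max(0, (int) Math.round(v)));
        }
        return (argb & 0xff000000) | (out[0] << 16) | (out[1] << 8) | out[2];
    }

    public static void main(String[] args) {
        // Pure red and pure green collapse to the same mid tone.
        System.out.println(Integer.toHexString(apply(0xffff0000))); // ff808000
        System.out.println(Integer.toHexString(apply(0xff00ff00))); // ff808000
    }
}
```

Applied to every pixel of the decoded rgbints array (or expressed as a fragment shader in a GPUImage filter), this is the whole filter.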