Android VR Player (panoramic video player) [10]: implementing VR panoramic video rendering and playback (ExoPlayer, GLSurfaceView, OpenGL ES)
Most of this post comes from my graduation thesis, so the language is a little formal; questions and suggestions are welcome in the comments. My thanks to senior Nie from the lab: the render part of the panoramic video player was designed largely with reference to code he wrote, and his explanation of the video rendering process gave me a much better understanding of it.
To play MPEG-DASH videos I use ExoPlayer as the player instead of the MediaPlayer used previously; see the earlier posts in this series if needed.
Introduction to using GLSurfaceView
Because OpenGL ES is needed to render VR video, GLSurfaceView is used. It is an implementation of SurfaceView that displays OpenGL rendering on a dedicated surface. Its main characteristics are:
1. It manages a surface (a special block of memory that can be composited into the Android view system);
2. It manages an EGL display, which lets OpenGL render into that surface;
3. It accepts a user-provided renderer to do the actual drawing, and runs rendering on a dedicated thread;
4. It supports both on-demand rendering and continuous rendering.
These characteristics make GLSurfaceView well suited to the rendering work here.
Using GLSurfaceView involves the following pieces of work.
First, initialization, i.e. installing a renderer with setRenderer(Renderer). In more detail:
1. Customize android.view.Surface: GLSurfaceView creates a surface with the PixelFormat.RGB_888 pixel format by default. If another format is needed, such as a translucent one, call getHolder().setFormat(PixelFormat.TRANSLUCENT).
2. Choose an EGL configuration: an Android device may support several EGL configurations, differing for example in the number of channels and the bits per channel, so the EGL configuration must be specified before the renderer is set. The default is RGB channels with a 16-bit depth buffer; it can be changed while initializing the GLSurfaceView by calling setEGLConfigChooser(EGLConfigChooser).
3. Debug options (optional): setDebugFlags(int) and setGLWrapper(GLSurfaceView.GLWrapper) control GLSurfaceView's debugging behavior.
4. Set the renderer: the last initialization step is to register a renderer with setRenderer(GLSurfaceView.Renderer). The actual OpenGL rendering is done by this renderer.
5. Rendering mode: as noted in the list of characteristics above, GLSurfaceView supports on-demand and continuous rendering; after setting the renderer, choose the mode with setRenderMode(int). A minimal sketch of these steps follows the list.
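The sketch below pulls the initialization steps together in one constructor. It is only an illustration: the class and renderer names follow the ones used later in this post, and it assumes SphereVideoRenderer has a no-argument constructor, which the post does not confirm.

import android.content.Context;
import android.graphics.PixelFormat;
import android.opengl.GLSurfaceView;

public class MyGLSurfaceView extends GLSurfaceView {
    public MyGLSurfaceView(Context context) {
        super(context);
        // 1. optional: ask for a translucent surface instead of the default format
        getHolder().setFormat(PixelFormat.TRANSLUCENT);
        // 2. request an OpenGL ES 2.0 context and an RGBA_8888 config with a 16-bit depth buffer
        setEGLContextClientVersion(2);
        setEGLConfigChooser(8, 8, 8, 8, 16, 0);
        // 4. register the renderer that does the actual OpenGL work
        setRenderer(new SphereVideoRenderer());
        // 5. continuous rendering; use RENDERMODE_WHEN_DIRTY for on-demand rendering
        setRenderMode(GLSurfaceView.RENDERMODE_CONTINUOUSLY);
    }
}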
The other two parts are the Activity lifecycle and event handling.
Activity lifecycle: GLSurfaceView must be notified when the Activity window is paused or resumed by calling its onPause and onResume methods. GLSurfaceView is a heavyweight view, and pausing and resuming the render thread lets it release and recreate OpenGL ES resources promptly.
Event handling: as with other Views, events are handled by subclassing GLSurfaceView and overriding its event methods. Event handling may need to communicate with the render thread that owns the rendered objects; queueEvent(Runnable) simplifies this, although any standard cross-thread communication mechanism also works. A short sketch follows.
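A minimal sketch of both points, assuming an Activity that holds a glSurfaceView field and a renderer field with a hypothetical setAngle() method (names are illustrative, not the exact ones used in this project):

@Override
protected void onPause() {
    super.onPause();
    glSurfaceView.onPause();    // pauses the render thread and releases GL resources
}

@Override
protected void onResume() {
    super.onResume();
    glSurfaceView.onResume();
}

// From an event handler on the UI thread, hand work to the render thread:
glSurfaceView.queueEvent(new Runnable() {
    @Override public void run() {
        renderer.setAngle(newAngle);    // hypothetical renderer method
    }
});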
Rendering VR video
Rendering VR video boils down to two main tasks: drawing a sphere and mapping the video texture onto it. The implementation details are more involved; for example, after texturing, projection and camera-view matrices and viewport clipping are needed so that the viewer can freely change the viewing direction. The complete rendering and playback pipeline for VR panoramic video is shown in Figure 1.
Creating the GLSurfaceView object
As noted in the GLSurfaceView introduction, the real OpenGL work is done by the renderer, and GLSurfaceView itself does relatively little. According to the Android developer guide, GLSurfaceView can be used directly, but because this app needs event handling it creates its own MyGLSurfaceView, which extends GLSurfaceView.
Creating the GLSurfaceView.Renderer class
GLSurfaceView.Renderer does the rendering for the GLSurfaceView, mainly through three methods: onSurfaceCreated(), called once when the GLSurfaceView is created, where initialization such as setting up shapes and compiling shaders usually happens; onDrawFrame(), called for every frame, where the actual drawing is done; and onSurfaceChanged(), called when the surface geometry changes, for example when the screen size changes.
This app creates a SphereVideoRenderer, an implementation of GLSurfaceView.Renderer. In the OpenGL ES 2.0 programmable pipeline, drawing a shape and getting it onto the view (into the frame buffer) goes through these stages: preparing vertices (the input to the vertex shader), vertex shading, primitive assembly, rasterization, and per-fragment operations. The detailed steps of the VR video rendering are described below.
Preparing the data
The first step is to prepare the vertex data for the sphere, i.e. the Cartesian coordinates of all vertices needed to draw it. A function initSphereCoords() computes the sphere's vertex coordinates; the buffers used for drawing are also prepared in this function.
From the relationship between spherical and Cartesian coordinates (Figure 2), the conversion formulas are:
x = r * sinθ * cosφ
y = r * sinθ * sinφ
z = r * cosθ
Because triangles facing the same direction must be drawn consecutively, the draw call GLES20.glDrawArrays uses GLES20.GL_TRIANGLE_STRIP as the primitive type; with a triangle strip, the vertices in the buffer are connected into triangles as V0V1V2, V1V2V3, V2V3V4, and so on.
The sphere vertex coordinates are computed in initSphereCoords() with the method shown in the pseudocode below (the computation is illustrated in Figure 3):
for (theta = 0; theta <= PAI; theta += thetaStep)
    for (phi = 0; phi <= 2*PAI; phi += phiStep)
        spherePoint[pointer++].x = r * sin(theta) * cos(phi);
        spherePoint[pointer++].y = r * sin(theta) * sin(phi);
        spherePoint[pointer++].z = r * cos(theta);
        spherePoint[pointer++].x = r * sin(theta+thetaStep) * cos(phi);
        spherePoint[pointer++].y = r * sin(theta+thetaStep) * sin(phi);
        spherePoint[pointer++].z = r * cos(theta+thetaStep);
Here r is the radius of the sphere (set to 5), and theta and phi are the spherical coordinates of a point on the sphere's surface; thetaStep and phiStep are step sizes chosen according to the required resolution. Each inner iteration computes the coordinates of two points, such as A and B shown in Figure 11. By stepping theta and phi over their ranges, the whole sphere surface is traversed and all the vertices needed to draw the sphere are obtained. Buffers are then allocated for these vertices (in OpenGL terms, a vertex buffer object, VBO, and a vertex array object, VAO, are prepared).
Along with the sphere vertex coordinates, the texture coordinates are computed with the following pseudocode:
for (theta = 0; theta <= PAI; theta += thetaStep)
    for (phi = 0; phi <= 2*PAI; phi += phiStep)
        texturePoint[pointer++].x = phi / (2*PAI);
        texturePoint[pointer++].y = 1 - theta / PAI;
        texturePoint[pointer++].x = phi / (2*PAI);
        texturePoint[pointer++].y = 1 - (theta+thetaStep) / PAI;
Because the texture's (0,0) coordinate is at the bottom-left corner, using theta / PAI for the texture y coordinate would be wrong: y must decrease as theta increases, so 1 - theta / PAI is used instead, while the x coordinate can simply be phi / (2*PAI). A sketch of initSphereCoords() combining the two computations follows.
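The following is a minimal Java sketch of initSphereCoords() that merges the two pseudocode loops above. It is an illustration only: integer loop counters are used instead of floating-point accumulation, and the step count (90) and the buffer handling are assumptions, not the exact values used in the project.

// imports assumed: java.nio.ByteBuffer, java.nio.ByteOrder, java.nio.FloatBuffer
private FloatBuffer vertexBuffer, texBuffer;
private int vertexCount;

private void initSphereCoords() {
    final float r = 5f;                          // sphere radius, as in the text
    final int steps = 90;                        // assumption: angular resolution
    final double thetaStep = Math.PI / steps;
    final double phiStep = 2 * Math.PI / steps;
    int points = (steps + 1) * (steps + 1) * 2;  // two vertices per inner iteration
    float[] v = new float[points * 3];
    float[] t = new float[points * 2];
    int vi = 0, ti = 0;
    for (int i = 0; i <= steps; i++) {
        double theta = i * thetaStep;
        for (int j = 0; j <= steps; j++) {
            double phi = j * phiStep;
            // vertex A at (theta, phi)
            v[vi++] = (float) (r * Math.sin(theta) * Math.cos(phi));
            v[vi++] = (float) (r * Math.sin(theta) * Math.sin(phi));
            v[vi++] = (float) (r * Math.cos(theta));
            t[ti++] = (float) (phi / (2 * Math.PI));
            t[ti++] = (float) (1 - theta / Math.PI);
            // vertex B at (theta + thetaStep, phi)
            v[vi++] = (float) (r * Math.sin(theta + thetaStep) * Math.cos(phi));
            v[vi++] = (float) (r * Math.sin(theta + thetaStep) * Math.sin(phi));
            v[vi++] = (float) (r * Math.cos(theta + thetaStep));
            t[ti++] = (float) (phi / (2 * Math.PI));
            t[ti++] = (float) (1 - (theta + thetaStep) / Math.PI);
        }
    }
    vertexCount = vi / 3;
    // wrap the arrays in direct buffers for the GL draw calls
    vertexBuffer = ByteBuffer.allocateDirect(vi * 4)
            .order(ByteOrder.nativeOrder()).asFloatBuffer();
    vertexBuffer.put(v, 0, vi).position(0);
    texBuffer = ByteBuffer.allocateDirect(ti * 4)
            .order(ByteOrder.nativeOrder()).asFloatBuffer();
    texBuffer.put(t, 0, ti).position(0);
}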
Next come the vertex shader and the fragment shader. The programmable pipeline gives developers more freedom, but it also means they must do much of the work that the fixed pipeline used to do automatically. Concretely, the developer must supply the following parts of the pipeline. Vertex shader: it positions the geometry. Shaders are written in GLSL (the fragment shader as well), a language whose syntax is close to C; a basic shader program consists of variable declarations and a main function. The vertex shader here passes through the sphere vertices and the texture coordinates. Fragment shader: it produces the color, or texture sample, of each fragment. In this app the fragment shader samples the sphere's texture; in its main() function the line "gl_FragColor = texture2D(sTexture, v_TexCoordinate);" does the work, where sTexture is a uniform variable supplied from outside, v_TexCoordinate is the texture coordinate and texture2D is a built-in shader function; the line sets a fragment's color to the sTexture texel at position v_TexCoordinate. The two shaders are sketched below.
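The post does not give the full GLSL source, so the sketch below is a plausible version consistent with the names it mentions (an aPosition-style attribute, sTexture, v_TexCoordinate). Because the video frames arrive through a SurfaceTexture (see the next section), the fragment shader is assumed to sample an external OES texture.

private static final String VERTEX_SHADER =
        "uniform mat4 uMVPMatrix;\n" +
        "attribute vec4 aPosition;\n" +
        "attribute vec2 aTexCoordinate;\n" +
        "varying vec2 v_TexCoordinate;\n" +
        "void main() {\n" +
        "  v_TexCoordinate = aTexCoordinate;\n" +
        "  gl_Position = uMVPMatrix * aPosition;\n" +
        "}\n";

private static final String FRAGMENT_SHADER =
        "#extension GL_OES_EGL_image_external : require\n" +
        "precision mediump float;\n" +
        "uniform samplerExternalOES sTexture;\n" +   // assumption: external texture fed by SurfaceTexture
        "varying vec2 v_TexCoordinate;\n" +
        "void main() {\n" +
        "  gl_FragColor = texture2D(sTexture, v_TexCoordinate);\n" +
        "}\n";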
Besides the shaders, a program is needed: an OpenGL ES object that contains the shaders used to draw one or more shapes. After the vertex and fragment shaders are defined, they must be compiled and attached to the program before they can be used; this compile-and-attach work can be wrapped in a small utility method. The steps are: create the program object with GLES20.glCreateProgram(); load the shaders with SphereVideoRenderer.loadShader(GLES20.GL_VERTEX_SHADER, vertexShaderCode) (the fragment shader is loaded the same way with its own arguments); attach the shaders with GLES20.glAttachShader(mProgram, vertexShader) (likewise for the fragment shader); link with GLES20.glLinkProgram(mProgram), which makes mProgram an executable program object; and use it with GLES20.glUseProgram(mProgram), which adds mProgram to the OpenGL environment, as sketched below.
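A compact sketch of these steps, using only standard GLES20 calls (error checking is omitted for brevity):

public static int loadShader(int type, String shaderCode) {
    int shader = GLES20.glCreateShader(type);
    GLES20.glShaderSource(shader, shaderCode);   // supply the GLSL source
    GLES20.glCompileShader(shader);              // compile it
    return shader;
}

// in onSurfaceCreated():
int vertexShader = loadShader(GLES20.GL_VERTEX_SHADER, VERTEX_SHADER);
int fragmentShader = loadShader(GLES20.GL_FRAGMENT_SHADER, FRAGMENT_SHADER);
mProgram = GLES20.glCreateProgram();             // create the program object
GLES20.glAttachShader(mProgram, vertexShader);   // attach both shaders
GLES20.glAttachShader(mProgram, fragmentShader);
GLES20.glLinkProgram(mProgram);                  // make mProgram executable
GLES20.glUseProgram(mProgram);                   // add it to the OpenGL environment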
In onSurfaceCreated(), GLES20.glGetAttribLocation(mProgram, "attrib_name") and GLES20.glGetUniformLocation(mProgram, "uniform_name") return handles to the attribute and uniform variables in the shaders. A SurfaceTexture is created from the texture ID, and the resulting surface is passed to mExoPlayer with ExoPlayer.setSurface(surface) (the ExoPlayer instance is created in MySurfaceView, and its video source is set with ExoPlayer.setDataSource(videourl)).
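A sketch of this wiring. It assumes the renderer implements SurfaceTexture.OnFrameAvailableListener; whether setSurface exists directly on the player object depends on the ExoPlayer version (older versions deliver the surface through a renderer message instead), so that line stands in for whatever surface hand-off the project uses.

// imports assumed: android.graphics.SurfaceTexture, android.view.Surface,
// android.opengl.GLES11Ext, android.opengl.GLES20
mPositionHandle = GLES20.glGetAttribLocation(mProgram, "aPosition");
mTexCoordHandle = GLES20.glGetAttribLocation(mProgram, "aTexCoordinate");
mTextureUniformHandle = GLES20.glGetUniformLocation(mProgram, "sTexture");

int[] textures = new int[1];
GLES20.glGenTextures(1, textures, 0);
int textureId = textures[0];
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textureId);

surfaceTexture = new SurfaceTexture(textureId);
surfaceTexture.setOnFrameAvailableListener(this);  // new frames trigger a redraw
Surface surface = new Surface(surfaceTexture);
mExoPlayer.setSurface(surface);                    // the setSurface-style call described in the text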
At this point the data preparation and the OpenGL ES environment initialization are essentially done; the next step is the drawing itself in onDrawFrame().
Drawing with OpenGL ES involves many function calls and parameters, so a common practice is to put the drawing into its own method, here drawSphere(), and call it from onDrawFrame(). Inside drawSphere(), the sphere is not drawn simply by calling GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, vertexCount); to make what the user sees resemble viewing the real world, some matrix computations are needed first.
Viewing in computer graphics is based on the virtual camera model (Figures 5, 6 and 7 are taken from the book cited in the original post). Figure 5 abstracts the virtual camera: the projection lines intersect at the center of projection (COP), which corresponds to the human eye or the camera lens.
To make viewing more flexible, control of the virtual camera is usually split into two basic operations: positioning and orienting the camera, and applying a projection matrix. Positioning and orienting the camera is done by the model-view transformation, after which the vertices are in camera coordinates. The specified projection matrix is then applied to the vertices, performing the projection transformation and mapping the objects inside the view volume into the specified clipping cube. The flow of transformations is shown in Figure 6.
In practice the matrix math can usually be done with the methods provided by android.opengl.Matrix, so developers do not have to implement the complex matrix computations themselves.
Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ) positions the camera in space: the eye parameters are the camera position, the look parameters are the point being looked at, and the up parameters form the camera's up vector. This app calls Matrix.setLookAtM(mViewMatrix, 0, 0, 0, 0, 0, 0, -1, 0, 1, 0). The camera's initial direction usually points down the negative z axis so that the objects in front of the camera are visible; specifying only the camera's position does not pin the camera down, because it can still rotate, so the up vector is given to fix its orientation.
The next step is the projection matrix. It defines a view volume (Figure 7) used for clipping: only shapes inside the view volume are projected onto the projection plane, the rest is clipped away. A frustum is the usual way to define the view volume; it is determined by the left and right clipping planes, the top and bottom clipping planes, and the near and far clipping planes.
Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far) sets the projection matrix. Based on rendering tests, the last two parameters, near and far, can be fixed at 1.2 and 5.0, while left, right, bottom and top may need to be updated in onSurfaceChanged so that the GLSurfaceView still looks right when the screen changes.
A rotation matrix is also needed so that the user can change the viewing direction; it is likewise computed with android.opengl.Matrix, e.g. Matrix.setRotateM(mRotationMatrix, 0, angle, 0.0f, -1.0f, 0). To let external events such as screen touches or gyroscope changes rotate the view, the renderer can expose a method for setting the rotation angle that other classes can call.
Finally, using the shader variable handles obtained earlier, the matrix transformations are applied to the vertices and GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, vertexCount) is called to draw. drawSphere() is invoked from onDrawFrame(): before drawing, GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT) clears the screen. To give users a stereoscopic effect with a VR headset, the screen is split into a left and a right half (by setting the viewport) and the sphere is drawn once in each half: GLES20.glViewport(GLint x, GLint y, GLsizei width, GLsizei height) sets each half, followed by a call to drawSphere(). A sketch is given below.
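A sketch of onDrawFrame()/drawSphere() with the matrix setup and the split-screen viewports described above. Field names, the multiplication order and the use of updateTexImage() are assumptions consistent with the text, not the project's exact code; mAngle is the rotation value exposed to touch and gyroscope input.

private final float[] mViewMatrix = new float[16];
private final float[] mProjectionMatrix = new float[16];  // set via Matrix.frustumM(..., 1.2f, 5.0f) in onSurfaceChanged
private final float[] mRotationMatrix = new float[16];
private final float[] mMVPMatrix = new float[16];
private final float[] mTemp = new float[16];
private volatile float mAngle;                             // updated from touch / gyroscope events

@Override
public void onDrawFrame(GL10 gl) {
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
    surfaceTexture.updateTexImage();                       // pull the latest video frame

    // camera at the origin, looking down -z, "up" along +y
    Matrix.setLookAtM(mViewMatrix, 0, 0, 0, 0, 0, 0, -1, 0, 1, 0);
    // rotation driven by touch / gyroscope
    Matrix.setRotateM(mRotationMatrix, 0, mAngle, 0.0f, -1.0f, 0f);

    // split screen: left and right half, one draw each
    GLES20.glViewport(0, 0, screenWidth / 2, screenHeight);
    drawSphere();
    GLES20.glViewport(screenWidth / 2, 0, screenWidth / 2, screenHeight);
    drawSphere();
}

private void drawSphere() {
    // MVP = projection * view * rotation
    Matrix.multiplyMM(mTemp, 0, mViewMatrix, 0, mRotationMatrix, 0);
    Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mTemp, 0);
    GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);

    GLES20.glEnableVertexAttribArray(mPositionHandle);
    GLES20.glVertexAttribPointer(mPositionHandle, 3, GLES20.GL_FLOAT, false, 0, vertexBuffer);
    GLES20.glEnableVertexAttribArray(mTexCoordHandle);
    GLES20.glVertexAttribPointer(mTexCoordHandle, 2, GLES20.GL_FLOAT, false, 0, texBuffer);

    GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, vertexCount);
}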
As mentioned in the event-handling part of the GLSurfaceView introduction, queueEvent(Runnable) simplifies communication between the event-handling thread and the render thread [10]. For the app to run correctly, the pause and resume of the GLSurfaceView window must both be handled: onResume() is called when the window resumes, and onPause() when it pauses; in the latter callback the superclass onPause() is called and ExoPlayer.release() frees the related resources.
Playback control
To play and control the video smoothly and keep the application robust, one needs a good understanding of ExoPlayer's threading model, so that each method is called at the right time and the resources are released promptly once playback finishes. The following covers play/pause, seeking via the progress bar, and changing the viewing direction with the gyroscope.
Play/pause and seeking with the progress bar
Android provides a MediaController component for playback control, but it does not quite fit this system, so a new controller class, MyMediaController, is defined. It extends FrameLayout and is used as a custom component; it mainly handles play/pause and seeking via the progress bar.
Play/pause is straightforward: a play/pause button is added to the controller layout, and a click listener in the playback Activity calls ExoPlayer.setPlayWhenReady(true) or ExoPlayer.setPlayWhenReady(false).
Seeking is implemented by listening for SeekBar drag events and calling ExoPlayer.seekTo(). The OnSeekBarChangeListener's onProgressChanged callback reports changes to the SeekBar progress; when onStopTrackingTouch(SeekBar bar) is called, the drag has ended and the video should jump to the chosen position. ExoPlayer.seekTo() takes a position in milliseconds, so Progress.setMax(1000) sets the bar's maximum to 1000 and the target position is (Duration * bar.getProgress()) / 1000, as sketched below.
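A short sketch of the listener, assuming exoPlayer is the project's player instance and that its getDuration()/seekTo() work in milliseconds as described in the text:

seekBar.setMax(1000);
seekBar.setOnSeekBarChangeListener(new SeekBar.OnSeekBarChangeListener() {
    @Override public void onProgressChanged(SeekBar bar, int progress, boolean fromUser) { }
    @Override public void onStartTrackingTouch(SeekBar bar) { }
    @Override public void onStopTrackingTouch(SeekBar bar) {
        long duration = exoPlayer.getDuration();            // milliseconds
        exoPlayer.seekTo(duration * bar.getProgress() / 1000);
    }
});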
Showing and hiding MyMediaController. Calling addView in playerLayout (the playback Activity's layout) adds MyMediaController's view on top of the video view. Auto-hiding (the controls disappear after a few seconds without interaction) and showing the controller on touch are implemented with a Handler and the view's setVisibility() method.
An inner Handler class handles the asynchronous messages for MyMediaController. When MyMediaController is shown, Handler.sendMessageDelayed posts a delayed message (the delay is how long the controller stays visible). handleMessage is overridden to process it: when the message arrives after the delay, if the progress bar is not being dragged and MyMediaController is still visible, a hide message is sent and the controller calls setVisibility(View.GONE); showing MyMediaController on touch is done by listening for screen touch events and calling setVisibility(View.VISIBLE). A sketch of this pattern follows.
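A minimal sketch of the auto-hide logic; controllerView, isDragging and the 3-second timeout are illustrative names and values, not the project's exact ones.

private static final int MSG_HIDE = 1;
private static final long HIDE_DELAY_MS = 3000;     // assumption: hide after 3 s of inactivity

private final Handler handler = new Handler(Looper.getMainLooper()) {
    @Override public void handleMessage(Message msg) {
        if (msg.what == MSG_HIDE && !isDragging) {
            controllerView.setVisibility(View.GONE);
        }
    }
};

private void showController() {
    controllerView.setVisibility(View.VISIBLE);
    handler.removeMessages(MSG_HIDE);
    handler.sendMessageDelayed(handler.obtainMessage(MSG_HIDE), HIDE_DELAY_MS);
}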
Changing the view with the gyroscope
This part reads the gyroscope data and uses it to adjust the viewing direction. As mentioned in the drawing section, SphereVideoRender exposes a method for setting the rotation angle that other classes can call, so all that is needed is to read the gyroscope, convert the readings into a rotation angle and pass it to that method.
PlayerActivity registers a SensorEventListener. Since only the gyroscope is needed, events are filtered with a simple check:
if (event.sensor.getType() == Sensor.TYPE_GYROSCOPE)
For the gyroscope, the listener returns the angular velocity (in radians per second) around the x, y and z axes in values[0], values[1] and values[2]. Taking these angular velocities and the time elapsed between two events, the angle change around each axis is angular velocity multiplied by time; Math.toDegrees() then converts the result from radians to degrees. A sketch follows.
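A sketch of the integration step. Which axis drives the view, and renderer.addAngle(), are assumptions standing in for the angle-setting method described above; SensorEvent timestamps are in nanoseconds.

private long lastTimestampNs = 0;

@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() != Sensor.TYPE_GYROSCOPE) return;
    if (lastTimestampNs != 0) {
        float dt = (event.timestamp - lastTimestampNs) * 1.0e-9f;   // seconds between events
        float dAngleRad = event.values[1] * dt;                     // assumption: y axis drives the yaw
        renderer.addAngle((float) Math.toDegrees(dAngleRad));       // hypothetical renderer method
    }
    lastTimestampNs = event.timestamp;
}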
Rendering YUV video with OpenGL through JNI
For a project I needed to decode and display a video stream on Android. After weighing the options I decided to decode with ffmpeg and render the video with OpenGL.
Once the technology choices were settled I started on a demo, and quickly found that the material online was unreliable: the JNI-based OpenGL examples do not render YUV directly but convert YUV to RGB before displaying it, and the ones that do render YUV are implemented in the Java layer. I don't come from a Java background and wasn't keen on that, so I decided to implement it myself through JNI. I had previously built a product on WebRTC using its C++ interfaces (today's WebRTC targets browsers and is more mature, with simpler interfaces; I still think digging the C++ code out and writing your own interface layer is more interesting, which is how that project was done). Without further ado, here are the implementation steps.
Note: OpenGL (ES 2.0 as used here) is only supported starting from Android 2.3.3.
When writing the JNI code you need to add the OpenGL library link in Android.mk. Here is my Android.mk for reference:
LOCAL_PATH := $(call my-dir)
MY_LIBS_PATH := /Users/chenjianjun/Documents/work/ffmpeg-android/build/lib
MY_INCLUDE_PATH := /Users/chenjianjun/Documents/work/ffmpeg-android/build/include
include $(CLEAR_VARS)
LOCAL_MODULE := libavcodec
LOCAL_SRC_FILES :=
$(MY_LIBS_PATH)/libavcodec.a
include $(PREBUILT_STATIC_LIBRARY)
include $(CLEAR_VARS)
LOCAL_MODULE := libavfilter
LOCAL_SRC_FILES :=
$(MY_LIBS_PATH)/libavfilter.a
include $(PREBUILT_STATIC_LIBRARY)
include $(CLEAR_VARS)
LOCAL_MODULE := libavformat
LOCAL_SRC_FILES :=
$(MY_LIBS_PATH)/libavformat.a
include $(PREBUILT_STATIC_LIBRARY)
include $(CLEAR_VARS)
LOCAL_MODULE := libavresample
LOCAL_SRC_FILES :=
$(MY_LIBS_PATH)/libavresample.a
include $(PREBUILT_STATIC_LIBRARY)
include $(CLEAR_VARS)
LOCAL_MODULE := libavutil
LOCAL_SRC_FILES :=
$(MY_LIBS_PATH)/libavutil.a
include $(PREBUILT_STATIC_LIBRARY)
include $(CLEAR_VARS)
LOCAL_MODULE := libpostproc
LOCAL_SRC_FILES :=
$(MY_LIBS_PATH)/libpostproc.a
include $(PREBUILT_STATIC_LIBRARY)
include $(CLEAR_VARS)
LOCAL_MODULE := libswresample
LOCAL_SRC_FILES :=
$(MY_LIBS_PATH)/libswresample.a
include $(PREBUILT_STATIC_LIBRARY)
include $(CLEAR_VARS)
LOCAL_MODULE := libswscale
LOCAL_SRC_FILES :=
$(MY_LIBS_PATH)/libswscale.a
include $(PREBUILT_STATIC_LIBRARY)
include $(CLEAR_VARS)
LOCAL_MODULE_TAGS := MICloudPub
LOCAL_MODULE := libMICloudPub
LOCAL_SRC_FILES := H264Decoder.cpp \
# my ffmpeg-based H264 decoding interface
render_opengles20.cpp \
# the OpenGL rendering code
# (test interface source)
LOCAL_CFLAGS :=
LOCAL_C_INCLUDES := $(MY_INCLUDE_PATH)
LOCAL_CPP_INCLUDES := $(MY_INCLUDE_PATH)
LOCAL_LDLIBS := \
-lGLESv2 \
LOCAL_WHOLE_STATIC_LIBRARIES := \
libavcodec \
libavfilter \
libavformat \
libavresample \
libavutil \
libpostproc \
libswresample \
libswscale
include $(BUILD_SHARED_LIBRARY)
The -lGLESv2 line above (highlighted in red in the original post) links the OpenGL ES library. I built this on a Mac; I don't know whether other systems use a different name for it (though something this basic presumably doesn't change).
Step 1: write the Java code (mainly so that the JNI code can call back into the Java implementation; the point of this will become clear later).
I took the code from WebRTC and adapted it slightly rather than writing it from scratch (no need to reinvent the wheel).
ViEAndroidGLES20.java
package hzcw.opengl;
import java.util.concurrent.locks.ReentrantLock;
import javax.microedition.khronos.egl.EGL10;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.egl.EGLContext;
import javax.microedition.khronos.egl.EGLDisplay;
import javax.microedition.khronos.opengles.GL10;
import android.app.ActivityManager;
import android.content.Context;
import android.content.pm.ConfigurationInfo;
import android.graphics.PixelFormat;
import android.opengl.GLSurfaceView;
import android.util.Log;
public class ViEAndroidGLES20 extends GLSurfaceView implements GLSurfaceView.Renderer {
    private static String TAG = "MICloudPub";
    private static final boolean DEBUG = false;
    // True if onSurfaceCreated has been called.
    private boolean surfaceCreated = false;
    private boolean openGLCreated = false;
    // True if NativeFunctionsRegistered has been called.
    private boolean nativeFunctionsRegisted = false;
    private ReentrantLock nativeFunctionLock = new ReentrantLock();
    // Address of Native object that will do the drawing.
    private long nativeObject = 0;
    private int viewWidth = 0;
    private int viewHeight = 0;
public static boolean UseOpenGL2(Object renderWindow) {
return ViEAndroidGLES20.class.isInstance(renderWindow);
public ViEAndroidGLES20(Context context) {
super(context);
init(false, 0, 0);
public ViEAndroidGLES20(Context context, boolean translucent,
int depth, int stencil) {
super(context);
init(translucent, depth, stencil);
private void init(boolean translucent, int depth, int stencil) {
// By default, GLSurfaceView() creates a RGB_565 opaque surface.
// If we want a translucent one, we should change the surface's
// format here, using PixelFormat.TRANSLUCENT for GL Surfaces
// is interpreted as any 32-bit surface with alpha by SurfaceFlinger.
if (translucent) {
this.getHolder().setFormat(PixelFormat.TRANSLUCENT);
// Setup the context factory for 2.0 rendering.
// See ContextFactory class definition below
setEGLContextFactory(new ContextFactory());
// We need to choose an EGLConfig that matches the format of
// our surface exactly. This is going to be done in our
// custom config chooser. See ConfigChooser class definition
setEGLConfigChooser( translucent ?
new ConfigChooser(8, 8, 8, 8, depth, stencil) :
new ConfigChooser(5, 6, 5, 0, depth, stencil) );
// Set the renderer responsible for frame rendering
this.setRenderer(this);
this.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
private static class ContextFactory implements GLSurfaceView.EGLContextFactory {
private static int EGL_CONTEXT_CLIENT_VERSION = 0x3098;
public EGLContext createContext(EGL10 egl, EGLDisplay display, EGLConfig eglConfig) {
Log.w(TAG, "creating OpenGL ES 2.0 context");
checkEglError("Before eglCreateContext", egl);
int[] attrib_list = {EGL_CONTEXT_CLIENT_VERSION, 2, EGL10.EGL_NONE };
EGLContext context = egl.eglCreateContext(display, eglConfig,
EGL10.EGL_NO_CONTEXT, attrib_list);
checkEglError("After eglCreateContext", egl);
public void destroyContext(EGL10 egl, EGLDisplay display, EGLContext context) {
egl.eglDestroyContext(display, context);
private static void checkEglError(String prompt, EGL10 egl) {
while ((error = egl.eglGetError()) != EGL10.EGL_SUCCESS) {
Log.e(TAG, String.format("%s: EGL error: 0x%x", prompt, error));
private static class ConfigChooser implements GLSurfaceView.EGLConfigChooser {
        public ConfigChooser(int r, int g, int b, int a, int depth, int stencil) {
            mRedSize = r;
            mGreenSize = g;
            mBlueSize = b;
            mAlphaSize = a;
            mDepthSize = depth;
            mStencilSize = stencil;
        }
// This EGL config specification is used to specify 2.0 rendering.
// We use a minimum size of 4 bits for red/green/blue, but will
// perform actual matching in chooseConfig() below.
private static int EGL_OPENGL_ES2_BIT = 4;
private static int[] s_configAttribs2 =
EGL10.EGL_RED_SIZE, 4,
EGL10.EGL_GREEN_SIZE, 4,
EGL10.EGL_BLUE_SIZE, 4,
EGL10.EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
EGL10.EGL_NONE
public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display) {
// Get the number of minimally matching EGL configurations
int[] num_config = new int[1];
egl.eglChooseConfig(display, s_configAttribs2, null, 0, num_config);
int numConfigs = num_config[0];
if (numConfigs <= 0) {
throw new IllegalArgumentException("No configs match configSpec");
// Allocate then read the array of minimally matching EGL configs
EGLConfig[] configs = new EGLConfig[numConfigs];
egl.eglChooseConfig(display, s_configAttribs2, configs, numConfigs, num_config);
if (DEBUG) {
printConfigs(egl, display, configs);
// Now return the "best" one
return chooseConfig(egl, display, configs);
public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display,
EGLConfig[] configs) {
for(EGLConfig config : configs) {
int d = findConfigAttrib(egl, display, config,
EGL10.EGL_DEPTH_SIZE, 0);
int s = findConfigAttrib(egl, display, config,
EGL10.EGL_STENCIL_SIZE, 0);
// We need at least mDepthSize and mStencilSize bits
if (d < mDepthSize || s < mStencilSize)
    continue;
// We want an *exact* match for red/green/blue/alpha
int r = findConfigAttrib(egl, display, config,
EGL10.EGL_RED_SIZE, 0);
int g = findConfigAttrib(egl, display, config,
EGL10.EGL_GREEN_SIZE, 0);
int b = findConfigAttrib(egl, display, config,
EGL10.EGL_BLUE_SIZE, 0);
int a = findConfigAttrib(egl, display, config,
EGL10.EGL_ALPHA_SIZE, 0);
if (r == mRedSize && g == mGreenSize && b == mBlueSize && a == mAlphaSize)
private int findConfigAttrib(EGL10 egl, EGLDisplay display,
EGLConfig config, int attribute, int defaultValue) {
if (egl.eglGetConfigAttrib(display, config, attribute, mValue)) {
return mValue[0];
return defaultValue;
private void printConfigs(EGL10 egl, EGLDisplay display,
EGLConfig[] configs) {
int numConfigs = configs.length;
Log.w(TAG, String.format("%d configurations", numConfigs));
for (int i = 0; i < numConfigs; i++) {
Log.w(TAG, String.format("Configuration %d:\n", i));
printConfig(egl, display, configs[i]);
private void printConfig(EGL10 egl, EGLDisplay display,
EGLConfig config) {
int[] attributes = {
EGL10.EGL_BUFFER_SIZE,
EGL10.EGL_ALPHA_SIZE,
EGL10.EGL_BLUE_SIZE,
EGL10.EGL_GREEN_SIZE,
EGL10.EGL_RED_SIZE,
EGL10.EGL_DEPTH_SIZE,
EGL10.EGL_STENCIL_SIZE,
EGL10.EGL_CONFIG_CAVEAT,
EGL10.EGL_CONFIG_ID,
EGL10.EGL_LEVEL,
EGL10.EGL_MAX_PBUFFER_HEIGHT,
EGL10.EGL_MAX_PBUFFER_PIXELS,
EGL10.EGL_MAX_PBUFFER_WIDTH,
EGL10.EGL_NATIVE_RENDERABLE,
EGL10.EGL_NATIVE_VISUAL_ID,
EGL10.EGL_NATIVE_VISUAL_TYPE,
0x3030, // EGL10.EGL_PRESERVED_RESOURCES,
EGL10.EGL_SAMPLES,
EGL10.EGL_SAMPLE_BUFFERS,
EGL10.EGL_SURFACE_TYPE,
EGL10.EGL_TRANSPARENT_TYPE,
EGL10.EGL_TRANSPARENT_RED_VALUE,
EGL10.EGL_TRANSPARENT_GREEN_VALUE,
EGL10.EGL_TRANSPARENT_BLUE_VALUE,
0x3039, // EGL10.EGL_BIND_TO_TEXTURE_RGB,
0x303A, // EGL10.EGL_BIND_TO_TEXTURE_RGBA,
0x303B, // EGL10.EGL_MIN_SWAP_INTERVAL,
0x303C, // EGL10.EGL_MAX_SWAP_INTERVAL,
EGL10.EGL_LUMINANCE_SIZE,
EGL10.EGL_ALPHA_MASK_SIZE,
EGL10.EGL_COLOR_BUFFER_TYPE,
EGL10.EGL_RENDERABLE_TYPE,
0x3042 // EGL10.EGL_CONFORMANT
String[] names = {
"EGL_BUFFER_SIZE",
"EGL_ALPHA_SIZE",
"EGL_BLUE_SIZE",
"EGL_GREEN_SIZE",
"EGL_RED_SIZE",
"EGL_DEPTH_SIZE",
"EGL_STENCIL_SIZE",
"EGL_CONFIG_CAVEAT",
"EGL_CONFIG_ID",
"EGL_LEVEL",
"EGL_MAX_PBUFFER_HEIGHT",
"EGL_MAX_PBUFFER_PIXELS",
"EGL_MAX_PBUFFER_WIDTH",
"EGL_NATIVE_RENDERABLE",
"EGL_NATIVE_VISUAL_ID",
"EGL_NATIVE_VISUAL_TYPE",
"EGL_PRESERVED_RESOURCES",
"EGL_SAMPLES",
"EGL_SAMPLE_BUFFERS",
"EGL_SURFACE_TYPE",
"EGL_TRANSPARENT_TYPE",
"EGL_TRANSPARENT_RED_VALUE",
"EGL_TRANSPARENT_GREEN_VALUE",
"EGL_TRANSPARENT_BLUE_VALUE",
"EGL_BIND_TO_TEXTURE_RGB",
"EGL_BIND_TO_TEXTURE_RGBA",
"EGL_MIN_SWAP_INTERVAL",
"EGL_MAX_SWAP_INTERVAL",
"EGL_LUMINANCE_SIZE",
"EGL_ALPHA_MASK_SIZE",
"EGL_COLOR_BUFFER_TYPE",
"EGL_RENDERABLE_TYPE",
"EGL_CONFORMANT"
int[] value = new int[1];
for (int i = 0; i < attributes.length; i++) {
int attribute = attributes[i];
String name = names[i];
if (egl.eglGetConfigAttrib(display, config, attribute, value)) {
Log.w(TAG, String.format("
%s: %d\n", name, value[0]));
// Log.w(TAG, String.format("
%s: failed\n", name));
while (egl.eglGetError() != EGL10.EGL_SUCCESS);
// Subclasses can adjust these values:
        protected int mRedSize;
        protected int mGreenSize;
        protected int mBlueSize;
        protected int mAlphaSize;
        protected int mDepthSize;
        protected int mStencilSize;
private int[] mValue = new int[1];
// IsSupported
// Return true if this device support Open GL ES 2.0 rendering.
public static boolean IsSupported(Context context) {
ActivityManager am =
(ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
ConfigurationInfo info = am.getDeviceConfigurationInfo();
        if (info.reqGlEsVersion >= 0x20000) {
            // Open GL ES 2.0 is supported.
            return true;
        }
        return false;
    }
public void onDrawFrame(GL10 gl) {
nativeFunctionLock.lock();
        if (!nativeFunctionsRegisted || !surfaceCreated) {
            nativeFunctionLock.unlock();
            return;
        }
        if (!openGLCreated) {
            if (0 != CreateOpenGLNative(nativeObject, viewWidth, viewHeight)) {
                return; // Failed to create OpenGL
            }
            openGLCreated = true; // Created OpenGL successfully
        }
        DrawNative(nativeObject); // Draw the new frame
        nativeFunctionLock.unlock();
    }

    public void onSurfaceChanged(GL10 gl, int width, int height) {
        surfaceCreated = true;
        viewWidth = width;
        viewHeight = height;
nativeFunctionLock.lock();
if(nativeFunctionsRegisted) {
if(CreateOpenGLNative(nativeObject,width,height) == 0)
openGLCreated = true;
nativeFunctionLock.unlock();
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
public void RegisterNativeObject(long nativeObject) {
nativeFunctionLock.lock();
        this.nativeObject = nativeObject;
        nativeFunctionsRegisted = true;
nativeFunctionLock.unlock();
public void DeRegisterNativeObject() {
nativeFunctionLock.lock();
        nativeFunctionsRegisted = false;
        openGLCreated = false;
this.nativeObject = 0;
nativeFunctionLock.unlock();
public void ReDraw() {// jni层解码以后的数据回调,然后由系统调用onDrawFrame显示
if(surfaceCreated) {
// Request the renderer to redraw using the render thread context.
this.requestRender();
private native int CreateOpenGLNative(long nativeObject, int width, int height);
private native void DrawNative(long nativeObject);
ViERenderer.java
package hzcw.opengl;
import android.content.Context;
import android.view.SurfaceView;
public class ViERenderer
public static SurfaceView CreateRenderer(Context context) {
return CreateRenderer(context, false);
public static SurfaceView CreateRenderer(Context context,
boolean useOpenGLES2) {
if(useOpenGLES2 == true && ViEAndroidGLES20.IsSupported(context))
return new ViEAndroidGLES20(context);
GL2JNILib.java (native接口代码)
package com.example.filltriangle;
public class GL2JNILib {
    static {
        System.loadLibrary("MICloudPub");
    }
    public static native void init(Object glSurface);
    public static native void step(String filepath);
}
Step 2: write the JNI code
com_example_filltriangle_GL2JNILib.h (generated automatically by javah)
/* DO NOT EDIT THIS FILE - it is machine generated */
#include <jni.h>
/* Header for class com_example_filltriangle_GL2JNILib */
#ifndef _Included_com_example_filltriangle_GL2JNILib
#define _Included_com_example_filltriangle_GL2JNILib
#ifdef __cplusplus
extern "C" {
com_example_filltriangle_GL2JNILib
* Signature: (II)V
JNIEXPORT void JNICALL Java_com_example_filltriangle_GL2JNILib_init
(JNIEnv *, jclass, jobject);
com_example_filltriangle_GL2JNILib
* Signature: ()V
JNIEXPORT void JNICALL Java_com_example_filltriangle_GL2JNILib_step
(JNIEnv *, jclass, jstring);
#ifdef __cplusplus
#include <jni.h>
#include <stdlib.h>
#include <stdio.h>
#include "render_opengles20.h"
#include "com_example_filltriangle_GL2JNILib.h"
#include "H264Decoder.h"
class AndroidNativeOpenGl2Channel
    AndroidNativeOpenGl2Channel(JavaVM* jvm, void* window)
    {
        _jvm = jvm;
        _ptrWindow = window;
        _buffer = (uint8_t*)malloc(1024000);
    }
~AndroidNativeOpenGl2Channel()
bool isAttached =
JNIEnv* env = NULL;
if (_jvm-&GetEnv((void**) &env, JNI_VERSION_1_4) != JNI_OK) {
// try to attach the thread and get the env
// Attach this thread to JVM
jint res = _jvm-&AttachCurrentThread(&env, NULL);
// Get the JNI env for this thread
if ((res & 0) || !env) {
WEBRTC_TRACE(kTraceError, kTraceVideoRenderer, _id,
"%s: Could not attach thread to JVM (%d, %p)",
__FUNCTION__, res, env);
env = NULL;
isAttached =
if (env && _deRegisterNativeCID) {
env-&CallVoidMethod(_javaRenderObj, _deRegisterNativeCID);
env-&DeleteGlobalRef(_javaRenderObj);
env-&DeleteGlobalRef(_javaRenderClass);
if (isAttached) {
if (_jvm-&DetachCurrentThread() & 0) {
WEBRTC_TRACE(kTraceWarning, kTraceVideoRenderer, _id,
"%s: Could not detach thread from JVM",
__FUNCTION__);
free(_buffer);
int32_t Init()
if (!_ptrWindow)
WEBRTC_TRACE(kTraceWarning, kTraceVideoRenderer, _id,
"(%s): No window have been provided.", __FUNCTION__);
return -1;
if (!_jvm)
WEBRTC_TRACE(kTraceWarning, kTraceVideoRenderer, _id,
"(%s): No JavaVM have been provided.", __FUNCTION__);
return -1;
// get the JNI env for this thread
bool isAttached =
JNIEnv* env = NULL;
if (_jvm-&GetEnv((void**) &env, JNI_VERSION_1_4) != JNI_OK) {
// try to attach the thread and get the env
// Attach this thread to JVM
jint res = _jvm-&AttachCurrentThread(&env, NULL);
// Get the JNI env for this thread
if ((res & 0) || !env) {
WEBRTC_TRACE(kTraceError, kTraceVideoRenderer, _id,
"%s: Could not attach thread to JVM (%d, %p)",
__FUNCTION__, res, env);
return -1;
isAttached =
// get the ViEAndroidGLES20 class
        jclass javaRenderClassLocal = reinterpret_cast<jclass> (env->FindClass("hzcw/opengl/ViEAndroidGLES20"));
if (!javaRenderClassLocal) {
WEBRTC_TRACE(kTraceError, kTraceVideoRenderer, _id,
"%s: could not find ViEAndroidGLES20", __FUNCTION__);
return -1;
        _javaRenderClass = reinterpret_cast<jclass> (env->NewGlobalRef(javaRenderClassLocal));
if (!_javaRenderClass) {
WEBRTC_TRACE(kTraceError, kTraceVideoRenderer, _id,
"%s: could not create Java SurfaceHolder class reference",
__FUNCTION__);
return -1;
// Delete local class ref, we only use the global ref
        // Delete local class ref, we only use the global ref
        env->DeleteLocalRef(javaRenderClassLocal);
        jmethodID cidUseOpenGL = env->GetStaticMethodID(_javaRenderClass,
                                                        "UseOpenGL2",
                                                        "(Ljava/lang/Object;)Z");
if (cidUseOpenGL == NULL) {
WEBRTC_TRACE(kTraceError, kTraceVideoRenderer, -1,
"%s: could not get UseOpenGL ID", __FUNCTION__);
jboolean res = env-&CallStaticBooleanMethod(_javaRenderClass,
cidUseOpenGL, (jobject) _ptrWindow);
// create a reference to the object (to tell JNI that we are referencing it
// after this function has returned)
        _javaRenderObj = reinterpret_cast<jobject> (env->NewGlobalRef((jobject)_ptrWindow));
if (!_javaRenderObj)
WEBRTC_TRACE(
kTraceError,
kTraceVideoRenderer,
"%s: could not create Java SurfaceRender object reference",
__FUNCTION__);
return -1;
// get the method ID for the ReDraw function
        _redrawCid = env->GetMethodID(_javaRenderClass, "ReDraw", "()V");
if (_redrawCid == NULL) {
WEBRTC_TRACE(kTraceError, kTraceVideoRenderer, _id,
"%s: could not get ReDraw ID", __FUNCTION__);
return -1;
_registerNativeCID = env-&GetMethodID(_javaRenderClass,
"RegisterNativeObject", "(J)V");
if (_registerNativeCID == NULL) {
WEBRTC_TRACE(kTraceError, kTraceVideoRenderer, _id,
"%s: could not get RegisterNativeObject ID", __FUNCTION__);
return -1;
_deRegisterNativeCID = env-&GetMethodID(_javaRenderClass,
"DeRegisterNativeObject", "()V");
if (_deRegisterNativeCID == NULL) {
WEBRTC_TRACE(kTraceError, kTraceVideoRenderer, _id,
"%s: could not get DeRegisterNativeObject ID",
__FUNCTION__);
return -1;
JNINativeMethod nativeFunctions[2] = {
{ "DrawNative",
(void*) &AndroidNativeOpenGl2Channel::DrawNativeStatic, },
{ "CreateOpenGLNative",
(void*) &AndroidNativeOpenGl2Channel::CreateOpenGLNativeStatic },
if (env-&RegisterNatives(_javaRenderClass, nativeFunctions, 2) == 0) {
WEBRTC_TRACE(kTraceDebug, kTraceVideoRenderer, -1,
"%s: Registered native functions", __FUNCTION__);
WEBRTC_TRACE(kTraceError, kTraceVideoRenderer, -1,
"%s: Failed to register native functions", __FUNCTION__);
return -1;
env-&CallVoidMethod(_javaRenderObj, _registerNativeCID, (jlong) this);
if (isAttached) {
if (_jvm-&DetachCurrentThread() & 0) {
WEBRTC_TRACE(kTraceWarning, kTraceVideoRenderer, _id,
"%s: Could not detach thread from JVM", __FUNCTION__);
WEBRTC_TRACE(kTraceDebug, kTraceVideoRenderer, _id, "%s done",
__FUNCTION__);
if (_openGLRenderer.SetCoordinates(zOrder, left, top, right, bottom) != 0) {
return -1;
void DeliverFrame(int32_t widht, int32_t height)
bool isAttached =
JNIEnv* env = NULL;
if (_jvm-&GetEnv((void**) &env, JNI_VERSION_1_4) != JNI_OK) {
// try to attach the thread and get the env
// Attach this thread to JVM
jint res = _jvm-&AttachCurrentThread(&env, NULL);
// Get the JNI env for this thread
if ((res & 0) || !env) {
WEBRTC_TRACE(kTraceError, kTraceVideoRenderer, _id,
"%s: Could not attach thread to JVM (%d, %p)",
__FUNCTION__, res, env);
env = NULL;
isAttached =
if (env && _redrawCid)
env-&CallVoidMethod(_javaRenderObj, _redrawCid);
if (isAttached) {
if (_jvm-&DetachCurrentThread() & 0) {
WEBRTC_TRACE(kTraceWarning, kTraceVideoRenderer, _id,
"%s: Could not detach thread from JVM",
__FUNCTION__);
    void GetDataBuf(uint8_t*& pbuf, int32_t& isize)
    {
        pbuf = _buffer;
        isize = 1024000;
    }
static jint CreateOpenGLNativeStatic(JNIEnv * env,
jlong context,
jint width,
jint height)
        AndroidNativeOpenGl2Channel* renderChannel =
            reinterpret_cast<AndroidNativeOpenGl2Channel*>(context);
        WEBRTC_TRACE(kTraceInfo, kTraceVideoRenderer, -1, "%s:", __FUNCTION__);
        return renderChannel->CreateOpenGLNative(width, height);
static void DrawNativeStatic(JNIEnv * env,jobject, jlong context)
        AndroidNativeOpenGl2Channel* renderChannel =
            reinterpret_cast<AndroidNativeOpenGl2Channel*>(context);
        renderChannel->DrawNative();
jint CreateOpenGLNative(int width, int height)
return _openGLRenderer.Setup(width, height);
void DrawNative()
_openGLRenderer.Render(_buffer, _widht, _height);
    void* _ptrWindow;
    JavaVM* _jvm;
    jobject _javaRenderObj;
    jclass _javaRenderClass;
    JNIEnv* _javaRenderJniEnv;
    jmethodID _redrawCid;
    jmethodID _registerNativeCID;
    jmethodID _deRegisterNativeCID;
    RenderOpenGles20 _openGLRenderer;
    uint8_t* _buffer;
static JavaVM* g_jvm = NULL;
static AndroidNativeOpenGl2Channel* p_opengl_channel = NULL;
extern "C"
JNIEXPORT jint JNI_OnLoad(JavaVM* vm, void *reserved)
JNIEnv* env = NULL;
jint result = -1;
if (vm->GetEnv((void**) &env, JNI_VERSION_1_4) != JNI_OK)
return -1;
return JNI_VERSION_1_4;
extern "C"
int mTrans = 0x0F0F0F0F;
int MergeBuffer(uint8_t *NalBuf, int NalBufUsed, uint8_t *SockBuf, int SockBufUsed, int SockRemain)
{
    // split the data that was read into NAL units
    int i = 0;
    uint8_t Temp;
    for (i = 0; i < SockRemain; i++) {
        Temp = SockBuf[i + SockBufUsed];
        NalBuf[i + NalBufUsed] = Temp;
        mTrans <<= 8;
        mTrans |= Temp;
        if (mTrans == 1) { // found a start code
            i++;
            break;
        }
    }
    return i;
}
JNIEXPORT void JNICALL Java_com_example_filltriangle_GL2JNILib_init
(JNIEnv *env, jclass oclass, jobject glSurface)
if (p_opengl_channel)
WEBRTC_TRACE(kTraceInfo, kTraceVideoRenderer, -1, "初期化失败[%d].", __LINE__);
p_opengl_channel = new AndroidNativeOpenGl2Channel(g_jvm, glSurface);
if (p_opengl_channel->Init() != 0)
WEBRTC_TRACE(kTraceInfo, kTraceVideoRenderer, -1, "初期化失败[%d].", __LINE__);
JNIEXPORT void JNICALL Java_com_example_filltriangle_GL2JNILib_step(JNIEnv* env, jclass tis, jstring filepath)
const char *filename = env->GetStringUTFChars(filepath, NULL);
WEBRTC_TRACE(kTraceInfo, kTraceVideoRenderer, -1, "step[%d].", __LINE__);
FILE *_imgFileHandle =
fopen(filename, "rb");
if (_imgFileHandle == NULL)
WEBRTC_TRACE(kTraceInfo, kTraceVideoRenderer, -1, "File No Exist[%s][%d].", filename, __LINE__);
H264Decoder* pMyH264 = new H264Decoder();
X264_DECODER_H handle = pMyH264->X264Decoder_Init();
if (handle <= 0)
WEBRTC_TRACE(kTraceInfo, kTraceVideoRenderer, -1, "X264Decoder_Init Error[%d].", __LINE__);
int iTemp = 0;
int bytesRead = 0;
int NalBufUsed = 0;
int SockBufUsed = 0;
bool bFirst = true;
bool bFindPPS = true;
uint8_t *SockBuf = (uint8_t *)malloc(204800);
uint8_t *NalBuf = (uint8_t *)malloc(4098000);
int nWidth, nH
memset(SockBuf, 0, 204800);
uint8_t *buffOut = NULL;
int outSize = 0;
p_opengl_channel->GetDataBuf(buffOut, outSize);
uint8_t *IIBuf = (uint8_t *)malloc(204800);
int IILen = 0;
bytesRead = fread(SockBuf, 1, 204800, _imgFileHandle);
WEBRTC_TRACE(kTraceInfo, kTraceVideoRenderer, -1, "bytesRead
= %d", bytesRead);
if (bytesRead &= 0) {
SockBufUsed = 0;
while (bytesRead - SockBufUsed > 0) {
nalLen = MergeBuffer(NalBuf, NalBufUsed, SockBuf, SockBufUsed,
bytesRead - SockBufUsed);
NalBufUsed += nalL
SockBufUsed += nalL
while (mTrans == 1) {
mTrans = 0xFFFFFFFF;
if (bFirst == true) // the first start flag
else // a complete NAL data, include 0x trail.
if (bFindPPS == true) // true
if ((NalBuf[4] & 0x1F) == 7 || (NalBuf[4] & 0x1F) == 8)
bFindPPS =
NalBuf[0] = 0;
NalBuf[1] = 0;
NalBuf[2] = 0;
NalBuf[3] = 1;
NalBufUsed = 4;
if (NalBufUsed == 16 || NalBufUsed == 10 || NalBufUsed == 54 || NalBufUsed == 12 || NalBufUsed == 20) {
memcpy(IIBuf + IILen, NalBuf, NalBufUsed);
IILen += NalBufU
memcpy(IIBuf + IILen, NalBuf, NalBufUsed);
IILen += NalBufU
// decode nal
iTemp = pMyH264->X264Decoder_Decode(handle, (uint8_t *)IIBuf,
                                    IILen, (uint8_t *)buffOut,
                                    outSize, &nWidth, &nHeight);
if (iTemp == 0) {
WEBRTC_TRACE(kTraceInfo, kTraceVideoRenderer, -1, "解码成功,宽度:%d高度:%d,解码数据长度:%d.", nWidth, nHeight, iTemp);
[self.glView setVideoSize:nWidth height:nHeight];
[self.glView displayYUV420pData:buffOut
width:nWidth
height:nHeight];
p_opengl_channel->DeliverFrame(nWidth, nHeight);
WEBRTC_TRACE(kTraceInfo, kTraceVideoRenderer, -1, "解码失败.");
IILen = 0;
NalBuf[0]=0;
NalBuf[1]=0;
NalBuf[2]=0;
NalBuf[3]=1;
NalBufUsed=4;
} while (bytesRead > 0);
fclose(_imgFileHandle);
pMyH264->X264Decoder_UnInit(handle);
free(SockBuf);
free(NalBuf);
delete pMyH264;
env->ReleaseStringUTFChars(filepath, filename);
render_opengles20.cpp
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <stdio.h>
#include <stdlib.h>
#include "render_opengles20.h"
const char RenderOpenGles20::g_indices[] = { 0, 3, 2, 0, 2, 1 };
const char RenderOpenGles20::g_vertextShader[] = {
    "attribute vec4 aPosition;\n"
    "attribute vec2 aTextureCoord;\n"
    "varying vec2 vTextureCoord;\n"
    "void main() {\n"
    "  gl_Position = aPosition;\n"
    "  vTextureCoord = aTextureCoord;\n"
    "}\n" };

// The fragment shader.
// Do YUV to RGB565 conversion.
const char RenderOpenGles20::g_fragmentShader[] = {
    "precision mediump float;\n"
    "uniform sampler2D Ytex;\n"
    "uniform sampler2D Utex,Vtex;\n"
    "varying vec2 vTextureCoord;\n"
    "void main(void) {\n"
    "  float nx,ny,r,g,b,y,u,v;\n"
    "  mediump vec4 txl,ux,vx;"
    "  nx=vTextureCoord[0];\n"
    "  ny=vTextureCoord[1];\n"
    "  y=texture2D(Ytex,vec2(nx,ny)).r;\n"
    "  u=texture2D(Utex,vec2(nx,ny)).r;\n"
    "  v=texture2D(Vtex,vec2(nx,ny)).r;\n"
    "  y=1.1643*(y-0.0625);\n"
    "  u=u-0.5;\n"
    "  v=v-0.5;\n"
    "  r=y+1.5958*v;\n"
    "  g=y-0.39173*u-0.81290*v;\n"
    "  b=y+2.017*u;\n"
    "  gl_FragColor=vec4(r,g,b,1.0);\n"
    "}\n" };
RenderOpenGles20::RenderOpenGles20() :
_textureWidth(-1),
_textureHeight(-1)
WEBRTC_TRACE(kTraceDebug, kTraceVideoRenderer, _id, "%s: id %d",
__FUNCTION__, (int) _id);
const GLfloat vertices[20] = {
// X, Y, Z, U, V
-1, -1, 0, 1, 0, // Bottom Left
1, -1, 0, 0, 0, //Bottom Right
1, 1, 0, 0, 1, //Top Right
-1, 1, 0, 1, 1 }; //Top Left
memcpy(_vertices, vertices, sizeof(_vertices));
RenderOpenGles20::~RenderOpenGles20() {
glDeleteTextures(3, _textureIds);
int32_t RenderOpenGles20::Setup(int32_t width, int32_t height) {
WEBRTC_TRACE(kTraceDebug, kTraceVideoRenderer, _id,
"%s: width %d, height %d", __FUNCTION__, (int) width,
(int) height);
printGLString("Version", GL_VERSION);
printGLString("Vendor", GL_VENDOR);
printGLString("Renderer", GL_RENDERER);
printGLString("Extensions", GL_EXTENSIONS);
int maxTextureImageUnits[2];
int maxTextureSize[2];
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, maxTextureImageUnits);
glGetIntegerv(GL_MAX_TEXTURE_SIZE, maxTextureSize);
WEBRTC_TRACE(kTraceDebug, kTraceVideoRenderer, _id,
"%s: number of textures %d, size %d", __FUNCTION__,
(int) maxTextureImageUnits[0], (int) maxTextureSize[0]);
_program = createProgram(g_vertextShader, g_fragmentShader);
if (!_program) {
WEBRTC_TRACE(kTraceError, kTraceVideoRenderer, _id,
"%s: Could not create program", __FUNCTION__);
return -1;
int positionHandle = glGetAttribLocation(_program, "aPosition");
checkGlError("glGetAttribLocation aPosition");
if (positionHandle == -1) {
WEBRTC_TRACE(kTraceError, kTraceVideoRenderer, _id,
"%s: Could not get aPosition handle", __FUNCTION__);
return -1;
int textureHandle = glGetAttribLocation(_program, "aTextureCoord");
checkGlError("glGetAttribLocation aTextureCoord");
if (textureHandle == -1) {
WEBRTC_TRACE(kTraceError, kTraceVideoRenderer, _id,
"%s: Could not get aTextureCoord handle", __FUNCTION__);
return -1;
// set the vertices array in the shader
// _vertices contains 4 vertices with 5 coordinates.
// 3 for (xyz) for the vertices and 2 for the texture
glVertexAttribPointer(positionHandle, 3, GL_FLOAT, false,
5 * sizeof(GLfloat), _vertices);
checkGlError("glVertexAttribPointer aPosition");
glEnableVertexAttribArray(positionHandle);
checkGlError("glEnableVertexAttribArray positionHandle");
// set the texture coordinate array in the shader
// _vertices contains 4 vertices with 5 coordinates.
// 3 for (xyz) for the vertices and 2 for the texture
glVertexAttribPointer(textureHandle, 2, GL_FLOAT, false, 5
* sizeof(GLfloat), &_vertices[3]);
checkGlError("glVertexAttribPointer maTextureHandle");
glEnableVertexAttribArray(textureHandle);
checkGlError("glEnableVertexAttribArray textureHandle");
glUseProgram(_program);
int i = glGetUniformLocation(_program, "Ytex");
checkGlError("glGetUniformLocation");
glUniform1i(i, 0); /* Bind Ytex to texture unit 0 */
checkGlError("glUniform1i Ytex");
i = glGetUniformLocation(_program, "Utex");
checkGlError("glGetUniformLocation Utex");
glUniform1i(i, 1); /* Bind Utex to texture unit 1 */
checkGlError("glUniform1i Utex");
i = glGetUniformLocation(_program, "Vtex");
checkGlError("glGetUniformLocation");
glUniform1i(i, 2); /* Bind Vtex to texture unit 2 */
checkGlError("glUniform1i");
glViewport(0, 0, width, height);
checkGlError("glViewport");
// SetCoordinates
// Sets the coordinates where the stream shall be rendered.
// Values must be between 0 and 1.
int32_t RenderOpenGles20::SetCoordinates(int32_t zOrder,
const float left,
const float top,
const float right,
const float bottom) {
    if ((top > 1 || top < 0) || (right > 1 || right < 0) ||
        (bottom > 1 || bottom < 0) || (left > 1 || left < 0)) {
WEBRTC_TRACE(kTraceError, kTraceVideoRenderer, _id,
"%s: Wrong coordinates", __FUNCTION__);
return -1;
X, Y, Z, U, V
// -1, -1, 0, 0, 1, // Bottom Left
1, -1, 0, 1, 1, //Bottom Right
1, 0, 1, 0, //Top Right
1, 0, 0, 0
//Top Left
// Bottom Left
_vertices[0] = (left * 2) - 1;
_vertices[1] = -1 * (2 * bottom) + 1;
_vertices[2] = zO
//Bottom Right
_vertices[5] = (right * 2) - 1;
_vertices[6] = -1 * (2 * bottom) + 1;
_vertices[7] = zO
//Top Right
_vertices[10] = (right * 2) - 1;
_vertices[11] = -1 * (2 * top) + 1;
_vertices[12] = zO
//Top Left
_vertices[15] = (left * 2) - 1;
_vertices[16] = -1 * (2 * top) + 1;
_vertices[17] = zO
GLuint RenderOpenGles20::loadShader(GLenum shaderType, const char* pSource)
GLuint shader = glCreateShader(shaderType);
if (shader) {
glShaderSource(shader, 1, &pSource, NULL);
glCompileShader(shader);
GLint compiled = 0;
glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
if (!compiled) {
GLint infoLen = 0;
glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLen);
if (infoLen) {
char* buf = (char*) malloc(infoLen);
if (buf) {
glGetShaderInfoLog(shader, infoLen, NULL, buf);
WEBRTC_TRACE(kTraceError, kTraceVideoRenderer, _id,
"%s: Could not compile shader %d: %s",
__FUNCTION__, shaderType, buf);
free(buf);
glDeleteShader(shader);
shader = 0;
GLuint RenderOpenGles20::createProgram(const char* pVertexSource,
const char* pFragmentSource) {
GLuint vertexShader = loadShader(GL_VERTEX_SHADER, pVertexSource);
if (!vertexShader) {
GLuint pixelShader = loadShader(GL_FRAGMENT_SHADER, pFragmentSource);
if (!pixelShader) {
GLuint program = glCreateProgram();
if (program) {
glAttachShader(program, vertexShader);
checkGlError("glAttachShader");
glAttachShader(program, pixelShader);
checkGlError("glAttachShader");
glLinkProgram(program);
GLint linkStatus = GL_FALSE;
glGetProgramiv(program, GL_LINK_STATUS, &linkStatus);
if (linkStatus != GL_TRUE) {
GLint bufLength = 0;
glGetProgramiv(program, GL_INFO_LOG_LENGTH, &bufLength);
if (bufLength) {
char* buf = (char*) malloc(bufLength);
if (buf) {
glGetProgramInfoLog(program, bufLength, NULL, buf);
WEBRTC_TRACE(kTraceError, kTraceVideoRenderer, _id,
"%s: Could not link program: %s",
__FUNCTION__, buf);
free(buf);
glDeleteProgram(program);
program = 0;
void RenderOpenGles20::printGLString(const char *name, GLenum s) {
const char *v = (const char *) glGetString(s);
WEBRTC_TRACE(kTraceDebug, kTraceVideoRenderer, _id, "GL %s = %s\n",
void RenderOpenGles20::checkGlError(const char* op) {
#ifdef ANDROID_LOG
for (GLint error = glGetError(); error; error = glGetError()) {
WEBRTC_TRACE(kTraceError, kTraceVideoRenderer, _id,
"after %s() glError (0x%x)\n", op, error);
static void InitializeTexture(int name, int id, int width, int height) {
glActiveTexture(name);
glBindTexture(GL_TEXTURE_2D, id);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0,
GL_LUMINANCE, GL_UNSIGNED_BYTE, NULL);
// Uploads a plane of pixel data, accounting for stride != width*bpp.
static void GlTexSubImage2D(GLsizei width, GLsizei height, int stride,
const uint8_t* plane) {
    if (stride == width) {
        // We can upload the entire plane in a single GL call.
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_LUMINANCE,
                        GL_UNSIGNED_BYTE,
                        static_cast<const GLvoid*>(plane));
    } else {
        // Since GLES2 doesn't have GL_UNPACK_ROW_LENGTH and Android doesn't
        // have GL_EXT_unpack_subimage we have to upload a row at a time.
        for (int row = 0; row < height; ++row) {
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, row, width, 1, GL_LUMINANCE,
                            GL_UNSIGNED_BYTE,
                            static_cast<const GLvoid*>(plane + (row * stride)));
        }
    }
}
int32_t RenderOpenGles20::Render(void * data, int32_t widht, int32_t height)
WEBRTC_TRACE(kTraceDebug, kTraceVideoRenderer, _id, "%s: id %d",
__FUNCTION__, (int) _id);
glUseProgram(_program);
checkGlError("glUseProgram");
if (_textureWidth != (GLsizei) widht || _textureHeight != (GLsizei) height) {
SetupTextures(widht, height);
UpdateTextures(data, widht, height);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, g_indices);
checkGlError("glDrawArrays");
void RenderOpenGles20::SetupTextures(int32_t width, int32_t height)
glDeleteTextures(3, _textureIds);
    glGenTextures(3, _textureIds); // Generate the Y, U and V texture
InitializeTexture(GL_TEXTURE0, _textureIds[0], width, height);
InitializeTexture(GL_TEXTURE1, _textureIds[1], width / 2, height / 2);
InitializeTexture(GL_TEXTURE2, _textureIds[2], width / 2, height / 2);
checkGlError("SetupTextures");
_textureWidth =
_textureHeight =
void RenderOpenGles20::UpdateTextures(void* data, int32_t widht, int32_t height)
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, _textureIds[0]);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, widht, height, GL_LUMINANCE, GL_UNSIGNED_BYTE, data);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, _textureIds[1]);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, widht / 2, height / 2, GL_LUMINANCE,
GL_UNSIGNED_BYTE, (char *)data + widht * height);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, _textureIds[2]);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, widht / 2, height / 2, GL_LUMINANCE,
GL_UNSIGNED_BYTE, (char *)data + widht * height * 5 / 4);
checkGlError("UpdateTextures");
H264Decoder.cpp (the decoding code; it was posted in an earlier blog post, so it is not repeated here)
Step 3: build the JNI code and generate the .so file
Step 4: copy the generated .so into the Android project. Here is my Activity code:
package com.example.filltriangle;
import java.io.IOException;
import java.io.InputStream;
import hzcw.opengl.ViERenderer;
import android.app.Activity;
import android.os.Bundle;
import android.os.Environment;
import android.util.Log;
import android.view.SurfaceView;
public class FillTriangle extends Activity {
    private SurfaceView mView = null;
    static {
        System.loadLibrary("MICloudPub");
    }
@Override protected void onCreate(Bundle icicle) {
super.onCreate(icicle);
mView = ViERenderer.CreateRenderer(this, true);
if (mView == null) {
Log.i("test", "mView is null");
setContentView(mView);
GL2JNILib.init(mView);
new MyThread().start();
public class MyThread extends Thread {
public void run() {
GL2JNILib.step("/sdcard/test.264");
The demo simply reads a video file, decodes it and displays it on screen. To make it easy to verify, here is a screenshot of it running, in case anyone doubts the project is real.