Why does "Can not Load Library" appear every time I boot? (Baidu Zhidao)

Accepted answer:
krnln appears to be the support library that cannot be loaded; the usual cause is that your antivirus software mistakenly deleted the files krnln.fnr and shell.fne. [No need to reinstall the system.] Some EXE programs written in the E language (易语言) occasionally show "failed to load kernel library" when run, so here is the fix. Another possible cause is that the user has no write permission on C:\Documents and Settings\Administrator\Local Settings\Temp\, because E-language programs use that directory to temporarily store the support library file krnln.fnr at run time. Fix: download the attachment, extract it, and put the files into the corresponding directory, e.g. C:\Documents and Settings\Administrator\Local Settings\Temp\E_4 (if there is an "E_4" folder under Temp). If you cannot see these folders, choose Tools → Folder Options → View and untick "Hide protected operating system files (Recommended)".

What does "Can not load the RKCS#11 library" at computer startup mean? (Baidu Zhidao)
Meaning: 1. The public-key cryptography module could not be loaded. 2. This usually relates to hardware security and smart cards. Fix: 1. Reset the browser. 2. Install system patches.
It is probably from a startup item; find that program and reinstall it.
One other answer:
It should be PKCS#11; you probably typed it wrong. Here is some reference material, see if it helps: http://zhidao.baidu.com/question/…

Spark: Could not load native gpl library
6 messages
Spark: Could not load native gpl library
I had the following error when trying to run a very simple Spark job (which uses logistic regression with SGD in MLlib):
ERROR GPLNativeCodeLoader: Could not load native gpl library
java.lang.UnsatisfiedLinkError: no gplcompression in java.library.path
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1738)
    at java.lang.Runtime.loadLibrary0(Runtime.java:823)
    at java.lang.System.loadLibrary(System.java:1028)
    at com.hadoop.compression.lzo.GPLNativeCodeLoader.<clinit>(GPLNativeCodeLoader.java:32)
    at com.hadoop.compression.lzo.LzoCodec.<clinit>(LzoCodec.java:71)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:247)
    at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:1659)
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1624)
    at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:128)
    at org.apache.hadoop.io.compress.CompressionCodecFactory.<init>(CompressionCodecFactory.java:175)
    at org.apache.hadoop.mapred.TextInputFormat.configure(TextInputFormat.java:45)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:106)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
    at org.apache.spark.rdd.HadoopRDD.getInputFormat(HadoopRDD.scala:155)
    at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:187)
    at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:181)
    at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:93)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
    at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
    at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
    at org.apache.spark.rdd.FilteredRDD.compute(FilteredRDD.scala:34)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
    at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
    at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
    at org.apache.spark.rdd.FilteredRDD.compute(FilteredRDD.scala:34)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
    at org.apache.spark.scheduler.Task.run(Task.scala:51)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:187)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
14/08/06 20:32:11 ERROR LzoCodec: Cannot load native-lzo without native-hadoop
This is the command I used to submit the job:
~/spark/spark-1.0.0-bin-hadoop2/bin/spark-submit \
--class com.jk.sparktest.Test \
--master yarn-cluster \
--num-executors 40 \
~/sparktest-0.0.1-SNAPSHOT-jar-with-dependencies.jar
The actual java command is:
/usr/java/latest/bin/java -cp /apache/hadoop/share/hadoop/common/hadoop-common-2.2.0.2.0.6.0-61.jar:/apache/hadoop/lib/hadoop-lzo-0.6.0.jar::/home/jilei/spark/spark-1.0.0-bin-hadoop2/conf:/home/jilei/spark/spark-1.0.0-bin-hadoop2/lib/spark-assembly-1.0.0-hadoop2.2.0.jar:/home/jilei/spark/spark-1.0.0-bin-hadoop2/lib/datanucleus-rdbms-3.2.1.jar:/home/jilei/spark/spark-1.0.0-bin-hadoop2/lib/datanucleus-core-3.2.2.jar:/home/jilei/spark/spark-1.0.0-bin-hadoop2/lib/datanucleus-api-jdo-3.2.1.jar:/apache/hadoop/conf:/apache/hadoop/conf \
-XX:MaxPermSize=128m \
-Djava.library.path= \
-Xms512m -Xmx512m org.apache.spark.deploy.SparkSubmit \
--class com.jk.sparktest.Test \
--master yarn-cluster \
--num-executors 40 \
~/sparktest-0.0.1-SNAPSHOT-jar-with-dependencies.jar
It seems -Djava.library.path is not set. I also tried running the java command above with the native lib directory supplied in java.library.path, but still got the same errors.
Any idea on what's wrong? Thanks.
Re: Spark: Could not load native gpl library
Is the GPL library only available on the driver node? If that is the case, you need to add them to the `--jars` option of spark-submit.
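Something like the following, as a sketch only (the hadoop-lzo jar and native-library paths are guesses taken from the classpath in your java command; note that --jars only ships the jar, the native .so files still have to be visible on every node):

# Sketch: paths are copied from the classpath shown in the original java command
~/spark/spark-1.0.0-bin-hadoop2/bin/spark-submit \
  --class com.jk.sparktest.Test \
  --master yarn-cluster \
  --num-executors 40 \
  --jars /apache/hadoop/lib/hadoop-lzo-0.6.0.jar \
  --driver-library-path /apache/hadoop/lib/native/ \
  ~/sparktest-0.0.1-SNAPSHOT-jar-with-dependencies.jar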
Re: Spark: Could not load native gpl library
Hi Jikai,

It looks like you're trying to run a Spark job on data that's stored in HDFS in .lzo format.
Spark can handle this (I do it all the time), but you need to configure your Spark installation to know about the .lzo format.
There are two parts to the hadoop lzo library -- the first is the jar (hadoop-lzo.jar) and the second is the native library (libgplcompression.{a,so,la} and liblzo2.{a,so,la}).
You need the jar on the classpath across your cluster, but the native libraries have to be exposed as well.
In Spark 1.0.1 I modify entries in spark-env.sh: set SPARK_LIBRARY_PATH to include the path to the native library directory (e.g. /path/to/hadoop/lib/native/Linux-amd64-64) and SPARK_CLASSPATH to include the hadoop-lzo jar.
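For example, a minimal sketch with placeholder paths (use whatever directory actually contains libgplcompression.* and wherever your hadoop-lzo jar lives):

# spark-env.sh (placeholder paths, adjust to your Hadoop install)
export SPARK_LIBRARY_PATH=$SPARK_LIBRARY_PATH:/path/to/hadoop/lib/native/Linux-amd64-64
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/path/to/hadoop/lib/hadoop-lzo.jar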
Hope that helps,
Andrew
Re: Spark: Could not load native gpl library
In reply to Xiangrui Meng:
Thanks. I tried this option, but still got the same error.
Re: Spark: Could not load native gpl library
In reply to Andrew Ash:
Thanks Andrew. Actually my job did not use any data in .lzo format. Here is the program itself:
import org.apache.spark._
import org.apache.spark.mllib.util.MLUtils
import org.apache.spark.mllib.classification.LogisticRegressionWithSGD

object Test {
  def main(args: Array[String]) {
    val sparkConf = new SparkConf().setAppName("SparkMLTest")
    val sc = new SparkContext(sparkConf)
    // Load LIBSVM-format training data from HDFS
    val training = MLUtils.loadLibSVMFile(sc, "hdfs://url:8020/user/jilei/sparktesttraining_libsvmfmt_10k.txt")
    // Train a logistic regression model with SGD
    val model = LogisticRegressionWithSGD.train(training, numIterations = 20)
  }
}
I copied this from a GitHub gist and wanted to give it a try. The file is a libsvm-format file and is in HDFS (I removed the actual HDFS url here in the code).
And in the spark-env.sh file, I set these environment variables:
export SPARK_LIBRARY_PATH=/apache/hadoop/lib/native/
export SPARK_CLASSPATH=/apache/hadoop/share/hadoop/common/hadoop-common-2.2.0.2.0.6.0-61.jar:/apache/hadoop/lib/hadoop-lzo-0.6.0.jar
Here is the content of the /apache/hadoop/lib/native/ folder:
ls /apache/hadoop/lib/native/
libgplcompression.a   libgplcompression.so    libgplcompression.so.0.0.0  libhadooppipes.a  libhadoop.so.1.0.0  libhdfs.a   libhdfs.so.0.0.0  libsnappy.so.1
libgplcompression.la  libgplcompression.so.0  libhadoop.a                 libhadoop.so      libhadooputils.a    libhdfs.so  libsnappy.so      libsnappy.so.1.1.4
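One thing I am not sure about is whether these spark-env.sh variables reach the YARN executors, so I am also considering the property-based equivalents in conf/spark-defaults.conf. Presumably (untested sketch, same paths as the exports above) that would look like:

# spark-defaults.conf (sketch, untested; paths as above)
spark.executor.extraClassPath    /apache/hadoop/lib/hadoop-lzo-0.6.0.jar
spark.executor.extraLibraryPath  /apache/hadoop/lib/native/
spark.driver.extraLibraryPath    /apache/hadoop/lib/native/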
Re: Spark: Could not load native gpl library
Hi Jikai,

The reason I ask is because your stack trace has this section in it:

    at com.hadoop.compression.lzo.GPLNativeCodeLoader.<clinit>(GPLNativeCodeLoader.java:32)
    at com.hadoop.compression.lzo.LzoCodec.<clinit>(LzoCodec.java:71)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:247)
    at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:1659)
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1624)
    at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:128)
    at org.apache.hadoop.io.compress.CompressionCodecFactory.<init>(CompressionCodecFactory.java:175)

Maybe you have the Lzo codec defined in the io.compression.codecs setting in your core-site.xml?
In the short run you could disable it.
In the long run, I wonder if this is an issue with YARN not propagating the setting through to the executors.
Have you tried in other cluster deployment modes?
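Two quick checks along those lines (the paths are guesses based on the classpath in your earlier mail):

# Does the cluster config list the Lzo codec?
grep -A 2 'io.compression.codecs' /apache/hadoop/conf/core-site.xml

# Run the same job in local mode, where only this machine's spark-env.sh matters
~/spark/spark-1.0.0-bin-hadoop2/bin/spark-submit \
  --class com.jk.sparktest.Test \
  --master 'local[4]' \
  ~/sparktest-0.0.1-SNAPSHOT-jar-with-dependencies.jar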