Exception when running hadoop fs -ls igfs://igfs@localhost:10500/

ydjadi

Exception when running hadoop fs -ls igfs://igfs@localhost:10500/

I am getting the exception below when running hadoop fs -ls igfs://igfs@127.0.0.1:10500/

OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
[18:37:40]    __________  ________________
[18:37:40]   /  _/ ___/ |/ /  _/_  __/ __/
[18:37:40]  _/ // (7 7    // /  / / / _/
[18:37:40] /___/\___/_/|_/___/ /_/ /___/
[18:37:40]
[18:37:40] ver. 1.9.0#20170329-sha1:c22a2d48
[18:37:40] 2017 Copyright(C) Apache Software Foundation
[18:37:40]
[18:37:40] Ignite documentation: http://ignite.apache.org
[18:37:40]
[18:37:40] Quiet mode.
[18:37:40]   ^-- Logging to file '/var/log/ignite-hadoop/ignite-922096fc.log'
[18:37:40]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false or "-v" to ignite.{sh|bat}
[18:37:40]
[18:37:40] OS: Linux 3.10.0-514.10.2.el7.x86_64 amd64
[18:37:40] VM information: OpenJDK Runtime Environment 1.8.0_121-b13 Oracle Corporation OpenJDK 64-Bit Server VM 25.121-b13
[18:37:41] Configured plugins:
[18:37:41]   ^-- None
[18:37:41]
[18:37:41] Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
[18:37:42] Security status [authentication=off, tls/ssl=off]
[18:37:47] HADOOP_HOME is set to /usr
[18:37:47] Resolved Hadoop classpath locations: /usr/lib/hadoop, /usr/lib/hadoop-hdfs, /usr/lib/hadoop-mapreduce
[18:37:57] Performance suggestions for grid  (fix if possible)
[18:37:57] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[18:37:57]   ^-- Disable grid events (remove 'includeEventTypes' from configuration)
[18:37:57]   ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM options)
[18:37:57]   ^-- Set max direct memory size if getting 'OOME: Direct buffer memory' (add '-XX:MaxDirectMemorySize=<size>[g|G|m|M|k|K]' to JVM options)
[18:37:57]   ^-- Disable processing of calls to System.gc() (add '-XX:+DisableExplicitGC' to JVM options)
[18:37:57]   ^-- Speed up flushing of dirty pages by OS (alter vm.dirty_expire_centisecs parameter by setting to 500)
[18:37:57]   ^-- Reduce pages swapping ratio (set vm.swappiness=10)
[18:37:57] Refer to this page for more performance suggestions: https://apacheignite.readme.io/docs/jvm-and-system-tuning
[18:37:57]
[18:37:57] To start Console Management & Monitoring run ignitevisorcmd.{sh|bat}
[18:37:57]
[18:37:57] Ignite node started OK (id=922096fc)
[18:37:57] Topology snapshot [ver=1, servers=1, clients=0, CPUs=1, heap=1.0GB]
[18:39:00,688][ERROR][igfs-igfs-ipc-#42%null%][IgfsImpl] File info operation in DUAL mode failed [path=/]
class org.apache.ignite.IgniteException: null
        at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap.getOrCreate(HadoopLazyConcurrentMap.java:100)
        at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopCachingFileSystemFactoryDelegate.getWithMappedName(HadoopCachingFileSystemFactoryDelegate.java:53)
        at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopBasicFileSystemFactoryDelegate.get(HadoopBasicFileSystemFactoryDelegate.java:75)
        at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopBasicFileSystemFactoryDelegate.get(HadoopBasicFileSystemFactoryDelegate.java:43)
        at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopIgfsSecondaryFileSystemDelegateImpl.fileSystemForUser(HadoopIgfsSecondaryFileSystemDelegateImpl.java:459)
        at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopIgfsSecondaryFileSystemDelegateImpl.info(HadoopIgfsSecondaryFileSystemDelegateImpl.java:279)
        at org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem.info(IgniteHadoopIgfsSecondaryFileSystem.java:234)
        at org.apache.ignite.internal.processors.igfs.IgfsImpl.resolveFileInfo(IgfsImpl.java:1599)
        at org.apache.ignite.internal.processors.igfs.IgfsImpl.access$800(IgfsImpl.java:111)
        at org.apache.ignite.internal.processors.igfs.IgfsImpl$6.call(IgfsImpl.java:553)
        at org.apache.ignite.internal.processors.igfs.IgfsImpl$6.call(IgfsImpl.java:546)
        at org.apache.ignite.internal.processors.igfs.IgfsImpl.safeOp(IgfsImpl.java:1755)
        at org.apache.ignite.internal.processors.igfs.IgfsImpl.info(IgfsImpl.java:546)
        at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler$2.apply(IgfsIpcHandler.java:329)
        at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler$2.apply(IgfsIpcHandler.java:320)
        at org.apache.ignite.igfs.IgfsUserContext.doAs(IgfsUserContext.java:54)
        at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler.processPathControlRequest(IgfsIpcHandler.java:320)
        at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler.execute(IgfsIpcHandler.java:241)
        at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler.access$000(IgfsIpcHandler.java:57)
        at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler$1.run(IgfsIpcHandler.java:167)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: class org.apache.ignite.IgniteCheckedException: null
        at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7239)
        at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:170)
        at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:119)
        at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap$ValueWrapper.getValue(HadoopLazyConcurrentMap.java:191)
        at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap.getOrCreate(HadoopLazyConcurrentMap.java:93)
        ... 22 more
Caused by: java.lang.IllegalArgumentException
        at org.objectweb.asm.ClassReader.<init>(Unknown Source)
        at org.objectweb.asm.ClassReader.<init>(Unknown Source)
        at org.objectweb.asm.ClassReader.<init>(Unknown Source)
        at org.apache.ignite.internal.processors.hadoop.HadoopHelperImpl.loadReplace(HadoopHelperImpl.java:93)
        at org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.loadReplace(HadoopClassLoader.java:331)
        at org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.loadClass(HadoopClassLoader.java:290)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at org.apache.hadoop.tracing.SpanReceiverHost.get(SpanReceiverHost.java:79)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:634)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:619)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
        at org.apache.hadoop.fs.FileSystem$1.run(FileSystem.java:162)
        at org.apache.hadoop.fs.FileSystem$1.run(FileSystem.java:159)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:159)
        at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopBasicFileSystemFactoryDelegate.create(HadoopBasicFileSystemFactoryDelegate.java:117)
        at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopBasicFileSystemFactoryDelegate.getWithMappedName(HadoopBasicFileSystemFactoryDelegate.java:95)
        at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopCachingFileSystemFactoryDelegate.access$001(HadoopCachingFileSystemFactoryDelegate.java:32)
        at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopCachingFileSystemFactoryDelegate$1.createValue(HadoopCachingFileSystemFactoryDelegate.java:37)
        at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopCachingFileSystemFactoryDelegate$1.createValue(HadoopCachingFileSystemFactoryDelegate.java:35)
        at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap$ValueWrapper.init(HadoopLazyConcurrentMap.java:173)
        at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap$ValueWrapper.access$100(HadoopLazyConcurrentMap.java:154)
        at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap.getOrCreate(HadoopLazyConcurrentMap.java:82)
        ... 22 more
ydjadi

Re: Exception when running hadoop fs -ls igfs://igfs@localhost:10500/

My config file: default-config.xml

Ivan Veselovsky

Re: Exception when running hadoop fs -ls igfs://igfs@localhost:10500/

It looks like the exception you're observing is caused by the following code:
----
    public ClassReader(final byte[] b, final int off, final int len) {
        this.b = b;
        // checks the class version
        if (readShort(off + 6) > Opcodes.V1_7) {
            throw new IllegalArgumentException();
        }
----

This means the class parser encountered a class file compiled with a bytecode version greater than 1.7.
Can you please look for such classes on the Ignite classpath and try to rebuild them with a 1.7 "target" version?
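
For what it's worth, the number being compared there is the standard class-file major version (bytes 6-7 of the file, right after the 0xCAFEBABE magic and the 2-byte minor version; 51 = Java 7, 52 = Java 8). Below is a minimal sketch, not Ignite code and with an illustrative class name, that prints it for any .class file passed as an argument:
----
// Prints the bytecode (class-file) version of a .class file given as args[0].
// Layout: magic (4 bytes), minor version (2 bytes), major version (2 bytes).
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class ClassVersionCheck {
    public static void main(String[] args) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
            if (in.readInt() != 0xCAFEBABE)
                throw new IOException("Not a class file: " + args[0]);

            int minor = in.readUnsignedShort();
            int major = in.readUnsignedShort();

            System.out.println(args[0] + ": major=" + major + ", minor=" + minor);
        }
    }
}
----
Running it against a suspect class (e.g. java ClassVersionCheck SomeClass.class) shows whether its major version exceeds 51, i.e. Java 7, which is what the quoted ClassReader check rejects.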
Ivan Veselovsky

Re: Exception when running hadoop fs -ls igfs://igfs@localhost:10500/

More precisely, the root-cause stack trace suggests that the replacement class

   /** Hadoop class name: ShutdownHookManager replacement. */
    public static final String CLS_SHUTDOWN_HOOK_MANAGER_REPLACE =
        "org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopShutdownHookManager";
 
has a bytecode version greater than 1.7.

After a regular Ignite build, the following class file is produced:
-----------------
$ javap -v ./org/apache/ignite/internal/processors/hadoop/impl/v2/HadoopShutdownHookManager.class
Classfile /home/ivan/_git/apache-ignite/modules/hadoop/org/apache/ignite/internal/processors/hadoop/impl/v2/HadoopShutdownHookManager.class
  Last modified Apr 17, 2017; size 1802 bytes
  MD5 checksum 0deafdb744c37001448585ff0f7e6c12
  Compiled from "HadoopShutdownHookManager.java"
public class org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopShutdownHookManager
  SourceFile: "HadoopShutdownHookManager.java"
  minor version: 0
  major version: 51
  flags: ACC_PUBLIC, ACC_SUPER
----------------

Major version 51 corresponds to Java class file version 1.7.
mehdi sey

Re: Exception when running hadoop fs -ls igfs://igfs@localhost:10500/

In reply to this post by ydjadi
Hi. I have the same problem as you. I followed your post, but my problem is still not solved. I hit this error when I run the WordCount example in Hadoop on Ignite (I use IGFS as a cache for HDFS). When I execute the WordCount example, I get the following error:
[23:11:13]    __________  ________________
[23:11:13]   /  _/ ___/ |/ /  _/_  __/ __/
[23:11:13]  _/ // (7 7    // /  / / / _/  
[23:11:13] /___/\___/_/|_/___/ /_/ /___/  
[23:11:13]
[23:11:13] ver. 2.6.0#20180710-sha1:669feacc
[23:11:13] 2018 Copyright(C) Apache Software Foundation
[23:11:13]
[23:11:13] Ignite documentation: http://ignite.apache.org
[23:11:13]
[23:11:13] Quiet mode.
[23:11:13]   ^-- Logging to file '/usr/local/apache-ignite-fabric-2.6.0-bin/work/log/ignite-66905db1.log'
[23:11:13]   ^-- Logging by 'Log4JLogger [quiet=true, config=/usr/local/apache-ignite-fabric-2.6.0-bin/config/ignite-log4j.xml]'
[23:11:13]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false or "-v" to ignite.{sh|bat}
[23:11:13]
[23:11:13] OS: Linux 4.15.0-46-generic amd64
[23:11:13] VM information: Java(TM) SE Runtime Environment 1.8.0_192-ea-b04 Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.192-b04
[23:11:14] Configured plugins:
[23:11:14]   ^-- Ignite Native I/O Plugin [Direct I/O]
[23:11:14]   ^-- Copyright(C) Apache Software Foundation
[23:11:14]
[23:11:14] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0]]
[23:11:14] Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
[23:11:14] Security status [authentication=off, tls/ssl=off]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/slf4j-nop-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/htrace%20dependency/slf4j-nop-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/ignite-rest-http/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/ignite-yarn/ignite-yarn-2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/ignite-zookeeper/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.helpers.NOPLoggerFactory]
[23:11:16] HADOOP_HOME is set to /usr/local/hadoop
[23:11:16] Resolved Hadoop classpath locations: /usr/local/hadoop/share/hadoop/common, /usr/local/hadoop/share/hadoop/hdfs, /usr/local/hadoop/share/hadoop/mapreduce
[23:11:18] Performance suggestions for grid  (fix if possible)
[23:11:18] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[23:11:18]   ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM options)
[23:11:18]   ^-- Set max direct memory size if getting 'OOME: Direct buffer memory' (add '-XX:MaxDirectMemorySize=<size>[g|G|m|M|k|K]' to JVM options)
[23:11:18]   ^-- Disable processing of calls to System.gc() (add '-XX:+DisableExplicitGC' to JVM options)
[23:11:18]   ^-- Enable ATOMIC mode if not using transactions (set 'atomicityMode' to ATOMIC)
[23:11:18]   ^-- Disable fully synchronous writes (set 'writeSynchronizationMode' to PRIMARY_SYNC or FULL_ASYNC)
[23:11:18] Refer to this page for more performance suggestions: https://apacheignite.readme.io/docs/jvm-and-system-tuning
[23:11:18]
[23:11:18] To start Console Management & Monitoring run ignitevisorcmd.{sh|bat}
[23:11:18]
[23:11:18] Ignite node started OK (id=66905db1)
[23:11:18] Topology snapshot [ver=1, servers=1, clients=0, CPUs=8, offheap=1.6GB, heap=1.0GB]
[23:11:18]   ^-- Node [id=66905DB1-732F-40F3-BD65-7CE9E73DB610, clusterState=ACTIVE]
[23:11:18] Data Regions Configured:
[23:11:18]   ^-- default [initSize=256.0 MiB, maxSize=1.6 GiB, persistenceEnabled=false]
[23:11:28] New version is available at ignite.apache.org: 2.7.0
[2019-03-12 23:11:29,119][ERROR][igfs-igfs-ipc-#52][IgfsImpl] File info operation in DUAL mode failed [path=/output]
class org.apache.ignite.IgniteException: For input string: "30s"
        at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap.getOrCreate(HadoopLazyConcurrentMap.java:100)
        at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopCachingFileSystemFactoryDelegate.getWithMappedName(HadoopCachingFileSystemFactoryDelegate.java:53)
        at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopBasicFileSystemFactoryDelegate.get(HadoopBasicFileSystemFactoryDelegate.java:75)
        at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopBasicFileSystemFactoryDelegate.get(HadoopBasicFileSystemFactoryDelegate.java:43)
        at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopIgfsSecondaryFileSystemDelegateImpl.fileSystemForUser(HadoopIgfsSecondaryFileSystemDelegateImpl.java:517)
        at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopIgfsSecondaryFileSystemDelegateImpl.info(HadoopIgfsSecondaryFileSystemDelegateImpl.java:296)
        at org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem.info(IgniteHadoopIgfsSecondaryFileSystem.java:240)
        at org.apache.ignite.internal.processors.igfs.IgfsImpl.resolveFileInfo(IgfsImpl.java:1600)
        at org.apache.ignite.internal.processors.igfs.IgfsImpl.access$800(IgfsImpl.java:110)
        at org.apache.ignite.internal.processors.igfs.IgfsImpl$6.call(IgfsImpl.java:524)
        at org.apache.ignite.internal.processors.igfs.IgfsImpl$6.call(IgfsImpl.java:517)
        at org.apache.ignite.internal.processors.igfs.IgfsImpl.safeOp(IgfsImpl.java:1756)
        at org.apache.ignite.internal.processors.igfs.IgfsImpl.info(IgfsImpl.java:517)
        at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler$2.apply(IgfsIpcHandler.java:341)
        at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler$2.apply(IgfsIpcHandler.java:332)
        at org.apache.ignite.igfs.IgfsUserContext.doAs(IgfsUserContext.java:54)
        at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler.processPathControlRequest(IgfsIpcHandler.java:332)
        at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler.execute(IgfsIpcHandler.java:241)
        at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler.access$000(IgfsIpcHandler.java:57)
        at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler$1.run(IgfsIpcHandler.java:167)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: class org.apache.ignite.IgniteCheckedException: For input string: "30s"
        at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7307)
        at org.apache.ignite.internal.util.future.GridFutureAdapter.resolve(GridFutureAdapter.java:259)
        at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:171)
        at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
        at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap$ValueWrapper.getValue(HadoopLazyConcurrentMap.java:191)
        at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap.getOrCreate(HadoopLazyConcurrentMap.java:93)
        ... 22 more
Caused by: java.lang.NumberFormatException: For input string: "30s"
        at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
        at java.lang.Long.parseLong(Long.java:589)
        at java.lang.Long.parseLong(Long.java:631)
        at org.apache.hadoop.conf.Configuration.getLong(Configuration.java:1538)
        at org.apache.hadoop.hdfs.DFSClient$Conf.<init>(DFSClient.java:430)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:540)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:524)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:146)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:476)
        at org.apache.hadoop.fs.FileSystem$1.run(FileSystem.java:217)
        at org.apache.hadoop.fs.FileSystem$1.run(FileSystem.java:214)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:214)
        at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopBasicFileSystemFactoryDelegate.create(HadoopBasicFileSystemFactoryDelegate.java:117)
        at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopBasicFileSystemFactoryDelegate.getWithMappedName(HadoopBasicFileSystemFactoryDelegate.java:95)
        at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopCachingFileSystemFactoryDelegate.access$001(HadoopCachingFileSystemFactoryDelegate.java:32)
        at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopCachingFileSystemFactoryDelegate$1.createValue(HadoopCachingFileSystemFactoryDelegate.java:37)
        at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopCachingFileSystemFactoryDelegate$1.createValue(HadoopCachingFileSystemFactoryDelegate.java:35)
        at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap$ValueWrapper.init(HadoopLazyConcurrentMap.java:173)
        at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap$ValueWrapper.access$100(HadoopLazyConcurrentMap.java:154)
        at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap.getOrCreate(HadoopLazyConcurrentMap.java:82)
        ... 22 more
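
For reference, the deepest frame above is Configuration.getLong() choking on the value "30s", i.e. a duration written with a time-unit suffix, which newer Hadoop *-default.xml files use for some timeout settings. A minimal sketch of the difference between the plain and the duration-aware accessors (the property name below is purely illustrative, and it assumes a reasonably recent hadoop-common, 2.8 or later, on the classpath):
----
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;

public class DurationParseSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration(false);

        // Illustrative property; newer Hadoop defaults express some timeouts this way.
        conf.set("some.duration.property", "30s");

        // Duration-aware accessor: understands the "s" suffix and returns 30.
        long secs = conf.getTimeDuration("some.duration.property", 0, TimeUnit.SECONDS);
        System.out.println("getTimeDuration -> " + secs);

        // Plain long accessor: throws java.lang.NumberFormatException: For input string: "30s",
        // the same failure taken by the older DFSClient code path in the trace above.
        conf.getLong("some.duration.property", 0);
    }
}
----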



