IGFS as cache for HDFS runs on Apache Ignite Hadoop Accelerator but not on Apache Ignite 2.6

mehdi sey

I want to run the Hadoop wordcount example over Apache Ignite. I have configured IGFS as a cache for HDFS in Ignite, but after submitting the job via Hadoop for execution on Ignite I got the error below. Thanks in advance to anyone who can help! Note that I can run IGFS as a cache for HDFS over Apache Ignite Hadoop Accelerator version 2.6, but not over plain Apache Ignite 2.6. (A sketch of the IGFS-over-HDFS configuration style I am using appears at the end of this post.)

Using configuration: examples/config/filesystem/example-igfs-hdfs.xml
[00:47:13] __________ ________________
[00:47:13] / _/ ___/ |/ / _/_ __/ __/
[00:47:13] _/ // (7 7 // / / / / _/
[00:47:13] /___/\___/_/|_/___/ /_/ /___/
[00:47:13]
[00:47:13] ver. 2.6.0#20180710-sha1:669feacc
[00:47:13] 2018 Copyright(C) Apache Software Foundation
[00:47:13]
[00:47:13] Ignite documentation: http://ignite.apache.org
[00:47:13]
[00:47:13] Quiet mode.
[00:47:13] ^-- Logging to file '/usr/local/apache-ignite-fabric-2.6.0-bin/work/log/ignite-f3712946.log'
[00:47:13] ^-- Logging by 'Log4JLogger [quiet=true, config=/usr/local/apache-ignite-fabric-2.6.0-bin/config/ignite-log4j.xml]'
[00:47:13] ^-- To see **FULL** console log here add -DIGNITE_QUIET=false or "-v" to ignite.{sh|bat}
[00:47:13]
[00:47:13] OS: Linux 4.15.0-46-generic amd64
[00:47:13] VM information: Java(TM) SE Runtime Environment 1.8.0_192-ea-b04 Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.192-b04
[00:47:13] Configured plugins:
[00:47:13] ^-- Ignite Native I/O Plugin [Direct I/O]
[00:47:13] ^-- Copyright(C) Apache Software Foundation
[00:47:13]
[00:47:13] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0]]
[00:47:22] Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
[00:47:22] Security status [authentication=off, tls/ssl=off]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/slf4j-nop-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/htrace%20dependency/slf4j-nop-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/ignite-rest-http/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/ignite-yarn/ignite-yarn-2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/ignite-zookeeper/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.helpers.NOPLoggerFactory]
[00:47:23] HADOOP_HOME is set to /usr/local/hadoop
[00:47:23] Resolved Hadoop classpath locations: /usr/local/hadoop/share/hadoop/common, /usr/local/hadoop/share/hadoop/hdfs, /usr/local/hadoop/share/hadoop/mapreduce
[00:47:26] Performance suggestions for grid (fix if possible)
[00:47:26] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[00:47:26] ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM options)
[00:47:26] ^-- Set max direct memory size if getting 'OOME: Direct buffer memory' (add '-XX:MaxDirectMemorySize=<size>[g|G|m|M|k|K]' to JVM options)
[00:47:26] ^-- Disable processing of calls to System.gc() (add '-XX:+DisableExplicitGC' to JVM options)
[00:47:26] ^-- Enable ATOMIC mode if not using transactions (set 'atomicityMode' to ATOMIC)
[00:47:26] ^-- Disable fully synchronous writes (set 'writeSynchronizationMode' to PRIMARY_SYNC or FULL_ASYNC)
[00:47:26] Refer to this page for more performance suggestions: https://apacheignite.readme.io/docs/jvm-and-system-tuning
[00:47:26]
[00:47:26] To start Console Management & Monitoring run ignitevisorcmd.{sh|bat}
[00:47:26]
[00:47:26] Ignite node started OK (id=f3712946)
[00:47:26] Topology snapshot [ver=1, servers=1, clients=0, CPUs=8, offheap=1.6GB, heap=1.0GB]
[00:47:26] ^-- Node [id=F3712946-0810-440F-A440-140FE4AB6FA7, clusterState=ACTIVE]
[00:47:26] Data Regions Configured:
[00:47:27] ^-- default [initSize=256.0 MiB, maxSize=1.6 GiB, persistenceEnabled=false]
[00:47:35] New version is available at ignite.apache.org: 2.7.0
[2019-03-13 00:47:46,978][ERROR][igfs-igfs-ipc-#53][IgfsImpl] File info operation in DUAL mode failed [path=/output]
class org.apache.ignite.IgniteException: For input string: "30s"
    at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap.getOrCreate(HadoopLazyConcurrentMap.java:100)
    at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopCachingFileSystemFactoryDelegate.getWithMappedName(HadoopCachingFileSystemFactoryDelegate.java:53)
    at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopBasicFileSystemFactoryDelegate.get(HadoopBasicFileSystemFactoryDelegate.java:75)
    at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopBasicFileSystemFactoryDelegate.get(HadoopBasicFileSystemFactoryDelegate.java:43)
    at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopIgfsSecondaryFileSystemDelegateImpl.fileSystemForUser(HadoopIgfsSecondaryFileSystemDelegateImpl.java:517)
    at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopIgfsSecondaryFileSystemDelegateImpl.info(HadoopIgfsSecondaryFileSystemDelegateImpl.java:296)
    at org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem.info(IgniteHadoopIgfsSecondaryFileSystem.java:240)
    at org.apache.ignite.internal.processors.igfs.IgfsImpl.resolveFileInfo(IgfsImpl.java:1600)
    at org.apache.ignite.internal.processors.igfs.IgfsImpl.access$800(IgfsImpl.java:110)
    at org.apache.ignite.internal.processors.igfs.IgfsImpl$6.call(IgfsImpl.java:524)
    at org.apache.ignite.internal.processors.igfs.IgfsImpl$6.call(IgfsImpl.java:517)
    at org.apache.ignite.internal.processors.igfs.IgfsImpl.safeOp(IgfsImpl.java:1756)
    at org.apache.ignite.internal.processors.igfs.IgfsImpl.info(IgfsImpl.java:517)
    at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler$2.apply(IgfsIpcHandler.java:341)
    at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler$2.apply(IgfsIpcHandler.java:332)
    at org.apache.ignite.igfs.IgfsUserContext.doAs(IgfsUserContext.java:54)
    at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler.processPathControlRequest(IgfsIpcHandler.java:332)
    at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler.execute(IgfsIpcHandler.java:241)
    at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler.access$000(IgfsIpcHandler.java:57)
    at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler$1.run(IgfsIpcHandler.java:167)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: class org.apache.ignite.IgniteCheckedException: For input string: "30s"
    at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7307)
    at org.apache.ignite.internal.util.future.GridFutureAdapter.resolve(GridFutureAdapter.java:259)
    at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:171)
    at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
    at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap$ValueWrapper.getValue(HadoopLazyConcurrentMap.java:191)
    at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap.getOrCreate(HadoopLazyConcurrentMap.java:93)
    ... 22 more
Caused by: java.lang.NumberFormatException: For input string: "30s"
    at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
    at java.lang.Long.parseLong(Long.java:589)
    at java.lang.Long.parseLong(Long.java:631)
    at org.apache.hadoop.conf.Configuration.getLong(Configuration.java:1538)
    at org.apache.hadoop.hdfs.DFSClient$Conf.<init>(DFSClient.java:430)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:540)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:524)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:146)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:476)
    at org.apache.hadoop.fs.FileSystem$1.run(FileSystem.java:217)
    at org.apache.hadoop.fs.FileSystem$1.run(FileSystem.java:214)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:214)
    at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopBasicFileSystemFactoryDelegate.create(HadoopBasicFileSystemFactoryDelegate.java:117)
    at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopBasicFileSystemFactoryDelegate.getWithMappedName(HadoopBasicFileSystemFactoryDelegate.java:95)
    at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopCachingFileSystemFactoryDelegate.access$001(HadoopCachingFileSystemFactoryDelegate.java:32)
    at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopCachingFileSystemFactoryDelegate$1.createValue(HadoopCachingFileSystemFactoryDelegate.java:37)
    at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopCachingFileSystemFactoryDelegate$1.createValue(HadoopCachingFileSystemFactoryDelegate.java:35)
    at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap$ValueWrapper.init(HadoopLazyConcurrentMap.java:173)
    at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap$ValueWrapper.access$100(HadoopLazyConcurrentMap.java:154)
    at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap.getOrCreate(HadoopLazyConcurrentMap.java:82)
    ... 22 more

To run the Hadoop wordcount example, I created a folder named /user/input/ in HDFS, put a text file into it, and ran the wordcount example with the following command:

time hadoop --config /home/mehdi/ignite-conf/ignite-configs-master/igfs-hadoop-fs-cache/ignite_conf jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.0.jar wordcount /user/input/ /output

I used the configuration below to run the Hadoop wordcount example over Ignite: a folder named ignite_conf containing two files, core-site.xml and mapred-site.xml, with the attached content (a sketch of typical contents follows at the end of this post).

Is it necessary to run IGFS as a cache for HDFS only over the Apache Ignite Hadoop Accelerator version, or can we also use plain Apache Ignite? Does anybody know?
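
For reference, below is a minimal sketch of the style of IGFS-over-HDFS node configuration that examples/config/filesystem/example-igfs-hdfs.xml follows. It is an illustration only: the IGFS name "igfs", the DUAL_SYNC mode, and the HDFS URI hdfs://localhost:9000/ are assumed values and should be replaced by whatever the actual example file and cluster use.

<!-- Sketch of an Ignite node configuration exposing IGFS as a caching layer over HDFS. -->
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="fileSystemConfiguration">
        <list>
            <bean class="org.apache.ignite.configuration.FileSystemConfiguration">
                <!-- IGFS name used in igfs:// URIs, e.g. igfs://igfs@localhost:10500/ -->
                <property name="name" value="igfs"/>
                <!-- DUAL mode keeps IGFS in front of the secondary (persistent) file system. -->
                <property name="defaultMode" value="DUAL_SYNC"/>
                <property name="secondaryFileSystem">
                    <bean class="org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem">
                        <property name="fileSystemFactory">
                            <bean class="org.apache.ignite.hadoop.fs.CachingHadoopFileSystemFactory">
                                <!-- URI of the underlying HDFS namenode (assumed value). -->
                                <property name="uri" value="hdfs://localhost:9000/"/>
                            </bean>
                        </property>
                    </bean>
                </property>
            </bean>
        </list>
    </property>
</bean>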
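
Since the attached core-site.xml and mapred-site.xml are not shown inline, here is a hedged sketch of what a client-side Hadoop configuration for running MapReduce over Ignite typically contains. The host names and ports (localhost:10500 for the IGFS IPC endpoint, localhost:11211 for the Ignite job submission address) are assumptions and must match the node's actual settings.

<!-- core-site.xml: point the default file system at IGFS instead of HDFS directly. -->
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>igfs://igfs@localhost:10500</value>
    </property>
    <!-- Hadoop FileSystem (v1) implementation for the igfs:// scheme. -->
    <property>
        <name>fs.igfs.impl</name>
        <value>org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem</value>
    </property>
    <!-- AbstractFileSystem (v2) implementation for the igfs:// scheme. -->
    <property>
        <name>fs.AbstractFileSystem.igfs.impl</name>
        <value>org.apache.ignite.hadoop.fs.v2.IgniteHadoopFileSystem</value>
    </property>
</configuration>

<!-- mapred-site.xml: run MapReduce jobs on the Ignite in-memory engine. -->
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>ignite</value>
    </property>
    <!-- Address of an Ignite node accepting job submissions (assumed host/port). -->
    <property>
        <name>mapreduce.jobtracker.address</name>
        <value>localhost:11211</value>
    </property>
</configuration>

With a layout like this, passing the directory via hadoop --config <dir>, as in the command above, makes the job pick up these Ignite-specific settings instead of the cluster's default ones.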

Sent from the Apache Ignite Users mailing list archive at Nabble.com.