S3AFileSystem as IGFS secondary file system

otorreno

Hi,

I am struggling to get the S3AFileSystem configured as an IGFS secondary
file system.

I am using IGFS as my default file system, and I do not want to have an HDFS
cluster up and running alongside the IGFS one.

I have been able to reproduce the steps described at
https://apacheignite-fs.readme.io/docs/secondary-file-system, but that is
not the behaviour I am looking for.

What I want to do is have an instance of S3AFileSystem, which is an
implementation of the Hadoop FileSystem, and configure IGFS to use it as the
secondary file system.
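
In plain Hadoop I would obtain such an instance along the following lines
(just a sketch to illustrate what I mean; the class name, bucket name and
credentials are placeholders):

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class S3aSmokeTest {
        public static void main(String[] args) throws Exception {
            // Standard Hadoop S3A credentials (placeholders).
            Configuration conf = new Configuration();
            conf.set("fs.s3a.access.key", "<access-key>");
            conf.set("fs.s3a.secret.key", "<secret-key>");

            // The s3a:// scheme resolves to org.apache.hadoop.fs.s3a.S3AFileSystem.
            FileSystem s3 = FileSystem.get(URI.create("s3a://my-bucket/"), conf);
            System.out.println(s3.getClass().getName());
            System.out.println(s3.exists(new Path("/")));
        }
    }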

Is it possible?

Best,
Oscar



Re: S3AFileSystem as IGFS secondary file system

otorreno

I have been able to do it using the following lines:
    // Factory that builds the Hadoop FileSystem from the given config file(s).
    BasicHadoopFileSystemFactory f = new BasicHadoopFileSystemFactory();
    f.setConfigPaths("cfg.xml");

    // Use the Hadoop file system produced by the factory as the IGFS secondary file system.
    IgniteHadoopIgfsSecondaryFileSystem sec = new IgniteHadoopIgfsSecondaryFileSystem();
    sec.setFileSystemFactory(f);

    fileSystemCfg.setSecondaryFileSystem(sec);
    fileSystemCfg.setDefaultMode(IgfsMode.DUAL_ASYNC); // changes are propagated to S3 asynchronously
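
For completeness, fileSystemCfg in the snippet above is just a plain
FileSystemConfiguration registered on the Ignite node, roughly like this
(the file system name "igfs" is simply what I use locally):

    // Register the IGFS configuration and start the node.
    FileSystemConfiguration fileSystemCfg = new FileSystemConfiguration();
    fileSystemCfg.setName("igfs");

    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setFileSystemConfiguration(fileSystemCfg);

    Ignition.start(cfg);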

The "cfg.xml" file contains the S3 access and secret keys, and the bucket
URI. However, I would like to set the configuration in the code not in a
configuration file. Taking a look at the BasicHadoopFileSystemFactory class
you can only specify a file path. Is there any reason to not allow passing a
Hadoop Configuration instance?
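
In the meantime, one workaround that comes to mind is to build the Hadoop
Configuration in code and dump it to a temporary XML file that is then
handed to setConfigPaths() (a rough, untested sketch; the helper class name
is mine):

    import java.io.IOException;
    import java.io.OutputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.ignite.hadoop.fs.BasicHadoopFileSystemFactory;

    public class S3aFactoryBuilder {
        /** Writes an in-code Hadoop Configuration to a temp file and points the factory at it. */
        public static BasicHadoopFileSystemFactory fromConfiguration(Configuration hadoopCfg)
            throws IOException {
            Path tmpCfg = Files.createTempFile("igfs-s3a-", ".xml");
            try (OutputStream out = Files.newOutputStream(tmpCfg)) {
                hadoopCfg.writeXml(out); // serialize the programmatic settings to Hadoop's XML format
            }

            BasicHadoopFileSystemFactory f = new BasicHadoopFileSystemFactory();
            f.setConfigPaths(tmpCfg.toString()); // the factory only accepts config file paths
            return f;
        }
    }

The same keys that would otherwise go into cfg.xml (fs.defaultFS with the
s3a:// bucket URI, fs.s3a.access.key, fs.s3a.secret.key) can then be set on
the Configuration in code, and the resulting factory passed to
setFileSystemFactory(). Still, a setter accepting a Configuration directly
would be much cleaner.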

Best,
Oscar


