Accessing Apache Hadoop from Karaf

In Hadoop, the core of a cluster is its distributed, replicated filesystem. We have HDFS running and can access it from the command line as a regular user. However, reaching it from an OSGi container turns out to be slightly more involved than just writing the Java components.

Hadoop requires us to provide configuration metadata for our cluster that can be looked up as file or classpath resources. In this recipe, we will simply copy the HDFS site-specific files we created earlier in the chapter to our src/main/resources folder.
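To make the copy step concrete, here is a minimal shell sketch. The paths are assumptions: HADOOP_CONF_DIR stands in for wherever your cluster's configuration lives, and BUNDLE_DIR for your bundle project. The touch lines exist only so the snippet runs stand-alone; on a real cluster, the site files already hold the definitions we wrote earlier in the chapter.

```shell
# Hypothetical locations -- point HADOOP_CONF_DIR at your cluster's
# configuration directory and BUNDLE_DIR at your bundle project.
HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-$(mktemp -d)}
BUNDLE_DIR=${BUNDLE_DIR:-$(mktemp -d)/my-hdfs-bundle}

# Stand-alone demo only: create placeholder site files. On a real
# cluster these files already exist and contain your definitions.
touch "$HADOOP_CONF_DIR/core-site.xml" "$HADOOP_CONF_DIR/hdfs-site.xml"

# Copy the site-specific definitions into src/main/resources so they
# end up on the bundle classpath, where Hadoop can look them up as
# classpath resources.
mkdir -p "$BUNDLE_DIR/src/main/resources"
cp "$HADOOP_CONF_DIR/core-site.xml" \
   "$HADOOP_CONF_DIR/hdfs-site.xml" \
   "$BUNDLE_DIR/src/main/resources/"

ls "$BUNDLE_DIR/src/main/resources"
```

Once the files sit under src/main/resources, a standard Maven build packages them into the bundle JAR, so no extra wiring is needed for the site-specific half of the configuration.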

We will also include the default metadata definitions in our resources by copying them from a dependency, and finally, we'll allow our bundle classloader to perform fully ...
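As a sketch of the "copy from a dependency" step (assumptions, not necessarily the book's exact build), the maven-dependency-plugin's unpack goal can extract core-default.xml from the hadoop-common artifact and hdfs-default.xml from hadoop-hdfs into the build output, so the defaults also land on the bundle classpath. The version property and execution id are placeholders.

```xml
<!-- Sketch: unpack Hadoop's default definitions onto the classpath.
     hadoop.version is a placeholder property. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <executions>
    <execution>
      <id>copy-hadoop-defaults</id>
      <phase>generate-resources</phase>
      <goals>
        <goal>unpack</goal>
      </goals>
      <configuration>
        <artifactItems>
          <artifactItem>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>${hadoop.version}</version>
            <includes>core-default.xml</includes>
          </artifactItem>
          <artifactItem>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>${hadoop.version}</version>
            <includes>hdfs-default.xml</includes>
          </artifactItem>
        </artifactItems>
        <outputDirectory>${project.build.outputDirectory}</outputDirectory>
      </configuration>
    </execution>
  </executions>
</plugin>
```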
