Beyond Basics

Beyond Basics pages go into custom configurations of Ozone, including how to run Ozone concurrently with an existing HDFS cluster. These pages also take a deep dive into how to run profilers and how to leverage the tracing support built into Ozone.

Running with HDFS

Ozone is designed to work with HDFS, so it is easy to deploy Ozone in an existing HDFS cluster. The container manager part of Ozone can run inside DataNodes as a pluggable module or as a standalone component. This document describes how to start it as an HDFS DataNode plugin. To activate Ozone, you should define the service plugin implementation class. Important: it should be added to hdfs-site.xml, as the plugin must be activated as part of the normal HDFS DataNode bootstrap.
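As a sketch of that configuration, the plugin class is registered via the DataNode plugin property in hdfs-site.xml (the class name below is the Ozone HDDS DataNode service; verify it against the version of Ozone you deploy):

```xml
<!-- hdfs-site.xml: register the Ozone HDDS service as a DataNode plugin -->
<property>
  <name>dfs.datanode.plugins</name>
  <value>org.apache.hadoop.ozone.HddsDatanodeService</value>
</property>
```

With this in place, restarting the HDFS DataNodes brings up the Ozone container manager inside each DataNode process.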

Ozone Containers

Docker is used heavily in Ozone development, with three principal use cases:

dev: We use Docker to start local pseudo-clusters (Docker provides a unified environment, and no image creation is required).

test: We create Docker images from the dev branches to test Ozone in Kubernetes and other container orchestrator systems.

We also provide apache/ozone images for each release to make evaluation of Ozone easier. These images are not created for production usage.
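For the evaluation use case, a minimal sketch of trying a released image looks like the following (the published port below assumes the S3 gateway's default port of 9878; check the image documentation for the release you pull):

```shell
# Pull the released evaluation image (not for production use).
docker pull apache/ozone

# Start an all-in-one evaluation container, exposing the assumed
# S3 gateway port 9878 on the host.
docker run -p 9878:9878 apache/ozone
```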

Docker Cheat Sheet

In the compose directory of the Ozone distribution there are multiple pseudo-cluster setups which can be used to run Ozone in different ways (for example: a secure cluster, with tracing enabled, with Prometheus, etc.). Unless the usage is documented in a specific directory, the default usage is the following:

cd compose/ozone
docker-compose up -d

The data of the containers is ephemeral and is deleted together with the Docker volumes:

docker-compose down

Useful Docker & Ozone Commands

If you make any modifications to Ozone, the simplest way to test it is to run freon and the unit tests.
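As a sketch of that freon workflow (assuming the default compose/ozone cluster and that an `scm` service is defined there, as in the stock compose files), the load generator can be invoked inside a running container:

```shell
# Bring up the default pseudo-cluster (assumed compose/ozone layout).
cd compose/ozone
docker-compose up -d

# Run freon, Ozone's load generator, inside the scm container to
# write some random keys; flag names are from the ozone freon CLI.
docker-compose exec scm ozone freon randomkeys \
    --numOfVolumes=1 --numOfBuckets=1 --numOfKeys=100

# Tear the cluster down; the ephemeral data is deleted with the volumes.
docker-compose down
```

freon prints a summary of the volumes, buckets, and keys it created, which is a quick smoke test that the modified build still serves writes.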
